[ { "msg_contents": "Hello.\n\nI see the following description in the doc.\n\nhttps://www.postgresql.org/docs/13/ssl-tcp.html\n\nIntermediate certificates that chain up to existing root certificates\ncan also appear in the ssl_ca_file file if you wish to avoid storing\nthem on clients (assuming the root and intermediate certificates were\ncreated with v3_ca extensions). Certificate Revocation List (CRL)\nentries are also checked if the parameter ssl_crl_file is set. (See\nhttp://h41379.www4.hpe.com/doc/83final/ba554_90007/ch04s02.html for\ndiagrams showing SSL certificate usage.)\n\nI follwed the URL above and saw the \"Support and other resources\" page\nof the document \"OpeNVMS Systems Documemtation Index page\".\n\nFWIW the folloing URL shows \"HP Open Source Security for OpenVMS\nVolume 2: HP SSL for Open VMS\", which seems to be the originally\nintended document..\n\nhttps://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c04622320\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n\n", "msg_date": "Thu, 09 Jul 2020 16:12:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Stale external URL in doc?" }, { "msg_contents": "> On 9 Jul 2020, at 09:12, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> Hello.\n> \n> I see the following description in the doc.\n> \n> https://www.postgresql.org/docs/13/ssl-tcp.html\n> \n> Intermediate certificates that chain up to existing root certificates\n> can also appear in the ssl_ca_file file if you wish to avoid storing\n> them on clients (assuming the root and intermediate certificates were\n> created with v3_ca extensions). Certificate Revocation List (CRL)\n> entries are also checked if the parameter ssl_crl_file is set. (See\n> http://h41379.www4.hpe.com/doc/83final/ba554_90007/ch04s02.html for\n> diagrams showing SSL certificate usage.)\n> \n> I follwed the URL above and saw the \"Support and other resources\" page\n> of the document \"OpeNVMS Systems Documemtation Index page\".\n\nRight, it's redirecting there now. The same goes for a link to hpe.com on\nhttps://www.postgresql.org/docs/13/libpq-ssl.html which too is redirected\nto a larger documentation set.\n\n> FWIW the folloing URL shows \"HP Open Source Security for OpenVMS\n> Volume 2: HP SSL for Open VMS\", which seems to be the originally\n> intended document..\n> \n> https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c04622320\n\nThe intended document is a page which is more concise and to the point, the\nfull OpenVMS SSL documentation set doesn't really fit the purpose for this\nlink.\n\nAs a short term fix we should either a) remove these links completely or b)\nlink to archived copies of the pages on archive.org; or c) find a more\nappropriate pages to link to. A quick search didn't turn up anything I would\nprefer for (c), and I'm not sure what he legality of linking to a cached copy\nis, so I would advocate for (a).\n\nLonger term we should try to incorporate (some of) these diagrams and content\ninto our own documentation now that we have proper capability for inline\nimages.\n\ncheers ./daniel\n\n\n", "msg_date": "Thu, 9 Jul 2020 09:46:44 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" 
}, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> As a short term fix we should either a) remove these links completely or b)\n> link to archived copies of the pages on archive.org; or c) find a more\n> appropriate pages to link to. A quick search didn't turn up anything I would\n> prefer for (c), and I'm not sure what he legality of linking to a cached copy\n> is, so I would advocate for (a).\n\n+1. It should have been obvious just from the spelling of this URL that\nit wasn't intended to be a long term stable location. Digging in the\ngit history shows we've already updated it twice, and I wonder how many\nchanges there were that we didn't notice.\n\nJust reverting bbd3bdba3 seems appropriate to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Jul 2020 09:51:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "On Thu, Jul 9, 2020 at 3:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> > As a short term fix we should either a) remove these links completely or\n> b)\n> > link to archived copies of the pages on archive.org; or c) find a more\n> > appropriate pages to link to. A quick search didn't turn up anything I\n> would\n> > prefer for (c), and I'm not sure what he legality of linking to a cached\n> copy\n> > is, so I would advocate for (a).\n>\n> +1. It should have been obvious just from the spelling of this URL that\n> it wasn't intended to be a long term stable location. Digging in the\n> git history shows we've already updated it twice, and I wonder how many\n> changes there were that we didn't notice.\n>\n> Just reverting bbd3bdba3 seems appropriate to me.\n>\n\n+1.\n\nIf we want to keep a set of such links, probably the wiki is a better place\nas more people can easily fix them there.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Jul 9, 2020 at 3:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Daniel Gustafsson <daniel@yesql.se> writes:\n> As a short term fix we should either a) remove these links completely or b)\n> link to archived copies of the pages on archive.org; or c) find a more\n> appropriate pages to link to.  A quick search didn't turn up anything I would\n> prefer for (c), and I'm not sure what he legality of linking to a cached copy\n> is, so I would advocate for (a).\n\n+1.  It should have been obvious just from the spelling of this URL that\nit wasn't intended to be a long term stable location.  Digging in the\ngit history shows we've already updated it twice, and I wonder how many\nchanges there were that we didn't notice.\n\nJust reverting bbd3bdba3 seems appropriate to me.+1.If we want to keep a set of such links, probably the wiki is a better place as more people can easily fix them there. --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Thu, 9 Jul 2020 17:54:53 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" 
}, { "msg_contents": "On 2020-Jul-09, Magnus Hagander wrote:\n\n> If we want to keep a set of such links, probably the wiki is a better place\n> as more people can easily fix them there.\n\nOr, since our docs have diagram capabilities now, we can make our own\ndiagram.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Jul 2020 12:36:43 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "On Thu, Jul 9, 2020 at 6:36 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2020-Jul-09, Magnus Hagander wrote:\n>\n> > If we want to keep a set of such links, probably the wiki is a better\n> place\n> > as more people can easily fix them there.\n>\n> Or, since our docs have diagram capabilities now, we can make our own\n> diagram.\n>\n\nAbsolutely. I meant more as a general thing if we want to refer to websites\noutside of our control for other things.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Jul 9, 2020 at 6:36 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:On 2020-Jul-09, Magnus Hagander wrote:\n\n> If we want to keep a set of such links, probably the wiki is a better place\n> as more people can easily fix them there.\n\nOr, since our docs have diagram capabilities now, we can make our own\ndiagram.Absolutely. I meant more as a general thing if we want to refer to websites outside of our control for other things. --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Thu, 9 Jul 2020 20:03:50 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "> On 9 Jul 2020, at 17:54, Magnus Hagander <magnus@hagander.net> wrote:\n> \n> On Thu, Jul 9, 2020 at 3:52 PM Tom Lane <tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>> wrote:\n> Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> writes:\n> > As a short term fix we should either a) remove these links completely or b)\n> > link to archived copies of the pages on archive.org <http://archive.org/>; or c) find a more\n> > appropriate pages to link to. A quick search didn't turn up anything I would\n> > prefer for (c), and I'm not sure what he legality of linking to a cached copy\n> > is, so I would advocate for (a).\n> \n> +1. It should have been obvious just from the spelling of this URL that\n> it wasn't intended to be a long term stable location. Digging in the\n> git history shows we've already updated it twice, and I wonder how many\n> changes there were that we didn't notice.\n> \n> Just reverting bbd3bdba3 seems appropriate to me.\n> \n> +1.\n> \n> If we want to keep a set of such links, probably the wiki is a better place as more people can easily fix them there.\n\nTaking a look at other links to external resources, most links seemed to\nresolve still (but I didn't test all of them). I did find another one on the\nGEQO page which is now dead without the content available elsewhere, as well as\na larger problem with the AIX references.\n\nWe have a list of links to the AIX 6.1 documentation which no longer works as\nIBM only provides docset PDFs for 6.1. Looking that 7.x documentation they\nhave reorganized enough to make the older links not directly translatable. 
I\ndo wonder if updating this list is worth the effort, or if it will only lead to\nus revisiting this once IBM does another site change.\n\nThe attached suggestion removes the reported SSL links, the FAQ linked to on\nGEQO and all the IBM links, fully realizing that it might be controversial to\nsome extent.\n\ncheers ./daniel", "msg_date": "Fri, 10 Jul 2020 00:06:42 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Taking a look at other links to external resources, most links seemed to\n> resolve still (but I didn't test all of them). I did find another one on the\n> GEQO page which is now dead without the content available elsewhere, as well as\n> a larger problem with the AIX references.\n\n> We have a list of links to the AIX 6.1 documentation which no longer works as\n> IBM only provides docset PDFs for 6.1. Looking that 7.x documentation they\n> have reorganized enough to make the older links not directly translatable. I\n> do wonder if updating this list is worth the effort, or if it will only lead to\n> us revisiting this once IBM does another site change.\n\n> The attached suggestion removes the reported SSL links, the FAQ linked to on\n> GEQO and all the IBM links, fully realizing that it might be controversial to\n> some extent.\n\n+1 for just deleting all of it. I don't think we need to be telling users\nof obsolete AIX versions how to run their systems. The comp.ai.genetic\nFAQ link might be more of a loss, but on the other hand I'd be willing to\nbet it wasn't very up to date anymore. Netnews has been moribund for a\nlong time :-(\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Jul 2020 18:51:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "On 2020-Jul-10, Daniel Gustafsson wrote:\n\n> Taking a look at other links to external resources, most links seemed to\n> resolve still (but I didn't test all of them). I did find another one on the\n> GEQO page which is now dead without the content available elsewhere, as well as\n> a larger problem with the AIX references.\n\nUm, the comp.ai.genetic FAQ can still be found, eg. \nhttp://www.faqs.org/faqs/ai-faq/genetic/part1/\n\n\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Jul 2020 20:08:38 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jul-10, Daniel Gustafsson wrote:\n>> Taking a look at other links to external resources, most links seemed to\n>> resolve still (but I didn't test all of them). I did find another one on the\n>> GEQO page which is now dead without the content available elsewhere, as well as\n>> a larger problem with the AIX references.\n\n> Um, the comp.ai.genetic FAQ can still be found, eg. 
\n> http://www.faqs.org/faqs/ai-faq/genetic/part1/\n\nSo it is, although that also shows it hasn't been updated since 2001.\n\nOTOH, that vintage of info is probably just fine for understanding GEQO.\n\nI'll go update that pointer and remove the other links per Daniel's\npatch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Jul 2020 12:28:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "> On 10 Jul 2020, at 18:28, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n\n>> Um, the comp.ai.genetic FAQ can still be found, eg. \n>> http://www.faqs.org/faqs/ai-faq/genetic/part1/\n> \n> So it is, although that also shows it hasn't been updated since 2001.\n\nAh, I missed the alternative source.\n\n> I'll go update that pointer and remove the other links per Daniel's\n> patch.\n\nThanks for the fixup.\n\ncheers ./daniel\n\n\n", "msg_date": "Fri, 10 Jul 2020 23:41:38 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "On Fri, Jul 10, 2020 at 10:07 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> (but I didn't test all of them)\n\nCave-person shell script time:\n\nfor url in ` git grep 'url=\"http' | sed 's/.*url=\"//;s/\".*//' | sort | uniq `\ndo\n if ! curl --output /dev/null --silent --head --fail \"$url\"\n then\n echo \"bad URL: $url\"\n fi\ndone\n\nbad URL: https://mingw-w64.org/\nbad URL: https://msdn.microsoft.com/en-us/library/aa380493%28VS.85%29.aspx\nbad URL: https://ssl.icu-project.org/icu-bin/locexp\nbad URL: https://www.ismn-international.org/ranges.html\n\nThe Microsoft one is OK, it's a redirect, but the redirect target\nlooks like a more permanent URL to me so maybe we should change it.\nThe others required minor manual sleuthing to correct; I hope I found\nthe correct ISN ranges page. Please see attached.\n\nLooking at the ICU URL, I found a couple like that in our source tree,\nand fixed those too, including one used by\nsrc/backend/utils/mb/Unicode/Makefile to fetch source data which has\nmoved (http://site.icu-project.org/repository says \"Announcement\n07/16/2018: The ICU source code repository has been migrated from\nSubversion to Git, and is now hosted on GitHub.\").", "msg_date": "Sat, 11 Jul 2020 09:42:17 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> The Microsoft one is OK, it's a redirect, but the redirect target\n> looks like a more permanent URL to me so maybe we should change it.\n\n+1\n\n> The others required minor manual sleuthing to correct; I hope I found\n> the correct ISN ranges page. Please see attached.\n\nI didn't actually check any of these, but they look like sane changes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Jul 2020 17:47:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "> On 10 Jul 2020, at 23:47, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n\n>> The others required minor manual sleuthing to correct; I hope I found\n>> the correct ISN ranges page. 
Please see attached.\n> \n> I didn't actually check any of these, but they look like sane changes.\n\n+1, looks good, thanks!\n\ncheers ./daniel\n\n\n", "msg_date": "Fri, 10 Jul 2020 23:55:59 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "On Sat, Jul 11, 2020 at 9:56 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 10 Jul 2020, at 23:47, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> >> The others required minor manual sleuthing to correct; I hope I found\n> >> the correct ISN ranges page. Please see attached.\n> >\n> > I didn't actually check any of these, but they look like sane changes.\n>\n> +1, looks good, thanks!\n\nIs it OK that I see the following warning many times when running\n\"make\" under src/backend/utils/mb/Unicode? It looks like this code is\nfrom commit 1de9cc0d. Horiguchi-san, do you think something changed\n(input data format, etc) since you wrote it, or maybe some later\nchanges just made our perl scripts more picky about warnings?\n\n Use of uninitialized value $val in printf at convutils.pm line 612.\n\n\n", "msg_date": "Sat, 11 Jul 2020 15:25:54 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "> On 11 Jul 2020, at 05:25, Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Is it OK that I see the following warning many times when running\n> \"make\" under src/backend/utils/mb/Unicode? It looks like this code is\n> from commit 1de9cc0d. Horiguchi-san, do you think something changed\n> (input data format, etc) since you wrote it, or maybe some later\n> changes just made our perl scripts more picky about warnings?\n> \n> Use of uninitialized value $val in printf at convutils.pm line 612.\n\nConfirmed here as well, combined with the below ones for a few of the files:\n\nUse of uninitialized value in hash element at convutils.pm line 448.\nUse of uninitialized value $b1root in printf at convutils.pm line 558.\nUse of uninitialized value $b1_lower in printf at convutils.pm line 560.\nUse of uninitialized value $b1_upper in printf at convutils.pm line 561.\nUse of uninitialized value $b3root in printf at convutils.pm line 570.\nUse of uninitialized value $b3_1_lower in printf at convutils.pm line 572.\nUse of uninitialized value $b3_1_upper in printf at convutils.pm line 573.\nUse of uninitialized value $b3_2_lower in printf at convutils.pm line 574.\nUse of uninitialized value $b3_2_upper in printf at convutils.pm line 575.\nUse of uninitialized value $b3_3_lower in printf at convutils.pm line 576.\nUse of uninitialized value $b3_3_upper in printf at convutils.pm line 577.\nUse of uninitialized value $val in printf at convutils.pm line 612.\n\ncheers ./daniel\n\n\n", "msg_date": "Mon, 13 Jul 2020 11:36:17 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "At Mon, 13 Jul 2020 11:36:17 +0200, Daniel Gustafsson <daniel@yesql.se> wrote in \n> > On 11 Jul 2020, at 05:25, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> > Is it OK that I see the following warning many times when running\n> > \"make\" under src/backend/utils/mb/Unicode? It looks like this code is\n> > from commit 1de9cc0d. 
Horiguchi-san, do you think something changed\n> > (input data format, etc) since you wrote it, or maybe some later\n> > changes just made our perl scripts more picky about warnings?\n> > \n> > Use of uninitialized value $val in printf at convutils.pm line 612.\n> \n> Confirmed here as well, combined with the below ones for a few of the files:\n> \n> Use of uninitialized value in hash element at convutils.pm line 448.\n> Use of uninitialized value $b1root in printf at convutils.pm line 558.\n> Use of uninitialized value $b1_lower in printf at convutils.pm line 560.\n\nMmm. I see the same, too. I'm looking into that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 14 Jul 2020 09:00:11 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "It is found to be a time capsule full of worms..\n\nAt Tue, 14 Jul 2020 09:00:11 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > Use of uninitialized value $b1_lower in printf at convutils.pm line 560.\n> \n> Mmm. I see the same, too. I'm looking into that.\n\nThere are three easy-to-fix issues:\n\n1. The script set utilized undef as zeros, so most of them are fixed\n by using zero for undefs.\n\n2. Some Japanese-related converter scripts seem to be affected by a\n change of regexp greediness and easily fixed.\n\n3. I got a certificate error for ssl.icu-project.org and found that\n the name is changed to icu-project.org. \n\nAnd one issue that I'm not sure how we shold treat this:\n\nA. I didn't find the files gb-18030-2000.xml and windows-949-2000.xml\n in the ICU site. We have our own copy in our repository so it's not\n a serious problem but I'm not sure what we should do for this.\n\n I found CP949.TXT for windows-949-2000.xml but the former is missing\n mappings for certaion code ranges (c9xx and fexx).\n\n\nThe attached is the fix for 1 to 3 above. It doesn't contain changes\nin .map files.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 14 Jul 2020 12:26:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "On Tue, Jul 14, 2020 at 3:27 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> A. I didn't find the files gb-18030-2000.xml and windows-949-2000.xml\n> in the ICU site. We have our own copy in our repository so it's not\n> a serious problem but I'm not sure what we should do for this.\n\nThe patch I posted earlier fixes that problem (their source repository moved).\n\n\n", "msg_date": "Tue, 14 Jul 2020 15:40:41 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "At Tue, 14 Jul 2020 15:40:41 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Tue, Jul 14, 2020 at 3:27 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > A. I didn't find the files gb-18030-2000.xml and windows-949-2000.xml\n> > in the ICU site. 
We have our own copy in our repository so it's not\n> > a serious problem but I'm not sure what we should do for this.\n> \n> The patch I posted earlier fixes that problem (their source repository moved).\n\n- $(DOWNLOAD) https://ssl.icu-project.org/repos/icu/data/trunk/charset/data/xml/$(@F)\n+ $(DOWNLOAD) https://raw.githubusercontent.com/unicode-org/icu-data/master/charset/data/xml/$(@F)\n\nWow. The URL works and makes no difference in related map files.\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 14 Jul 2020 14:03:32 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "> On 10 Jul 2020, at 23:55, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 10 Jul 2020, at 23:47, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Thomas Munro <thomas.munro@gmail.com> writes:\n> \n>>> The others required minor manual sleuthing to correct; I hope I found\n>>> the correct ISN ranges page. Please see attached.\n>> \n>> I didn't actually check any of these, but they look like sane changes.\n> \n> +1, looks good, thanks!\n\nSince this is still in flight, I'm tacking on a few more in the attached diff\nthat I stumbled across. gnu.org will redirect from http to https so we might\nas well have that in our docs from the start.\n\ncheers ./daniel", "msg_date": "Thu, 16 Jul 2020 14:09:17 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "At Thu, 16 Jul 2020 14:09:17 +0200, Daniel Gustafsson <daniel@yesql.se> wrote in \n> > On 10 Jul 2020, at 23:55, Daniel Gustafsson <daniel@yesql.se> wrote:\n> > \n> >> On 10 Jul 2020, at 23:47, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Thomas Munro <thomas.munro@gmail.com> writes:\n> > \n> >>> The others required minor manual sleuthing to correct; I hope I found\n> >>> the correct ISN ranges page. Please see attached.\n> >> \n> >> I didn't actually check any of these, but they look like sane changes.\n> > \n> > +1, looks good, thanks!\n> \n> Since this is still in flight, I'm tacking on a few more in the attached diff\n> that I stumbled across. gnu.org will redirect from http to https so we might\n> as well have that in our docs from the start.\n\nI checked through http:// URLs in the documentation.\n\n1. 505 Not found (0001-Fix-505-URL.patch)\n\n (Shows login page instead) \n http://citeseer.ist.psu.edu/seshadri95generalized.html \n => https://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.40.5740\n\n2. moved or rearranged (0002-Fixed-URLs-mofed-or-rearranged.patch)\n\n http://initd.org/psycopg/\n => https://www.psycopg.org/\n\n http://db.cs.berkeley.edu => https://dsf.berkeley.edu\n http://db.cs.berkeley.edu/jmh/\n http://db.cs.berkeley.edu/papers/\n http://db.cs.berkeley.edu/papers/ERL-M85-95.pdf\n http://db.cs.berkeley.edu/papers/ERL-M87-06.pdf\n http://db.cs.berkeley.edu/papers/ERL-M87-13.pdf\n http://db.cs.berkeley.edu/papers/ERL-M89-17.pdf\n http://db.cs.berkeley.edu/papers/ERL-M89-82.pdf\n http://db.cs.berkeley.edu/papers/ERL-M90-34.pdf\n http://db.cs.berkeley.edu/papers/ERL-M90-36.pdf\n http://db.cs.berkeley.edu/papers/UCB-MS-zfong.pdf\n http://db.cs.berkeley.edu/postgres.html\n\n (I counldn't find the eqquivalent for http://gist.cs.berkeley.edu/\n in dsf.berkeley.edu)\n\n http://json.org => https://www.json.org (Redirects to localized page)\n\n\n3. 
Has the same page for https:// (0003-change-http-URLs-to-https.patch)\n http://cve.mitre.org/\n http://jlcooke.ca/random/\n http://postgis.net/\n http://pqxx.org/\n http://pubs.opengroup.org/onlinepubs/009695399/functions/strftime.html\n http://snowballstem.org/\n http://sourceware.org/systemtap/\n http://standards.ieee.org/\n http://standards.iso.org/ittf/PubliclyAvailableStandards/c067367_ISO_IEC_TR_19075-6_2017.zip\n http://web.mit.edu/Kerberos/dist/index.html\n http://www.issn.org/\n http://www.iusmentis.com/security/passphrasefaq/\n http://www.loc.gov/standards/iso639-2/php/English_list.php\n http://www.npgsql.org/\n http://www.openwall.com/crypt/\n http://www.perl.org\n http://www.red3d.com/cwr/evolve.html\n http://www.slony.info\n http://www.tcl.tk/\n http://www.zlib.net\n http://zlatkovic.com/pub/libxml\n (http://www.gnu.org/software/gettext/)\n (http://www.gnu.org/software/libtool/)\n\n4. Has https:// page with some troubles.\n http://www.sunfreeware.com (insecure certificate)\n http://xmlsoft.org (insercure certificate)\n http://xmlsoft.org/\n http://xmlsoft.org/XSLT/\n http://www.tpc.org/ (private certificate and ... looks odd..)\n\n\n5. Seems not having https pages.\n http://gist.cs.berkeley.edu/\n http://gnuwin32.sourceforge.net\n http://newbiedoc.sourceforge.net/metadoc/docbook-guide.html\n http://meteora.ucsd.edu/s2k/s2k_home.html\n http://sg.danny.cz/sg/sdparm.html\n http://userguide.icu-project.org/collation/api\n http://userguide.icu-project.org/locale\n http://world.std.com/~reinhold/diceware.html\n http://www.faqs.org/faqs/ai-faq/genetic/part1/\n http://www.interhack.net/people/cmcurtin/snake-oil-faq.html\n http://www.mingw.org/\n http://www.mingw.org/wiki/MSYS\n http://www.ossp.org/pkg/lib/uuid/\n http://www.sai.msu.su/~megera/oddmuse/index.cgi/Gin\n http://www.sai.msu.su/~megera/postgres/gist/\n http://www.sai.msu.su/~megera/postgres/gist/papers/concurrency/access-methods-for-next-generation.pdf.gz\n http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/\n http://www.sai.msu.su/~megera/wiki/Gin\n http://www.sai.msu.su/~megera/wiki/spgist_dev\n http://xahlee.info/UnixResource_dir/_/ldpath.html\n\nI attached fixes for 1, 2 and 3, and not for 4. (5 doesn't need\nchanges).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 17 Jul 2020 12:13:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "On Fri, Jul 17, 2020 at 12:13:08PM +0900, Kyotaro Horiguchi wrote:\n> I checked through http:// URLs in the documentation.\n\nIt would be better to get all that fixed and backpatched. Is somebody\nalready looking into that?\n--\nMichael", "msg_date": "Fri, 17 Jul 2020 14:03:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "On Fri, Jul 17, 2020 at 02:03:18PM +0900, Michael Paquier wrote:\n> It would be better to get all that fixed and backpatched. Is somebody\n> already looking into that?\n\nI have been through this set, and applied the changes as of 045d03f & \nfriends. There was an extra URL broken in 9.5 and 9.6 related to the\npassphrase FAQ.\n--\nMichael", "msg_date": "Sat, 18 Jul 2020 22:48:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" 
}, { "msg_contents": "On Thu, Jul 9, 2020 at 09:51:51AM -0400, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> > As a short term fix we should either a) remove these links completely or b)\n> > link to archived copies of the pages on archive.org; or c) find a more\n> > appropriate pages to link to. A quick search didn't turn up anything I would\n> > prefer for (c), and I'm not sure what he legality of linking to a cached copy\n> > is, so I would advocate for (a).\n> \n> +1. It should have been obvious just from the spelling of this URL that\n> it wasn't intended to be a long term stable location. Digging in the\n> git history shows we've already updated it twice, and I wonder how many\n> changes there were that we didn't notice.\n> \n> Just reverting bbd3bdba3 seems appropriate to me.\n\nYes, I was keeping those URLs specifically to document intermediate\ncertificate usage, but now that we have documentation of how to set up\nintermediates, we don't need it anymore.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 20 Jul 2020 15:39:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Stale external URL in doc?" }, { "msg_contents": "At Sat, 18 Jul 2020 22:48:47 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Jul 17, 2020 at 02:03:18PM +0900, Michael Paquier wrote:\n> > It would be better to get all that fixed and backpatched. Is somebody\n> > already looking into that?\n> \n> I have been through this set, and applied the changes as of 045d03f & \n> friends. There was an extra URL broken in 9.5 and 9.6 related to the\n> passphrase FAQ.\n\nThanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 22 Jul 2020 11:37:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Stale external URL in doc?" } ]
[ { "msg_contents": "In PG13, we added the ability to add backtraces to the log output. \nAfter some practical experience with it, I think the order in which the \nBACKTRACE and the LOCATION fields are printed is wrong. I propose we \nput the LOCATION field before the BACKTRACE field, not after. This \nmakes more sense because the location is effectively at the lowest level \nof the backtrace.\n\nPatch attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 9 Jul 2020 11:17:32 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Log the location field before any backtrace" }, { "msg_contents": "> On 9 Jul 2020, at 11:17, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> In PG13, we added the ability to add backtraces to the log output. After some practical experience with it, I think the order in which the BACKTRACE and the LOCATION fields are printed is wrong. I propose we put the LOCATION field before the BACKTRACE field, not after. This makes more sense because the location is effectively at the lowest level of the backtrace.\n\nMakes sense, +1\n\ncheers ./daniel\n\n", "msg_date": "Thu, 9 Jul 2020 13:25:43 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Log the location field before any backtrace" }, { "msg_contents": "On 2020-Jul-09, Daniel Gustafsson wrote:\n\n> > On 9 Jul 2020, at 11:17, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> > \n> > In PG13, we added the ability to add backtraces to the log output. After some practical experience with it, I think the order in which the BACKTRACE and the LOCATION fields are printed is wrong. I propose we put the LOCATION field before the BACKTRACE field, not after. This makes more sense because the location is effectively at the lowest level of the backtrace.\n> \n> Makes sense, +1\n\nLikewise\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Jul 2020 12:31:38 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Log the location field before any backtrace" }, { "msg_contents": "On Thu, Jul 09, 2020 at 12:31:38PM -0400, Alvaro Herrera wrote:\n> On 2020-Jul-09, Daniel Gustafsson wrote:\n>> On 9 Jul 2020, at 11:17, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>>> \n>>> In PG13, we added the ability to add backtraces to the log\n>>> output. After some practical experience with it, I think the\n>>> order in which the BACKTRACE and the LOCATION fields are printed\n>>> is wrong. I propose we put the LOCATION field before the\n>>> BACKTRACE field, not after. This makes more sense because the\n>>> location is effectively at the lowest level of the backtrace. 
\n>> \n>> Makes sense, +1\n> \n> Likewise\n\n+1.\n--\nMichael", "msg_date": "Fri, 10 Jul 2020 11:04:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Log the location field before any backtrace" }, { "msg_contents": "On 2020-07-10 04:04, Michael Paquier wrote:\n> On Thu, Jul 09, 2020 at 12:31:38PM -0400, Alvaro Herrera wrote:\n>> On 2020-Jul-09, Daniel Gustafsson wrote:\n>>> On 9 Jul 2020, at 11:17, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>>>>\n>>>> In PG13, we added the ability to add backtraces to the log\n>>>> output. After some practical experience with it, I think the\n>>>> order in which the BACKTRACE and the LOCATION fields are printed\n>>>> is wrong. I propose we put the LOCATION field before the\n>>>> BACKTRACE field, not after. This makes more sense because the\n>>>> location is effectively at the lowest level of the backtrace.\n>>>\n>>> Makes sense, +1\n>>\n>> Likewise\n> \n> +1.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Jul 2020 08:36:07 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Log the location field before any backtrace" } ]
[ { "msg_contents": "Hi all,\n\n consider the following SQL:\n\n================================================================================================\n gpadmin=# explain (verbose, costs off)\n select * from t,\n (select a from generate_series(1, 1)a)x,\n (select a from generate_series(1, 1)a)y\n where ((x.a+y.a)/4.0) > random();\n QUERY PLAN\n----------------------------------------------------------------------------------------\n Nested Loop\n Output: t.a, t.b, a.a, a_1.a\n -> Nested Loop\n Output: a.a, a_1.a\n Join Filter: (((((a.a + a_1.a))::numeric / 4.0))::double precision > random())\n -> Function Scan on pg_catalog.generate_series a\n Output: a.a\n Function Call: generate_series(1, 1)\n -> Function Scan on pg_catalog.generate_series a_1\n Output: a_1.a\n Function Call: generate_series(1, 1)\n -> Seq Scan on public.t\n Output: t.a, t.b\n(13 rows)\n\n================================================================================================\n\n The where clause is \"pushed down to the x,y\" because it only references these two relations.\n\n The original query tree's join tree is like:\nFromExpr []\n [fromlist]\n RangeTblRef [rtindex=1]\n RangeTblRef [rtindex=4]\n RangeTblRef [rtindex=5]\n [quals]\n OpExpr [opno=674 opfuncid=297 opresulttype=16 opretset=false]\n FuncExpr [funcid=1746 funcresulttype=701 funcretset=false funcvariadic=false\n funcformat=COERCE_IMPLICIT_CAST]\n OpExpr [opno=1761 opfuncid=1727 opresulttype=1700 opretset=false]\n FuncExpr [funcid=1740 funcresulttype=1700 funcretset=false funcvariadic=false\n funcformat=COERCE_IMPLICIT_CAST]\n OpExpr [opno=551 opfuncid=177 opresulttype=23 opretset=false]\n Var [varno=4 varattno=1 vartype=23 varnoold=4 varoattno=1]\n Var [varno=5 varattno=1 vartype=23 varnoold=5 varoattno=1]\n Const [consttype=1700 constlen=-1 constvalue=94908966309104 constisnull=false\n constbyval=false]\n FuncExpr [funcid=1598 funcresulttype=701 funcretset=false funcvariadic=false\n funcformat=COERCE_EXPLICIT_CALL]\n\n It seems the semantics it wants to express is: filter after join all the tables.\n\n\n Thus maybe a plan like\n\nNested Loop\n Join Filter: (((((a.a + a_1.a))::numeric / 4.0))::double precision > random())\n -> Nested Loop\n -> Function Scan on generate_series a\n -> Function Scan on generate_series a_1\n -> Seq Scan on t (cost=0.00..32.60 rows=2260 width=8)\n\n May also be reasonable because it is just the direct translation from the original query tree.\n\n The above plans may have different property:\n * the first one, if we push down, can only produce 2 results: 0 rows, or 10 rows. 
No third possibility\n * the second one, will output 0 ~ 10 rows with equal probability.\n\n\n I am wondering if we should consider volatile functions in restrictinfo when try to distribute_restrictinfo_to_rels?\n\n\nBest,\nZhenghua Lyu\n\n\n\n\n\n\n\n\nHi all,\n\n  \n\n     consider the following SQL:\n\n\n\n\n================================================================================================\n\n   gpadmin=# explain (verbose, costs off) \n\n     select * from t, \n\n                             (select a from generate_series(1, 1)a)x, \n\n                             (select a from generate_series(1, 1)a)y \n\n      where ((x.a+y.a)/4.0) > random();\n\n\n\n                                       QUERY PLAN\n\n\n----------------------------------------------------------------------------------------\n\n\n Nested Loop\n\n\n   Output: t.a, t.b, a.a, a_1.a\n\n\n   ->  Nested Loop\n\n\n         Output: a.a, a_1.a\n\n\n         Join Filter: (((((a.a + a_1.a))::numeric / 4.0))::double precision > random())\n\n\n         ->  Function Scan on pg_catalog.generate_series a\n\n\n               Output: a.a\n\n\n               Function Call: generate_series(1, 1)\n\n\n         ->  Function Scan on pg_catalog.generate_series a_1\n\n\n               Output: a_1.a\n\n\n               Function Call: generate_series(1, 1)\n\n\n   ->  Seq Scan on public.t\n\n\n         Output: t.a, t.b\n\n\n(13 rows)\n\n\n\n\n\n================================================================================================\n\n\n\n\n        The where clause is \"pushed down to the x,y\" because it only references these two relations.\n\n\n\n\n        The original query tree's join tree is like:       \n\n\n\nFromExpr []\n\n\n\n\n        [fromlist]\n\n\n\n\n\n                RangeTblRef [rtindex=1]\n\n\n\n\n\n                RangeTblRef [rtindex=4]\n\n\n\n\n\n                RangeTblRef [rtindex=5]\n\n\n\n\n\n        [quals]\n\n\n\n\n\n                OpExpr [opno=674 opfuncid=297 opresulttype=16 opretset=false]\n\n\n\n\n\n                        FuncExpr [funcid=1746 funcresulttype=701 funcretset=false funcvariadic=false      \n                                  funcformat=COERCE_IMPLICIT_CAST]\n\n\n\n\n\n                                OpExpr [opno=1761 opfuncid=1727 opresulttype=1700 opretset=false]\n\n\n\n\n\n                                        FuncExpr [funcid=1740 funcresulttype=1700 funcretset=false funcvariadic=false \n                                                         funcformat=COERCE_IMPLICIT_CAST]\n\n\n\n\n\n                                                OpExpr [opno=551 opfuncid=177 opresulttype=23 opretset=false]\n\n\n\n\n\n                                                        Var [varno=4 varattno=1 vartype=23 varnoold=4 varoattno=1]\n\n\n\n\n\n                                                        Var [varno=5 varattno=1 vartype=23 varnoold=5 varoattno=1]\n\n\n\n\n\n                                        Const [consttype=1700 constlen=-1 constvalue=94908966309104 constisnull=false \n                                                     constbyval=false]\n\n\n\n\n                        FuncExpr [funcid=1598 funcresulttype=701 funcretset=false funcvariadic=false \n\n                                          funcformat=COERCE_EXPLICIT_CALL]\n\n\n\n  \n\n    It seems the semantics it wants to express is:   filter after join all the tables.\n\n\n\n\n    \n\n    Thus maybe a plan like \n\n\n\n\n\nNested Loop \n\n Join Filter: (((((a.a + a_1.a))::numeric / 4.0))::double precision > random())  
\n\n ->  Nested Loop  \n\n         ->  Function Scan on generate_series a  \n\n         ->  Function Scan on generate_series a_1  \n\n\n\n   ->  Seq Scan on t  (cost=0.00..32.60 rows=2260 width=8)\n\n\n\n \n\n  May also be reasonable because it is just the direct translation from the original query tree.\n\n\n\n\n   The above plans may have different property:\n\n      * the first one, if we push down, can only produce 2 results: 0 rows, or 10 rows. No third possibility\n\n      * the second one, will output 0 ~ 10 rows with equal probability.\n\n\n\n\n\n\n\n    I am wondering if we should consider volatile functions in restrictinfo when try to distribute_restrictinfo_to_rels?\n\n\n\n\n\n\n\nBest,\n\nZhenghua Lyu", "msg_date": "Fri, 10 Jul 2020 04:23:04 +0000", "msg_from": "Zhenghua Lyu <zlyu@vmware.com>", "msg_from_op": true, "msg_subject": "distribute_restrictinfo_to_rels if restrictinfo contains volatile\n functions" }, { "msg_contents": "Zhenghua Lyu <zlyu@vmware.com> writes:\n> The where clause is \"pushed down to the x,y\" because it only references these two relations.\n\nYeah. I agree that it's somewhat unprincipled, but changing it doesn't\nseem like a great idea. There are a lot of users out there who aren't\nterribly careful about marking their UDFs as non-volatile, but would be\nunhappy if the optimizer suddenly crippled their queries because of\nbeing picky about this.\n\nAlso, we specifically document that order of evaluation in WHERE clauses\nis not guaranteed, so I feel no need to make promises about how often\nvolatile functions there will be evaluated. (Volatiles in SELECT lists\nare a different story.)\n\nThis behavior has stood for a couple of decades with few user complaints,\nso why are you concerned about changing it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Jul 2020 10:10:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: distribute_restrictinfo_to_rels if restrictinfo contains volatile\n functions" }, { "msg_contents": "Hi,\n Thanks for your reply.\n\n I find the problem in a distributed database based on Postgres (Greenplum). In distributed database\n there may be distributed tables:\n every single node only contain subpart of the data and combine them all will get the full data\n\n I think it may also be a problem for Postgres's parallel computing.\n 1. What postgres planner do for parallel scan a table and then join a generate_series() function scan?\n 2. What postgres planner do for parallel scan a table and then join a generate_series() function scan with a volatile filter?\n\n Thus running the SQL in the above case, since generate_series functions can can be taken as the same every where,\n And generate_series join generate_series also have this property: the data is complete in every single node. This property\n is very helpful in a distributed join: A distributed table join generate_series function can just join in every local node and then\n gather the result back to a single node.\n\n But things are different when there are volatile functions: volatile functions may be in where clause, targetlist and somewhere.\n\n That is why I come up with the above case and ask here.\n\n To be honest, I do not care the push down so much. 
It is not normal usage to writing volatile functions in where clause.\n I just find it lose the property.\n\nBest,\nZhenghua Lyu\n\n\n\n\n\n\n________________________________\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nSent: Friday, July 10, 2020 10:10 PM\nTo: Zhenghua Lyu <zlyu@vmware.com>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: distribute_restrictinfo_to_rels if restrictinfo contains volatile functions\n\nZhenghua Lyu <zlyu@vmware.com> writes:\n> The where clause is \"pushed down to the x,y\" because it only references these two relations.\n\nYeah. I agree that it's somewhat unprincipled, but changing it doesn't\nseem like a great idea. There are a lot of users out there who aren't\nterribly careful about marking their UDFs as non-volatile, but would be\nunhappy if the optimizer suddenly crippled their queries because of\nbeing picky about this.\n\nAlso, we specifically document that order of evaluation in WHERE clauses\nis not guaranteed, so I feel no need to make promises about how often\nvolatile functions there will be evaluated. (Volatiles in SELECT lists\nare a different story.)\n\nThis behavior has stood for a couple of decades with few user complaints,\nso why are you concerned about changing it?\n\n regards, tom lane\n\n\n\n\n\n\n\n\nHi,\n\n    Thanks for your reply.\n\n \n\n    I find the problem in a distributed database based on Postgres (Greenplum). In distributed database\n\n    there may be distributed tables:\n\n         every single node only contain subpart of the data and combine them all will get the full data\n\n\n\n\n   \nI think it may also be a problem for Postgres's parallel computing.\n\n    1. What postgres planner do for parallel scan a table and then join a generate_series() function scan?\n\n    2. What postgres planner do for parallel scan a table and then join a generate_series() function scan with a volatile filter?\n\n\n\n\n   \nThus running the SQL in the above case, since generate_series functions can can be taken as the same every where,\n\n    And generate_series join generate_series also have this property:\nthe data is complete in every single node. This property\n\n    is very helpful in a distributed join: A distributed table join  generate_series\n function can just join in every local node and then  \n\n \n   gather the result back to a single node.\n\n\n\n\n    But things are different when there are volatile\n functions: volatile functions may be in where clause, targetlist and somewhere.\n\n    \n\n    That is why I come up with the above case and ask here.\n\n\n\n\n    To be honest, I do not care the push down so much. It is not normal usage to writing volatile functions in where clause.\n\n    I just find it lose the property.\n\n\n\n\nBest,\n\nZhenghua Lyu\n\n\n\n\n    \n\n    \n\n    \n\n\n\n\n\n\n\n\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nSent: Friday, July 10, 2020 10:10 PM\nTo: Zhenghua Lyu <zlyu@vmware.com>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: distribute_restrictinfo_to_rels if restrictinfo contains volatile functions\n \n\n\nZhenghua Lyu <zlyu@vmware.com> writes:\n>         The where clause is \"pushed down to the x,y\" because it only references these two relations.\n\nYeah.  I agree that it's somewhat unprincipled, but changing it doesn't\nseem like a great idea.  
There are a lot of users out there who aren't\nterribly careful about marking their UDFs as non-volatile, but would be\nunhappy if the optimizer suddenly crippled their queries because of\nbeing picky about this.\n\nAlso, we specifically document that order of evaluation in WHERE clauses\nis not guaranteed, so I feel no need to make promises about how often\nvolatile functions there will be evaluated.  (Volatiles in SELECT lists\nare a different story.)\n\nThis behavior has stood for a couple of decades with few user complaints,\nso why are you concerned about changing it?\n\n                        regards, tom lane", "msg_date": "Sat, 11 Jul 2020 00:32:32 +0000", "msg_from": "Zhenghua Lyu <zlyu@vmware.com>", "msg_from_op": true, "msg_subject": "Re: distribute_restrictinfo_to_rels if restrictinfo contains volatile\n functions" } ]
[ { "msg_contents": "Hello.\n\nIf psql connected using GSSAPI auth and server restarted, reconnection\nsequence stalls and won't return.\n\nI found that psql(libpq) sends startup packet via gss\nencryption. conn->gssenc should be reset when encryption state is\nfreed.\n\nThe reason that psql doesn't notice the error is pqPacketSend returns\nSTATUS_OK when write error occurred. That behavior contradicts to the\ncomment of the function. The function is used only while making\nconnection so it's ok to error out immediately by write failure. I\nthink other usage of pqFlush while making a connection should report\nwrite failure the same way.\n\nFinally, It's user-friendly if psql shows error message for error on\nreset attempts. (This perhaps should be arguable.)\n\nThe attached does the above. Any thoughts and/or opinions are welcome.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 10 Jul 2020 17:38:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "GSSENC'ed connection stalls while reconnection attempts." }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> If psql connected using GSSAPI auth and server restarted, reconnection\n> sequence stalls and won't return.\n\nYeah, reproduced here. (I wonder if there's any reasonable way to\nexercise this scenario in src/test/kerberos/.)\n\n> I found that psql(libpq) sends startup packet via gss\n> encryption. conn->gssenc should be reset when encryption state is\n> freed.\n\nActually, it looks to me like the GSS support was wedged in by somebody\nwho was paying no attention to how SSL is managed, or else we forgot\nto pay attention to GSS the last time we rearranged SSL support. It's\ncompletely broken for the multiple-host-addresses scenario as well,\nbecause try_gss is being set and cleared in the wrong places altogether.\nconn->gcred is not being handled correctly either I think --- we need\nto make sure that it's dropped in pqDropConnection.\n\nThe attached patch makes this all act more like the way SSL is handled,\nand for me it resolves the reconnection problem.\n\n> The reason that psql doesn't notice the error is pqPacketSend returns\n> STATUS_OK when write error occurred. That behavior contradicts to the\n> comment of the function. The function is used only while making\n> connection so it's ok to error out immediately by write failure. I\n> think other usage of pqFlush while making a connection should report\n> write failure the same way.\n\nI'm disinclined to mess with that, because (a) I don't think it's the\nactual source of the problem, and (b) it would affect way more than\njust GSS mode.\n\n> Finally, It's user-friendly if psql shows error message for error on\n> reset attempts. (This perhaps should be arguable.)\n\nMeh, that's changing fairly longstanding behavior that I don't think\nwe've had many complaints about.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 10 Jul 2020 12:01:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GSSENC'ed connection stalls while reconnection attempts." }, { "msg_contents": "I wrote:\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n>> If psql connected using GSSAPI auth and server restarted, reconnection\n>> sequence stalls and won't return.\n\n> Yeah, reproduced here. 
(I wonder if there's any reasonable way to\n> exercise this scenario in src/test/kerberos/.)\n\nI tried writing such a test based on the IO::Pty infrastructure used\nby src/bin/psql/t/010_tab_completion.pl, as attached. It works, but\nit feels pretty grotty, especially seeing that so much of the patch\nis copy-and-pasted from 010_tab_completion.pl. I think if we want\nto have a test like this, it'd be good to work a little harder on\nrefactoring so that more of that code can be shared. My Perl skillz\nare a bit weak for that, though.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 11 Jul 2020 19:41:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GSSENC'ed connection stalls while reconnection attempts." }, { "msg_contents": "At Fri, 10 Jul 2020 12:01:10 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > If psql connected using GSSAPI auth and server restarted, reconnection\n> > sequence stalls and won't return.\n> \n> Yeah, reproduced here. (I wonder if there's any reasonable way to\n> exercise this scenario in src/test/kerberos/.)\n> \n> > I found that psql(libpq) sends startup packet via gss\n> > encryption. conn->gssenc should be reset when encryption state is\n> > freed.\n> \n> Actually, it looks to me like the GSS support was wedged in by somebody\n> who was paying no attention to how SSL is managed, or else we forgot\n> to pay attention to GSS the last time we rearranged SSL support. It's\n> completely broken for the multiple-host-addresses scenario as well,\n> because try_gss is being set and cleared in the wrong places altogether.\n> conn->gcred is not being handled correctly either I think --- we need\n> to make sure that it's dropped in pqDropConnection.\n> \n> The attached patch makes this all act more like the way SSL is handled,\n> and for me it resolves the reconnection problem.\n\nIt looks good to me.\n\n> > The reason that psql doesn't notice the error is pqPacketSend returns\n> > STATUS_OK when write error occurred. That behavior contradicts to the\n> > comment of the function. The function is used only while making\n> > connection so it's ok to error out immediately by write failure. I\n> > think other usage of pqFlush while making a connection should report\n> > write failure the same way.\n> \n> I'm disinclined to mess with that, because (a) I don't think it's the\n> actual source of the problem, and (b) it would affect way more than\n> just GSS mode.\n\nIf I did that in pqFlush your objection would be right, but\npqPacketSend is defined as \"RETURNS: STATUS_ERROR if the write fails\"\nso not doing that is just wrong. (pqSendSome reported write failure in\nthis case.) For other parts in authentication code, I don't think it\ndoesn't affect badly because authentication should proceed without any\nread/write overlapping.\n\n> > Finally, It's user-friendly if psql shows error message for error on\n> > reset attempts. (This perhaps should be arguable.)\n> \n> Meh, that's changing fairly longstanding behavior that I don't think\n> we've had many complaints about.\n\nYeah, I haven't seen the message for any other reasons than the\nabsence of server. 
:p And, I noticed that, in the first place, I would\nsee that message in the next connection attempt from scratch.\n\nI agree to you on this point.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 13 Jul 2020 14:35:13 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GSSENC'ed connection stalls while reconnection attempts." }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Fri, 10 Jul 2020 12:01:10 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>> The attached patch makes this all act more like the way SSL is handled,\n>> and for me it resolves the reconnection problem.\n\n> It looks good to me.\n\nOK, thanks.\n\n>>> The reason that psql doesn't notice the error is pqPacketSend returns\n>>> STATUS_OK when write error occurred. That behavior contradicts to the\n>>> comment of the function. The function is used only while making\n>>> connection so it's ok to error out immediately by write failure. I\n>>> think other usage of pqFlush while making a connection should report\n>>> write failure the same way.\n\n>> I'm disinclined to mess with that, because (a) I don't think it's the\n>> actual source of the problem, and (b) it would affect way more than\n>> just GSS mode.\n\n> If I did that in pqFlush your objection would be right, but\n> pqPacketSend is defined as \"RETURNS: STATUS_ERROR if the write fails\"\n> so not doing that is just wrong. (pqSendSome reported write failure in\n> this case.) For other parts in authentication code, I don't think it\n> doesn't affect badly because authentication should proceed without any\n> read/write overlapping.\n\nAs the comment for pqSendSome says, we report a write failure immediately\nonly if we also cannot read. I don't really see a reason why the behavior\ndescribed there isn't fine during initial connection as well. If you feel\nthat the comment for pqPacketSend needs adjustment, we can do that.\nIn any case, I'm quite against changing pqPacketSend's behavior because\n\"it's only used during initial connection\"; there is nothing about the\nfunction that restricts it to that case.\n\nBottom line here is that I'm suspicious of changing the behavior of\nthe read/write code on the strength of a bug in the GSS state management\nlogic. If there's a reason to change the read/write code, we should be\nable to demonstrate it without the GSS bug.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Jul 2020 11:08:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GSSENC'ed connection stalls while reconnection attempts." }, { "msg_contents": "At Mon, 13 Jul 2020 11:08:09 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > At Fri, 10 Jul 2020 12:01:10 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote intgl> >> I'm disinclined to mess with that, because (a) I don't think it's the\n> >> actual source of the problem, and (b) it would affect way more than\n> >> just GSS mode.\n> \n> > If I did that in pqFlush your objection would be right, but\n> > pqPacketSend is defined as \"RETURNS: STATUS_ERROR if the write fails\"\n> > so not doing that is just wrong. (pqSendSome reported write failure in\n> > this case.) 
For other parts in authentication code, I don't think it\n> > doesn't affect badly because authentication should proceed without any\n> > read/write overlapping.\n> \n> As the comment for pqSendSome says, we report a write failure immediately\n> only if we also cannot read. I don't really see a reason why the behavior\n> described there isn't fine during initial connection as well. If you feel\n> that the comment for pqPacketSend needs adjustment, we can do that.\n\nI'm fine with that.\n\n> In any case, I'm quite against changing pqPacketSend's behavior because\n> \"it's only used during initial connection\"; there is nothing about the\n> function that restricts it to that case.\n\nThat sounds fair enough.\n\n> Bottom line here is that I'm suspicious of changing the behavior of\n> the read/write code on the strength of a bug in the GSS state management\n> logic. If there's a reason to change the read/write code, we should be\n> able to demonstrate it without the GSS bug.\n\nAgreed to separate the change from this issue. I also don't think\nthat change in behavior dramatically improve the situation since we\nshould have had a bunch of trouble when a write actually failed in the\nnormal case.\n\nI'm going to post a patch to change the comment of pqPacketSend.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 14 Jul 2020 13:31:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GSSENC'ed connection stalls while reconnection attempts." }, { "msg_contents": "At Tue, 14 Jul 2020 13:31:31 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Agreed to separate the change from this issue. I also don't think\n> that change in behavior dramatically improve the situation since we\n> should have had a bunch of trouble when a write actually failed in the\n> normal case.\n> \n> I'm going to post a patch to change the comment of pqPacketSend.\n\nSo this is a proposal to add a description about the behavior on write\nfailure. The last half of the addition is a copy from the comment of\npqFlush.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 15 Jul 2020 11:49:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GSSENC'ed connection stalls while reconnection attempts." } ]
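A side note on reproducing the scenario above by hand (or in the src/test/kerberos-style test discussed in the thread): it helps to confirm from the server side that a session really is GSS-encrypted before restarting the server. The sketch below is not part of the patches in this thread; it only assumes PostgreSQL 12 or later, where the pg_stat_gssapi view exists.

-- Illustrative check (not from the thread): which client sessions authenticated
-- with GSSAPI and whether their traffic is GSS-encrypted.
SELECT a.pid, a.usename, a.application_name,
       g.gss_authenticated, g.encrypted, g.principal
FROM pg_stat_activity a
     JOIN pg_stat_gssapi g USING (pid)
WHERE a.backend_type = 'client backend';

A psql session showing encrypted = t here, followed by a server restart, is the setup in which unpatched libpq stalls: the reconnection attempt reuses stale GSS state and sends the startup packet encrypted, which the new backend cannot read.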
[ { "msg_contents": "Hi all,\n\nDo you know what is the status of Request Pipelining and/or Batching in\nlibpq ?\n\nI could see that I'm not the first one to think about it, I see an item in\nthe todolist:\nhttps://web.archive.org/web/20200125013930/https://wiki.postgresql.org/wiki/Todo\n\nAnd a thread here:\nhttps://www.postgresql-archive.org/PATCH-Batch-pipelining-support-for-libpq-td5904551i80.html\n\nAnd a patch here:\nhttps://2ndquadrant.github.io/postgres/libpq-batch-mode.html\n\nSeems like this boost performances a lot, drogon, a c++ framework\noutperform all\nother web framework thanks to this fork:\nhttps://www.techempower.com/benchmarks/#section=data-r19&hw=ph&test=update\nhttps://github.com/TechEmpower/FrameworkBenchmarks/issues/5502\n\nIt would be nice to have it it the official libpq so we don't have to use\nan outdated fork\nto have this feature.\nIs anybody working on it ? is there lots of work to finalize this patch ?\n\nThanks in advance,\nMatthieu\n\nHi all,Do you know what is the status of Request Pipelining and/or Batching in libpq ? I could see that I'm not the first one to think about it, I see an item in the todolist:https://web.archive.org/web/20200125013930/https://wiki.postgresql.org/wiki/TodoAnd a thread here:https://www.postgresql-archive.org/PATCH-Batch-pipelining-support-for-libpq-td5904551i80.htmlAnd a patch here:https://2ndquadrant.github.io/postgres/libpq-batch-mode.htmlSeems like this boost performances a lot, drogon, a c++ framework outperform allother web framework thanks to this fork:https://www.techempower.com/benchmarks/#section=data-r19&hw=ph&test=updatehttps://github.com/TechEmpower/FrameworkBenchmarks/issues/5502It would be nice to have it it the official libpq so we don't have to use an outdated forkto have this feature. Is anybody working on it ? is there lots of work to finalize this patch ?Thanks in advance,Matthieu", "msg_date": "Fri, 10 Jul 2020 17:08:20 +0200", "msg_from": "Matthieu Garrigues <matthieu.garrigues@gmail.com>", "msg_from_op": true, "msg_subject": "libpq: Request Pipelining/Batching status ?" }, { "msg_contents": "Did my message made it to the mailing list ? or not yet ?\n\nMatthieu Garrigues\n\n\nOn Fri, Jul 10, 2020 at 5:08 PM Matthieu Garrigues <\nmatthieu.garrigues@gmail.com> wrote:\n\n> Hi all,\n>\n> Do you know what is the status of Request Pipelining and/or Batching in\n> libpq ?\n>\n> I could see that I'm not the first one to think about it, I see an item in\n> the todolist:\n>\n> https://web.archive.org/web/20200125013930/https://wiki.postgresql.org/wiki/Todo\n>\n> And a thread here:\n>\n> https://www.postgresql-archive.org/PATCH-Batch-pipelining-support-for-libpq-td5904551i80.html\n>\n> And a patch here:\n> https://2ndquadrant.github.io/postgres/libpq-batch-mode.html\n>\n> Seems like this boost performances a lot, drogon, a c++ framework\n> outperform all\n> other web framework thanks to this fork:\n> https://www.techempower.com/benchmarks/#section=data-r19&hw=ph&test=update\n> https://github.com/TechEmpower/FrameworkBenchmarks/issues/5502\n>\n> It would be nice to have it it the official libpq so we don't have to use\n> an outdated fork\n> to have this feature.\n> Is anybody working on it ? is there lots of work to finalize this patch ?\n>\n> Thanks in advance,\n> Matthieu\n>\n>\n\nDid my message made it to the mailing list ? 
or not yet ?Matthieu GarriguesOn Fri, Jul 10, 2020 at 5:08 PM Matthieu Garrigues <matthieu.garrigues@gmail.com> wrote:Hi all,Do you know what is the status of Request Pipelining and/or Batching in libpq ? I could see that I'm not the first one to think about it, I see an item in the todolist:https://web.archive.org/web/20200125013930/https://wiki.postgresql.org/wiki/TodoAnd a thread here:https://www.postgresql-archive.org/PATCH-Batch-pipelining-support-for-libpq-td5904551i80.htmlAnd a patch here:https://2ndquadrant.github.io/postgres/libpq-batch-mode.htmlSeems like this boost performances a lot, drogon, a c++ framework outperform allother web framework thanks to this fork:https://www.techempower.com/benchmarks/#section=data-r19&hw=ph&test=updatehttps://github.com/TechEmpower/FrameworkBenchmarks/issues/5502It would be nice to have it it the official libpq so we don't have to use an outdated forkto have this feature. Is anybody working on it ? is there lots of work to finalize this patch ?Thanks in advance,Matthieu", "msg_date": "Wed, 15 Jul 2020 15:13:00 +0200", "msg_from": "Matthieu Garrigues <matthieu.garrigues@gmail.com>", "msg_from_op": true, "msg_subject": "Re: libpq: Request Pipelining/Batching status ?" } ]
[ { "msg_contents": "> On 2 July 2020, at 06:39, Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 10 Apr 2020, at 23:50, Alexandra Wang <lewang@pivotal.io> wrote:\n>\n> > On Fri, Apr 10, 2020 at 8:37 AM Ashutosh Bapat <\nashutosh.bapat@2ndquadrant.com <mailto:ashutosh.bapat@2ndquadrant.com>>\nwrote:\n> > > for a multi-key value the ^\n> > > points to the first column and the reader may think that that's the\n> > > problematci column. Should it instead point to ( ?\n> >\n> > I attached a v2 of Amit's 0002 patch to also report the exact column\n> > for the partition overlap errors.\n>\n> This patch fails to apply to HEAD due to conflicts in the create_table\nexpected\n> output. Can you please submit a rebased version? I'm marking the CF\nentry\n> Waiting on Author in the meantime.\n\nThank you Daniel. Here's the rebased patch. I also squashed the two\npatches into one so it's easier to review.\n\n-- \n*Alexandra Wang*", "msg_date": "Fri, 10 Jul 2020 11:01:43 -0700", "msg_from": "Alexandra Wang <alexandra.wanglei@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Fri, 10 Jul 2020 at 23:31, Alexandra Wang <alexandra.wanglei@gmail.com>\nwrote:\n\n>\n>\n> Thank you Daniel. Here's the rebased patch. I also squashed the two\n> patches into one so it's easier to review.\n>\n> Thanks for rebasing patch. It applies cleanly still. Here are some comments\n@@ -3320,7 +3338,9 @@ make_one_partition_rbound(PartitionKey key, int\nindex, List *datums, bool lower)\n * partition_rbound_cmp\n *\n * Return for two range bounds whether the 1st one (specified in datums1,\n\nI think it's better to reword it as. \"For two range bounds decide whether\n...\n\n- * kind1, and lower1) is <, =, or > the bound specified in *b2.\n+ * kind1, and lower1) is <, =, or > the bound specified in *b2. 0 is\nreturned if\n+ * equal and the 1-based index of the first mismatching bound if unequal;\n+ * multiplied by -1 if the 1st bound is smaller.\n\nThis sentence makes sense after the above correction. I liked this change,\nrequires very small changes in other parts.\n\n\n /*\n@@ -3495,7 +3518,7 @@ static int\n partition_range_bsearch(int partnatts, FmgrInfo *partsupfunc,\n Oid *partcollation,\n PartitionBoundInfo boundinfo,\n- PartitionRangeBound *probe, bool *is_equal)\n+ PartitionRangeBound *probe, bool *is_equal, int32\n*cmpval)\n\nPlease update the prologue explaining the new argument.\n\nAfter this change, the patch will be ready for a committer.\n-- \nBest Wishes,\nAshutosh\n\nOn Fri, 10 Jul 2020 at 23:31, Alexandra Wang <alexandra.wanglei@gmail.com> wrote:Thank you Daniel. Here's the rebased patch. I also squashed the twopatches into one so it's easier to review.Thanks for rebasing patch. It applies cleanly still. Here are some comments@@ -3320,7 +3338,9 @@ make_one_partition_rbound(PartitionKey key, int index, List *datums, bool lower)  * partition_rbound_cmp  *  * Return for two range bounds whether the 1st one (specified in datums1,I think it's better to reword it as. \"For two range bounds decide whether ... - * kind1, and lower1) is <, =, or > the bound specified in *b2.+ * kind1, and lower1) is <, =, or > the bound specified in *b2. 0 is returned if+ * equal and the 1-based index of the first mismatching bound if unequal;+ * multiplied by -1 if the 1st bound is smaller.This sentence makes sense after the above correction. I liked this change,requires very small changes in other parts.  
/*@@ -3495,7 +3518,7 @@ static int partition_range_bsearch(int partnatts, FmgrInfo *partsupfunc,                        Oid *partcollation,                        PartitionBoundInfo boundinfo,-                       PartitionRangeBound *probe, bool *is_equal)+                       PartitionRangeBound *probe, bool *is_equal, int32 *cmpval)Please update the prologue explaining the new argument. After this change, the patch will be ready for a committer.-- Best Wishes,Ashutosh", "msg_date": "Fri, 4 Sep 2020 19:42:27 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Fri, Sep 04, 2020 at 07:42:27PM +0530, Ashutosh Bapat wrote:\n> After this change, the patch will be ready for a committer.\n\nAlexandra, this patch is waiting on author after this review. Could\nyou answer to the points raised by Ashutosh and update this patch\naccordingly?\n--\nMichael", "msg_date": "Thu, 17 Sep 2020 13:30:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "Hi Ashutosh,\n\nI had forgotten about this thread, but Michael's ping email brought it\nto my attention.\n\nOn Fri, Sep 4, 2020 at 11:12 PM Ashutosh Bapat\n<ashutosh.bapat@2ndquadrant.com> wrote:\n> Thanks for rebasing patch. It applies cleanly still. Here are some comments\n\nThanks for the review.\n\n> @@ -3320,7 +3338,9 @@ make_one_partition_rbound(PartitionKey key, int index, List *datums, bool lower)\n> * partition_rbound_cmp\n> *\n> * Return for two range bounds whether the 1st one (specified in datums1,\n>\n> I think it's better to reword it as. \"For two range bounds decide whether ...\n>\n> - * kind1, and lower1) is <, =, or > the bound specified in *b2.\n> + * kind1, and lower1) is <, =, or > the bound specified in *b2. 0 is returned if\n> + * equal and the 1-based index of the first mismatching bound if unequal;\n> + * multiplied by -1 if the 1st bound is smaller.\n>\n> This sentence makes sense after the above correction. I liked this change,\n> requires very small changes in other parts.\n\n+1 to your suggested rewording, although I wrote: \"For two range\nbounds this decides whether...\"\n\n> /*\n> @@ -3495,7 +3518,7 @@ static int\n> partition_range_bsearch(int partnatts, FmgrInfo *partsupfunc,\n> Oid *partcollation,\n> PartitionBoundInfo boundinfo,\n> - PartitionRangeBound *probe, bool *is_equal)\n> + PartitionRangeBound *probe, bool *is_equal, int32 *cmpval)\n>\n> Please update the prologue explaining the new argument.\n\nDone. Actually, I noticed that *is_equal was unused in this\nfunction's only caller. 
*cmpval == 0 already gives that, so removed\nis_equal parameter.\n\nAttached updated version.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 17 Sep 2020 16:35:58 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Thu, 17 Sep 2020 at 13:06, Amit Langote <amitlangote09@gmail.com> wrote:\n\n> Hi Ashutosh,\n>\n> I had forgotten about this thread, but Michael's ping email brought it\n> to my attention.\n>\n> Thanks Amit for addressing comments.\n\n@@ -4256,5 +4256,8 @@ transformPartitionBoundValue(ParseState *pstate, Node\n*val,\n if (!IsA(value, Const))\n elog(ERROR, \"could not evaluate partition bound expression\");\n\n+ /* Preserve parser location information. */\n+ ((Const *) value)->location = exprLocation(val);\n+\n return (Const *) value;\n }\n\nThis caught my attention and I was wondering whether transformExpr() itself\nshould transfer the location from input expression to the output\nexpression. Some minions of transformExprRecurse() seem to be doing that.\nThe change here may be an indication that some of them are not doing this.\nIn that case may be it's better to find those and fix rather than a\nwhite-wash fix here. In what case did we find that location was not set by\ntransformExpr? Sorry for not catching this earlier.\n\n/* New lower bound is certainly >= bound at offet. */\noffet/offset? But this comment is implied by the comment just two lines\nabove. So I am not sure it's really needed.\n\n/* Fetch the problem bound from lower datums list. */\nThis is fetching problematic key value rather than the whole problematic\nbound. I think the comment would be useful if it explains why cmpval -1 th\nkey is problematic but then that's evident from the prologue\nof partition_rbound_cmp() so I am not sure if this comment is really\nrequired. For example, we aren't adding a comment here\n+ overlap_location = ((PartitionRangeDatum *)\n+ list_nth(spec->upperdatums, -cmpval - 1))->location;\n\n-- \nBest Wishes,\nAshutosh\n\nOn Thu, 17 Sep 2020 at 13:06, Amit Langote <amitlangote09@gmail.com> wrote:Hi Ashutosh,\n\nI had forgotten about this thread, but Michael's ping email brought it\nto my attention.Thanks Amit for addressing comments.@@ -4256,5 +4256,8 @@ transformPartitionBoundValue(ParseState *pstate, Node *val, \tif (!IsA(value, Const)) \t\telog(ERROR, \"could not evaluate partition bound expression\"); +\t/* Preserve parser location information. */+\t((Const *) value)->location = exprLocation(val);+ \treturn (Const *) value; }This caught my attention and I was wondering whether transformExpr() itself should transfer the location from input expression to the output expression. Some minions of transformExprRecurse() seem to be doing that. The change here may be an indication that some of them are not doing this. In that case may be it's better to find those and fix rather than a white-wash fix here. In what case did we find that location was not set by transformExpr? Sorry for not catching this earlier./* New lower bound is certainly >= bound at offet. */offet/offset? But this comment is implied by the comment just two lines above. So I am not sure it's really needed./* Fetch the problem bound from lower datums list. */This is fetching problematic key value rather than the whole problematic bound. 
I think the comment would be useful if it explains why cmpval -1 th key is problematic but then that's evident from the prologue of partition_rbound_cmp() so I am not sure if this comment is really required. For example, we aren't adding a comment here+\t\t\t\t\t\t\t\toverlap_location = ((PartitionRangeDatum *)+\t\t\t\t\t\t\t\t\tlist_nth(spec->upperdatums, -cmpval - 1))->location;-- Best Wishes,Ashutosh", "msg_date": "Fri, 18 Sep 2020 16:03:47 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "Thanks Ashutosh.\n\nOn Fri, Sep 18, 2020 at 7:33 PM Ashutosh Bapat\n<ashutosh.bapat@2ndquadrant.com> wrote:\n> Thanks Amit for addressing comments.\n>\n> @@ -4256,5 +4256,8 @@ transformPartitionBoundValue(ParseState *pstate, Node *val,\n> if (!IsA(value, Const))\n> elog(ERROR, \"could not evaluate partition bound expression\");\n>\n> + /* Preserve parser location information. */\n> + ((Const *) value)->location = exprLocation(val);\n> +\n> return (Const *) value;\n> }\n>\n> This caught my attention and I was wondering whether transformExpr() itself should transfer the location from input expression to the output expression. Some minions of transformExprRecurse() seem to be doing that. The change here may be an indication that some of them are not doing this. In that case may be it's better to find those and fix rather than a white-wash fix here. In what case did we find that location was not set by transformExpr? Sorry for not catching this earlier.\n\nAFAICS, transformExpr() is fine. What loses the location value is the\nunconditional evaluate_expr() call which generates a fresh Const node,\npossibly after evaluating a non-Const expression that is passed to it.\nI don't find it very desirable to change evaluate_expr() to accept a\nlocation value, because other callers of it don't seem to care.\nInstead, in the updated patch, I have made calling evaluate_expr()\nconditional on the expression at hand being a non-Const node and\nassign location by hand on return. If the expression is already\nConst, we don't need to update the location field as it should already\nbe correct. Though, I did notice that the evaluate_expr() call has an\nadditional responsibility which is to pass the partition key specified\ncollation to the bound expression, so we should not fail to update an\nalready-Const node's collation likewise.\n\n> /* New lower bound is certainly >= bound at offet. */\n> offet/offset? But this comment is implied by the comment just two lines above. So I am not sure it's really needed.\n\nGiven that cmpval is set all the way in partition_range_bsearch(), I\nthought it would help to clarify why this code can assume it must be\n>= 0. It is because a valid offset returned by\npartition_range_bsearch() must correspond to a bound that it found to\nbe <= the probe bound passed to it.\n\n> /* Fetch the problem bound from lower datums list. */\n> This is fetching problematic key value rather than the whole problematic bound. I think the comment would be useful if it explains why cmpval -1 th key is problematic but then that's evident from the prologue of partition_rbound_cmp() so I am not sure if this comment is really required. 
For example, we aren't adding a comment here\n> + overlap_location = ((PartitionRangeDatum *)\n> + list_nth(spec->upperdatums, -cmpval - 1))->location;\n\nIn the attached updated patch, I have tried to make the code and\ncomments for different cases consistent. Please have a look.\n\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 23 Sep 2020 18:11:24 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Wed, 23 Sep 2020 at 14:41, Amit Langote <amitlangote09@gmail.com> wrote:\n\n> Thanks Ashutosh.\n>\n> On Fri, Sep 18, 2020 at 7:33 PM Ashutosh Bapat\n> <ashutosh.bapat@2ndquadrant.com> wrote:\n> > Thanks Amit for addressing comments.\n> >\n> > @@ -4256,5 +4256,8 @@ transformPartitionBoundValue(ParseState *pstate,\n> Node *val,\n> > if (!IsA(value, Const))\n> > elog(ERROR, \"could not evaluate partition bound expression\");\n> >\n> > + /* Preserve parser location information. */\n> > + ((Const *) value)->location = exprLocation(val);\n> > +\n> > return (Const *) value;\n> > }\n> >\n> > This caught my attention and I was wondering whether transformExpr()\n> itself should transfer the location from input expression to the output\n> expression. Some minions of transformExprRecurse() seem to be doing that.\n> The change here may be an indication that some of them are not doing this.\n> In that case may be it's better to find those and fix rather than a\n> white-wash fix here. In what case did we find that location was not set by\n> transformExpr? Sorry for not catching this earlier.\n>\n> AFAICS, transformExpr() is fine. What loses the location value is the\n> unconditional evaluate_expr() call which generates a fresh Const node,\n> possibly after evaluating a non-Const expression that is passed to it.\n> I don't find it very desirable to change evaluate_expr() to accept a\n> location value, because other callers of it don't seem to care.\n> Instead, in the updated patch, I have made calling evaluate_expr()\n> conditional on the expression at hand being a non-Const node and\n> assign location by hand on return. If the expression is already\n> Const, we don't need to update the location field as it should already\n> be correct. Though, I did notice that the evaluate_expr() call has an\n> additional responsibility which is to pass the partition key specified\n> collation to the bound expression, so we should not fail to update an\n> already-Const node's collation likewise.\n>\n\nThanks for the detailed explanation. I am not sure whether skipping one\nevaluate_expr() call for a constant is better or reassigning the location.\nThis looks better than the last patch.\n\n\n> > /* New lower bound is certainly >= bound at offet. */\n> > offet/offset? But this comment is implied by the comment just two lines\n> above. So I am not sure it's really needed.\n>\n> Given that cmpval is set all the way in partition_range_bsearch(), I\n> thought it would help to clarify why this code can assume it must be\n> >= 0. It is because a valid offset returned by\n> partition_range_bsearch() must correspond to a bound that it found to\n> be <= the probe bound passed to it.\n>\n\n> > /* Fetch the problem bound from lower datums list. */\n> > This is fetching problematic key value rather than the whole problematic\n> bound. 
I think the comment would be useful if it explains why cmpval -1 th\n> key is problematic but then that's evident from the prologue of\n> partition_rbound_cmp() so I am not sure if this comment is really required.\n> For example, we aren't adding a comment here\n> > + overlap_location = ((PartitionRangeDatum *)\n> > + list_nth(spec->upperdatums, -cmpval - 1))->location;\n>\n> In the attached updated patch, I have tried to make the code and\n> comments for different cases consistent. Please have a look.\n>\n>\n\nThe comments look okay to me. I don't see a way to keep them short and yet\navoid reading the prologue of partition_range_bsearch(). And there is no\npoint in repeating a portion of that prologue at multiple places. So I am\nfine with these set of comments.\n\nSetting this CF entry as \"RFC\". Thanks.\n\n-- \nBest Wishes,\nAshutosh\n\nOn Wed, 23 Sep 2020 at 14:41, Amit Langote <amitlangote09@gmail.com> wrote:Thanks Ashutosh.\n\nOn Fri, Sep 18, 2020 at 7:33 PM Ashutosh Bapat\n<ashutosh.bapat@2ndquadrant.com> wrote:\n> Thanks Amit for addressing comments.\n>\n> @@ -4256,5 +4256,8 @@ transformPartitionBoundValue(ParseState *pstate, Node *val,\n>   if (!IsA(value, Const))\n>   elog(ERROR, \"could not evaluate partition bound expression\");\n>\n> + /* Preserve parser location information. */\n> + ((Const *) value)->location = exprLocation(val);\n> +\n>   return (Const *) value;\n>  }\n>\n> This caught my attention and I was wondering whether transformExpr() itself should transfer the location from input expression to the output expression. Some minions of transformExprRecurse() seem to be doing that. The change here may be an indication that some of them are not doing this. In that case may be it's better to find those and fix rather than a white-wash fix here. In what case did we find that location was not set by transformExpr? Sorry for not catching this earlier.\n\nAFAICS, transformExpr() is fine.  What loses the location value is the\nunconditional evaluate_expr() call which generates a fresh Const node,\npossibly after evaluating a non-Const expression that is passed to it.\nI don't find it very desirable to change evaluate_expr() to accept a\nlocation value, because other callers of it don't seem to care.\nInstead, in the updated patch, I have made calling evaluate_expr()\nconditional on the expression at hand being a non-Const node and\nassign location by hand on return.  If the expression is already\nConst, we don't need to update the location field as it should already\nbe correct.  Though, I did notice that the evaluate_expr() call has an\nadditional responsibility which is to pass the partition key specified\ncollation to the bound expression, so we should not fail to update an\nalready-Const node's collation likewise.Thanks for the detailed explanation. I am not sure whether skipping one evaluate_expr() call for a constant is better or reassigning the location. This looks better than the last patch.\n\n> /* New lower bound is certainly >= bound at offet. */\n> offet/offset? But this comment is implied by the comment just two lines above. So I am not sure it's really needed.\n\nGiven that cmpval is set all the way in partition_range_bsearch(), I\nthought it would help to clarify why this code can assume it must be\n>= 0.  It is because a valid offset returned by\npartition_range_bsearch() must correspond to a bound that it found to\nbe <= the probe bound passed to it.\n\n> /* Fetch the problem bound from lower datums list. 
*/\n> This is fetching problematic key value rather than the whole problematic bound. I think the comment would be useful if it explains why cmpval -1 th key is problematic but then that's evident from the prologue of partition_rbound_cmp() so I am not sure if this comment is really required. For example, we aren't adding a comment here\n> + overlap_location = ((PartitionRangeDatum *)\n> + list_nth(spec->upperdatums, -cmpval - 1))->location;\n\nIn the attached updated patch, I have tried to make the code and\ncomments for different cases consistent.  Please have a look.\nThe comments look okay to me. I don't see a way to keep them short and yet avoid reading the prologue of partition_range_bsearch(). And there is no point in repeating a portion of that prologue at multiple places. So I am fine with these set of comments.Setting this CF entry as \"RFC\". Thanks.-- Best Wishes,Ashutosh", "msg_date": "Wed, 23 Sep 2020 18:51:51 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Wed, Sep 23, 2020 at 10:22 PM Ashutosh Bapat\n<ashutosh.bapat@2ndquadrant.com> wrote:\n> On Wed, 23 Sep 2020 at 14:41, Amit Langote <amitlangote09@gmail.com> wrote:\n> Setting this CF entry as \"RFC\". Thanks.\n\nGreat, thanks for your time on this.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Sep 2020 22:37:07 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "I looked this over and pushed it with some minor adjustments.\n\nHowever, while I was looking at it I couldn't help noticing that\ntransformPartitionBoundValue's handling of collation concerns seems\nless than sane. There are two things bugging me:\n\n1. Why does it care about the expression's collation only when there's\na top-level CollateExpr? For example, that means we get an error for\n\nregression=# create table p (f1 text collate \"C\") partition by list(f1);\nCREATE TABLE\nregression=# create table c1 partition of p for values in ('a' collate \"POSIX\");\nERROR: collation of partition bound value for column \"f1\" does not match partition key collation \"C\"\n\nbut not this:\n\nregression=# create table c2 partition of p for values in ('a' || 'b' collate \"POSIX\");\nCREATE TABLE\n\nGiven that we will override the expression's collation with the partition\ncolumn's collation anyway, I don't see why we have this check at all,\nso my preference is to just rip out the entire stanza beginning with\n\"if (IsA(value, CollateExpr))\". If we keep it, though, I think it needs\nto do something else that is more general.\n\n2. Nothing is doing assign_expr_collations() on the partition expression.\nThis can trivially be shown to cause problems:\n\nregression=# create table p (f1 bool) partition by list(f1);\nCREATE TABLE\nregression=# create table cf partition of p for values in ('a' < 'b');\nERROR: could not determine which collation to use for string comparison\nHINT: Use the COLLATE clause to set the collation explicitly.\n\n\nIf we want to rip out the collation mismatch error altogether, then\nfixing #2 would just require inserting assign_expr_collations() before\nthe expression_planner() call. 
The other direction that would make\nsense to me is to perform assign_expr_collations() after\ncoerce_to_target_type(), and then to complain if exprCollation()\nisn't default and doesn't match the partition collation. In any\ncase a specific test for a CollateExpr seems quite wrong.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Sep 2020 18:19:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Thu, Sep 24, 2020 at 7:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I looked this over and pushed it with some minor adjustments.\n\nThank you.\n\n> However, while I was looking at it I couldn't help noticing that\n> transformPartitionBoundValue's handling of collation concerns seems\n> less than sane. There are two things bugging me:\n>\n> 1. Why does it care about the expression's collation only when there's\n> a top-level CollateExpr? For example, that means we get an error for\n>\n> regression=# create table p (f1 text collate \"C\") partition by list(f1);\n> CREATE TABLE\n> regression=# create table c1 partition of p for values in ('a' collate \"POSIX\");\n> ERROR: collation of partition bound value for column \"f1\" does not match partition key collation \"C\"\n>\n> but not this:\n>\n> regression=# create table c2 partition of p for values in ('a' || 'b' collate \"POSIX\");\n> CREATE TABLE\n>\n> Given that we will override the expression's collation with the partition\n> column's collation anyway, I don't see why we have this check at all,\n> so my preference is to just rip out the entire stanza beginning with\n> \"if (IsA(value, CollateExpr))\". If we keep it, though, I think it needs\n> to do something else that is more general.\n>\n> 2. Nothing is doing assign_expr_collations() on the partition expression.\n> This can trivially be shown to cause problems:\n>\n> regression=# create table p (f1 bool) partition by list(f1);\n> CREATE TABLE\n> regression=# create table cf partition of p for values in ('a' < 'b');\n> ERROR: could not determine which collation to use for string comparison\n> HINT: Use the COLLATE clause to set the collation explicitly.\n>\n>\n> If we want to rip out the collation mismatch error altogether, then\n> fixing #2 would just require inserting assign_expr_collations() before\n> the expression_planner() call. The other direction that would make\n> sense to me is to perform assign_expr_collations() after\n> coerce_to_target_type(), and then to complain if exprCollation()\n> isn't default and doesn't match the partition collation. In any\n> case a specific test for a CollateExpr seems quite wrong.\n\nI tried implementing that as attached and one test failed:\n\ncreate table test_part_coll_posix (a text) partition by range (a\ncollate \"POSIX\");\n...\ncreate table test_part_coll_cast2 partition of test_part_coll_posix\nfor values from (name 's') to ('z');\n+ERROR: collation of partition bound value for column \"a\" does not\nmatch partition key collation \"POSIX\"\n+LINE 1: ...ion of test_part_coll_posix for values from (name 's') to ('...\n\nI dug up the discussion which resulted in this test being added and\nfound that Peter E had opined that this failure should not occur [1].\nMaybe that is why I put that half-baked guard consisting of checking\nif the erroneous collation comes from an explicit COLLATE clause. 
Now\nI think maybe giving an error is alright but we should tell in the\nDETAIL message what the expression's collation is, like as follows:\n\ncreate table test_part_coll_cast2 partition of test_part_coll_posix\nfor values from (name 's') to ('z');\n+ERROR: collation of partition bound value for column \"a\" does not\nmatch partition key collation \"POSIX\"\n+LINE 1: ...ion of test_part_coll_posix for values from (name 's') to ('...\n+ ^\n+DETAIL: The collation of partition bound value is \"C\".\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/04661508-b6f5-177e-6f6b-c4cb8426b9f0%402ndquadrant.com", "msg_date": "Thu, 24 Sep 2020 20:41:56 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "[ cc'ing Peter, since his opinion seems to have got us here in the first place ]\n\nAmit Langote <amitlangote09@gmail.com> writes:\n> On Thu, Sep 24, 2020 at 7:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> However, while I was looking at it I couldn't help noticing that\n>> transformPartitionBoundValue's handling of collation concerns seems\n>> less than sane. There are two things bugging me:\n>> \n>> 1. Why does it care about the expression's collation only when there's\n>> a top-level CollateExpr? For example, that means we get an error for\n>> \n>> regression=# create table p (f1 text collate \"C\") partition by list(f1);\n>> CREATE TABLE\n>> regression=# create table c1 partition of p for values in ('a' collate \"POSIX\");\n>> ERROR: collation of partition bound value for column \"f1\" does not match partition key collation \"C\"\n>> \n>> but not this:\n>> \n>> regression=# create table c2 partition of p for values in ('a' || 'b' collate \"POSIX\");\n>> CREATE TABLE\n>> \n>> Given that we will override the expression's collation with the partition\n>> column's collation anyway, I don't see why we have this check at all,\n>> so my preference is to just rip out the entire stanza beginning with\n>> \"if (IsA(value, CollateExpr))\". If we keep it, though, I think it needs\n>> to do something else that is more general.\n\n> I dug up the discussion which resulted in this test being added and\n> found that Peter E had opined that this failure should not occur [1].\n\nWell, I agree with Peter to that extent, but my opinion is that *none*\nof these cases ought to be errors. What we're doing here is performing\nan implicit assignment-level coercion of the expression to the type of\nthe column, and changing the collation is allowed as part of that:\n\nregression=# create table foo (f1 text collate \"C\");\nCREATE TABLE\nregression=# insert into foo values ('a' COLLATE \"POSIX\");\nINSERT 0 1\nregression=# update foo set f1 = 'b' COLLATE \"POSIX\";\nUPDATE 1\n\nSo I find it completely inconsistent that the partitioning logic\ncomplains about equivalent cases. I think we should just rip the\nwhole thing out, as per the attached draft. 
This causes several\nregression test results to change, but AFAICS those are only there\nto exercise the error tests that I think we should get rid of.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 24 Sep 2020 11:02:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Fri, Sep 25, 2020 at 12:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> [ cc'ing Peter, since his opinion seems to have got us here in the first place ]\n>\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Thu, Sep 24, 2020 at 7:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> However, while I was looking at it I couldn't help noticing that\n> >> transformPartitionBoundValue's handling of collation concerns seems\n> >> less than sane. There are two things bugging me:\n> >>\n> >> 1. Why does it care about the expression's collation only when there's\n> >> a top-level CollateExpr? For example, that means we get an error for\n> >>\n> >> regression=# create table p (f1 text collate \"C\") partition by list(f1);\n> >> CREATE TABLE\n> >> regression=# create table c1 partition of p for values in ('a' collate \"POSIX\");\n> >> ERROR: collation of partition bound value for column \"f1\" does not match partition key collation \"C\"\n> >>\n> >> but not this:\n> >>\n> >> regression=# create table c2 partition of p for values in ('a' || 'b' collate \"POSIX\");\n> >> CREATE TABLE\n> >>\n> >> Given that we will override the expression's collation with the partition\n> >> column's collation anyway, I don't see why we have this check at all,\n> >> so my preference is to just rip out the entire stanza beginning with\n> >> \"if (IsA(value, CollateExpr))\". If we keep it, though, I think it needs\n> >> to do something else that is more general.\n>\n> > I dug up the discussion which resulted in this test being added and\n> > found that Peter E had opined that this failure should not occur [1].\n>\n> Well, I agree with Peter to that extent, but my opinion is that *none*\n> of these cases ought to be errors. What we're doing here is performing\n> an implicit assignment-level coercion of the expression to the type of\n> the column, and changing the collation is allowed as part of that:\n>\n> regression=# create table foo (f1 text collate \"C\");\n> CREATE TABLE\n> regression=# insert into foo values ('a' COLLATE \"POSIX\");\n> INSERT 0 1\n> regression=# update foo set f1 = 'b' COLLATE \"POSIX\";\n> UPDATE 1\n>\n> So I find it completely inconsistent that the partitioning logic\n> complains about equivalent cases.\n\nMy perhaps wrong impression was that the bound expression that is\nspecified when creating a partition is not as such being *assigned* to\nthe key column, but now that I think about it some more, that doesn't\nmatter.\n\n> I think we should just rip the\n> whole thing out, as per the attached draft. 
This causes several\n> regression test results to change, but AFAICS those are only there\n> to exercise the error tests that I think we should get rid of.\n\nYeah, I can see no other misbehavior resulting from this.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Sep 2020 15:49:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Fri, Sep 25, 2020 at 12:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Well, I agree with Peter to that extent, but my opinion is that *none*\n>> of these cases ought to be errors. What we're doing here is performing\n>> an implicit assignment-level coercion of the expression to the type of\n>> the column, and changing the collation is allowed as part of that:\n>> \n>> regression=# create table foo (f1 text collate \"C\");\n>> CREATE TABLE\n>> regression=# insert into foo values ('a' COLLATE \"POSIX\");\n>> INSERT 0 1\n>> regression=# update foo set f1 = 'b' COLLATE \"POSIX\";\n>> UPDATE 1\n>> \n>> So I find it completely inconsistent that the partitioning logic\n>> complains about equivalent cases.\n\n> My perhaps wrong impression was that the bound expression that is\n> specified when creating a partition is not as such being *assigned* to\n> the key column, but now that I think about it some more, that doesn't\n> matter.\n\n>> I think we should just rip the\n>> whole thing out, as per the attached draft. This causes several\n>> regression test results to change, but AFAICS those are only there\n>> to exercise the error tests that I think we should get rid of.\n\n> Yeah, I can see no other misbehavior resulting from this.\n\nOK, I'll clean up the regression test cases and push that.\n\n(Although this could be claimed to be a bug, I do not feel\na need to back-patch the behavioral change.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Sep 2020 13:01:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Tue, Sep 29, 2020 at 2:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Fri, Sep 25, 2020 at 12:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Well, I agree with Peter to that extent, but my opinion is that *none*\n> >> of these cases ought to be errors. What we're doing here is performing\n> >> an implicit assignment-level coercion of the expression to the type of\n> >> the column, and changing the collation is allowed as part of that:\n> >>\n> >> regression=# create table foo (f1 text collate \"C\");\n> >> CREATE TABLE\n> >> regression=# insert into foo values ('a' COLLATE \"POSIX\");\n> >> INSERT 0 1\n> >> regression=# update foo set f1 = 'b' COLLATE \"POSIX\";\n> >> UPDATE 1\n> >>\n> >> So I find it completely inconsistent that the partitioning logic\n> >> complains about equivalent cases.\n>\n> > My perhaps wrong impression was that the bound expression that is\n> > specified when creating a partition is not as such being *assigned* to\n> > the key column, but now that I think about it some more, that doesn't\n> > matter.\n>\n> >> I think we should just rip the\n> >> whole thing out, as per the attached draft. 
This causes several\n> >> regression test results to change, but AFAICS those are only there\n> >> to exercise the error tests that I think we should get rid of.\n>\n> > Yeah, I can see no other misbehavior resulting from this.\n>\n> OK, I'll clean up the regression test cases and push that.\n\nThanks.\n\n> (Although this could be claimed to be a bug, I do not feel\n> a need to back-patch the behavioral change.)\n\nAgreed. The assign_expr_collations() omission was indeed a bug.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Sep 2020 10:14:06 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" } ]
[ { "msg_contents": "Hi hackers,\r\n \r\nWe believe we’re seeing a problem with how physical slot’s restart_lsn is advanced leading to the replicas needing to restore from archive in order for replication to resume. \r\nThe logs below are from reproductions against 10.13. I’m still working on reproducing it for 12.3.\r\n \r\nWAL write spans two WAL segments . \r\nWrite to first WAL segment is complete but not to the second segment. \r\nWrite to first WAL segment is acknowledged as flushed from the Postgres replica.\r\nPrimary restarts before the write to second segment completes. It also means the complete WAL was never written. \r\nCrash recovery finishes at a record before the incomplete WAL write. \r\nThough now replica start the slot at the next WAL segment, since the previous WAL was already acknowledged as flushed.\r\n \r\nPrimary crashes because it ran out of space:\r\n2020-07-10 01:23:31.399 UTC:10.15.4.83(56430):replication_user@[unknown]:[4554]:DEBUG: write 0/2C000000 flush 0/2BFD2000 apply 0/216ACCD0\r\n2020-07-10 01:23:31.401 UTC:10.15.4.83(56430):replication_user@[unknown]:[4554]:DEBUG: write 0/2C000000 flush 0/2C000000 apply 0/216AE728\r\n2020-07-10 01:23:31.504 UTC::@:[4548]:DEBUG: creating and filling new WAL file\r\n2020-07-10 01:23:31.511 UTC::@:[4548]:PANIC: could not write to file \"pg_wal/xlogtemp.4548\": No space left on device\r\n2020-07-10 01:23:31.518 UTC::@:[4543]:DEBUG: reaping dead processes\r\n \r\nCrash recovery beings:\r\n2020-07-10 01:23:36.074 UTC::@:[8677]:DEBUG: checkpoint record is at 0/2B51B030\r\n2020-07-10 01:23:36.074 UTC::@:[8677]:DEBUG: redo record is at 0/2A65AE08; shutdown FALSE\r\n..\r\n2020-07-10 01:23:36.076 UTC::@:[8677]:DEBUG: starting up replication slots\r\n2020-07-10 01:23:36.076 UTC::@:[8677]:DEBUG: restoring replication slot from \"pg_replslot/physical_slot_1/state\"\r\n2020-07-10 01:23:36.078 UTC::@:[8677]:LOG: restart_lsn for cp slot physical_slot_1: 0/2BF12000 (extra debug logs I added)\r\n2020-07-10 01:23:36.081 UTC::@:[8677]:LOG: redo starts at 0/2A65AE08\r\n...\r\n2020-07-10 01:23:36.325 UTC::@:[8677]:LOG: redo done at 0/2BFFFFB0\r\n...\r\n2020-07-10 01:23:36.330 UTC::@:[8677]:LOG: checkpoint starting: end-of-recovery immediate\r\n2020-07-10 01:23:36.332 UTC::@:[8677]:DEBUG: performing replication slot checkpoint\r\n...\r\n2020-07-10 01:23:36.380 UTC::@:[8677]:DEBUG: checkpoint sync: number=13 file=base/13934/2662 time=0.001 msec\r\n2020-07-10 01:23:36.380 UTC::@:[8677]:DEBUG: checkpoint sync: number=14 file=base/13934/2663 time=0.001 msec\r\n2020-07-10 01:23:36.380 UTC::@:[8677]:DEBUG: checkpoint sync: number=15 file=base/13934/24586 time=0.001 msec\r\n2020-07-10 01:23:36.385 UTC::@:[8677]:LOG: could not signal for checkpoint: checkpointer is not running\r\n2020-07-10 01:23:36.385 UTC::@:[8677]:DEBUG: creating and filling new WAL file\r\n2020-07-10 01:23:36.397 UTC::@:[8677]:PANIC: could not write to file \"pg_wal/xlogtemp.8677\": No space left on device\r\n \r\nPrimary runs out of space during crash recovery. 
Space is freed up afterwards and crash recovery beings again.\r\n \r\n2020-07-10 01:32:45.804 UTC::@:[16329]:DEBUG: checkpoint record is at 0/2B51B030\r\n2020-07-10 01:32:45.805 UTC::@:[16329]:DEBUG: redo record is at 0/2A65AE08; shutdown FALSE\r\n...\r\n2020-07-10 01:32:45.805 UTC::@:[16329]:DEBUG: starting up replication slots\r\n2020-07-10 01:32:45.805 UTC::@:[16329]:DEBUG: restoring replication slot from \"pg_replslot/physical_slot_1/state\"\r\n2020-07-10 01:32:45.806 UTC::@:[16329]:LOG: restart_lsn for cp slot physical_slot_1: 0/2BF12000\r\n...\r\n2020-07-10 01:32:45.809 UTC::@:[16329]:LOG: redo starts at 0/2A65AE08\r\n2020-07-10 01:32:46.043 UTC::@:[16329]:DEBUG: could not open file \"pg_wal/00000001000000000000002C\": No such file or directory\r\n2020-07-10 01:32:46.043 UTC::@:[16329]:LOG: redo done at 0/2BFFFFB0\r\n \r\nRedo finishes at 0/2BFFFFB0 even though the flush we received from the replica is already at 0/2C000000.\r\n \r\nThis is problematic because the replica reconnects to the slot telling it to start past the new redo point.\r\n \r\n2020-07-10 01:32:50.641 UTC:10.15.4.83(56698):replication_user@[unknown]:[16572]:DEBUG: received replication command: START_REPLICATION SLOT \"physical_slot_1\" 0/2C000000 TIMELINE 1\r\n2020-07-10 01:32:50.641 UTC:10.15.4.83(56698):replication_user@[unknown]:[16572]:DEBUG: \"walreceiver\" has now caught up with upstream server\r\n2020-07-10 01:32:50.774 UTC:10.15.4.83(56698):replication_user@[unknown]:[16572]:DEBUG: write 0/2C000B80 flush 0/2BFFFFF0 apply 0/2BFFFFF0\r\n2020-07-10 01:32:50.775 UTC:10.15.4.83(56698):replication_user@[unknown]:[16572]:DEBUG: write 0/2C000B80 flush 0/2C000B80 apply 0/2BFFFFF0\r\n \r\nThis leads to a mismatch between at the end of 0/2B and what was streamed to the replica.\r\n \r\nReplica logs:\r\n2020-07-10 01:32:50.671 UTC::@:[24899]:LOG: started streaming WAL from primary at 0/2C000000 on timeline 1\r\n...\r\n2020-07-10 01:39:32.251 UTC::@:[11703]:DEBUG: could not restore file \"00000001000000000000002C\" from archive: child process exited with exit code 1\r\n2020-07-10 01:39:32.251 UTC::@:[11703]:DEBUG: invalid contrecord length 90 at 0/2BFFFFF0\r\n2020-07-10 01:39:32.251 UTC::@:[11703]:DEBUG: switched WAL source from archive to stream after failure\r\n \r\nNow the physical slot has advanced past the 0/2B which is what the replica actually needs.\r\n \r\npostgres=> select * from pg_replication_slots;\r\nslot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn\r\n---------------------------------------------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------\r\nphysical_slot_1 | | physical | | | f | f | | | | 0/2C000B80 |\r\n(1 row)\r\n \r\n \r\n0/2B can now rotate out on the primary and requires restoring from archive in order for replication to resume.\r\n \r\nThe attached patch (against 10) attempts to address this by keeping track of the first flushLsn in the current segNo, and wait until we receive one after that before updating. 
This prevents the WAL from rotating out of the primary and a reboot from the replica will fix it instead of needing to restore from archive.\r\n \r\nWith the patch:\r\n \r\nPrimary goes into crash recovery and we avoid updating the restart_lsn of the slot:\r\n \r\n2020-07-10 18:50:12.686 UTC::@:[6160]:LOG: redo starts at 0/2D417108\r\n2020-07-10 18:50:12.965 UTC::@:[6160]:DEBUG: could not open file \"pg_wal/00000001000000000000002F\": No such file or directory\r\n2020-07-10 18:50:12.965 UTC::@:[6160]:LOG: redo done at 0/2EFFFF90\r\n...\r\n2020-07-10 18:59:32.987 UTC:10.15.0.240(9056):replication_user@[unknown]:[19623]:DEBUG: received replication command: START_REPLICATION SLOT \"physical_slot_2\" 0/2F000000 TIMELINE 1\r\n2020-07-10 18:59:33.937 UTC:10.15.0.240(9056):replication_user@[unknown]:[19623]:DEBUG: write 0/2F020000 flush 0/2EFFFFD0 apply 0/2EFFFFD0\r\n2020-07-10 18:59:33.937 UTC:10.15.0.240(9056):replication_user@[unknown]:[19623]:LOG: lsn is not in restartSegNo, update to match\r\n2020-07-10 18:59:33.938 UTC:10.15.0.240(9056):replication_user@[unknown]:[19623]:DEBUG: write 0/2F020000 flush 0/2F020000 apply 0/2EFFFFD0\r\n2020-07-10 18:59:33.938 UTC:10.15.0.240(9056):replication_user@[unknown]:[19623]:LOG: lsn is not in restartSegNo, update to match\r\n \r\nReplica logs:\r\n2020-07-10 18:59:54.040 UTC::@:[12873]:DEBUG: could not restore file \"00000001000000000000002F\" from archive: child process exited with exit code 1\r\n2020-07-10 18:59:54.040 UTC::@:[12873]:DEBUG: invalid contrecord length 58 at 0/2EFFFFD0\r\n \r\n \r\nSince the flushLSN hasn't advanced past the first one in the restartSegNo, it doesn't get updated in future checkpoints.\r\n \r\npostgres=> select * from pg_replication_slots;\r\nslot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn\r\n---------------------------------------------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------\r\nphysical_slot_2 | | physical | | | f | f | | | | 0/2DFF8000 |\r\n(1 row)\r\n \r\n \r\nRebooting the replica allows replication to resume from the slot and restart_lsn advances normally.\r\n\r\n Thanks,\r\n\r\n John H", "msg_date": "Fri, 10 Jul 2020 20:44:30 +0000", "msg_from": "\"Hsu, John\" <hsuchen@amazon.com>", "msg_from_op": true, "msg_subject": "Physical slot restart_lsn advances incorrectly requiring restore from\n archive" }, { "msg_contents": "Hello, John.\r\n\r\nAt Fri, 10 Jul 2020 20:44:30 +0000, \"Hsu, John\" <hsuchen@amazon.com> wrote in \r\n> Hi hackers,\r\n> \r\n> We believe we’re seeing a problem with how physical slot’s restart_lsn is advanced leading to the replicas needing to restore from archive in order for replication to resume. \r\n> The logs below are from reproductions against 10.13. I’m still working on reproducing it for 12.3.\r\n> \r\n> WAL write spans two WAL segments . \r\n> Write to first WAL segment is complete but not to the second segment. \r\n> Write to first WAL segment is acknowledged as flushed from the Postgres replica.\r\n> Primary restarts before the write to second segment completes. It also means the complete WAL was never written. \r\n> Crash recovery finishes at a record before the incomplete WAL write. 
\r\n> Though now replica start the slot at the next WAL segment, since the previous WAL was already acknowledged as flushed.\r\n...\r\n> Redo finishes at 0/2BFFFFB0 even though the flush we received from\r\n> the replica is already at 0/2C000000.\r\n> This is problematic because the replica reconnects to the slot\r\n> telling it to start past the new redo point.\r\n \r\nYeah, that is a problem not only related to restart_lsn. The same\r\ncause leads to aother issue of inconsistent archive as discussed in\r\n[1].\r\n\r\n1: https://www.postgresql.org/message-id/CBDDFA01-6E40-46BB-9F98-9340F4379505%40amazon.com\r\n\r\n> The attached patch (against 10) attempts to address this by keeping\r\n> track of the first flushLsn in the current segNo, and wait until we\r\n> receive one after that before updating. This prevents the WAL from\r\n> rotating out of the primary and a reboot from the replica will fix\r\n> it instead of needing to restore from archive.\r\n\r\nOn the other hand we can and should advance restart_lsn when we know\r\nthat the last record is complete. I think a patch in the thread [2]\r\nwould fix your issue. With the patch primary doesn't send a\r\ncontinuation record at the end of a segment until the whole record is\r\nflushed into WAL file.\r\n\r\n2: https://www.postgresql.org/message-id/20200625.153532.379700510444980240.horikyota.ntt%40gmail.com\r\n\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Mon, 13 Jul 2020 10:48:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Physical slot restart_lsn advances incorrectly requiring\n restore from archive" }, { "msg_contents": "Hi Horiguchi-san,\r\n\r\nI'll take a look at that thread and see if I can reproduce with the attached patch.\r\nIt seems like it would directly address this issue. Thanks for taking a look. \r\n\r\nCheers,\r\nJohn H\r\n\r\nOn Thu, Jul 16, 2020 at 11:00 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\r\nHello, John.\r\n\r\nAt Fri, 10 Jul 2020 20:44:30 +0000, \"Hsu, John\" <hsuchen@amazon.com> wrote in \r\n> Hi hackers,\r\n>  \r\n> We believe we’re seeing a problem with how physical slot’s restart_lsn is advanced leading to the replicas needing to restore from archive in order for replication to resume. \r\n> The logs below are from reproductions against 10.13. I’m still working on reproducing it for 12.3.\r\n>  \r\n> WAL write spans two WAL segments . \r\n> Write to first WAL segment is complete but not to the second segment. \r\n> Write to first WAL segment is acknowledged as flushed from the Postgres replica.\r\n> Primary restarts before the write to second segment completes. It also means the complete WAL was never written. \r\n> Crash recovery finishes at a record before the incomplete WAL write. \r\n> Though now replica start the slot at the next WAL segment, since the previous WAL was already acknowledged as flushed.\r\n...\r\n> Redo finishes at 0/2BFFFFB0 even though the flush we received from\r\n> the replica is already at 0/2C000000.\r\n> This is problematic because the replica reconnects to the slot\r\n> telling it to start past the new redo point.\r\n\r\nYeah, that is a problem not only related to restart_lsn. 
The same\r\ncause leads to aother issue of inconsistent archive as discussed in\r\n[1].\r\n\r\n1: https://www.postgresql.org/message-id/CBDDFA01-6E40-46BB-9F98-9340F4379505%40amazon.com\r\n\r\n> The attached patch (against 10) attempts to address this by keeping\r\n> track of the first flushLsn in the current segNo, and wait until we\r\n> receive one after that before updating. This prevents the WAL from\r\n> rotating out of the primary and a reboot from the replica will fix\r\n> it instead of needing to restore from archive.\r\n\r\nOn the other hand we can and should advance restart_lsn when we know\r\nthat the last record is complete. I think a patch in the thread [2]\r\nwould fix your issue. With the patch primary doesn't send a\r\ncontinuation record at the end of a segment until the whole record is\r\nflushed into WAL file.\r\n\r\n2: https://www.postgresql.org/message-id/20200625.153532.379700510444980240.horikyota.ntt%40gmail.com\r\n\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n\r\n", "msg_date": "Thu, 16 Jul 2020 22:54:49 +0000", "msg_from": "\"Hsu, John\" <hsuchen@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Physical slot restart_lsn advances incorrectly requiring restore\n from\n archive" } ]
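For spotting the situation described above on a running system (a physical slot whose restart_lsn has moved into a WAL segment beyond what the standby can actually resume from), a monitoring query along the following lines can help. It is only a sketch: the filter and column choices are illustrative, but the pg_replication_slots view and the functions used all exist in PostgreSQL 10 and later.

-- On the primary: where each physical slot's restart_lsn points, which WAL file
-- it still pins, and how far it lags the current insert position.
SELECT slot_name,
       restart_lsn,
       pg_walfile_name(restart_lsn)                       AS restart_walfile,
       pg_current_wal_lsn()                               AS current_lsn,
       pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS retained_bytes
FROM pg_replication_slots
WHERE slot_type = 'physical';

-- On the standby: the last WAL actually received and replayed.
SELECT pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn();

If the standby ends up needing a WAL file older than the one behind restart_lsn (as in the crash scenario above), streaming cannot resume from the slot and the missing segment has to come from the archive.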
[ { "msg_contents": "I have just notice that the parallelism is off even for the select\npart of the query mentioned in the $subject. I see the only reason it\nis not getting parallel because we block the parallelism if the query\ntype is not SELECT. I don't see any reason for not selecting the\nparallelism for this query. I have quickly hacked the code to enable\nthe parallelism for this query. I can see there is a significant\nimprovement if we can enable the parallelism in this case. For an\nexperiment, I have just relaxed a couple of checks, maybe if we think\nthat it's good to enable the parallelism for this case we can try to\nput better checks which are specific for this quey.\n\nNo parallel:\npostgres[36635]=# explain analyze insert into t2 select * from t where a < 100;\n Insert on t2 (cost=0.00..29742.00 rows=100 width=105) (actual\ntime=278.300..278.300 rows=0 loops=1)\n -> Seq Scan on t (cost=0.00..29742.00 rows=100 width=105) (actual\ntime=0.061..277.330 rows=99 loops=1)\n Filter: (a < 100)\n Rows Removed by Filter: 999901\n Planning Time: 0.093 ms\n Execution Time: 278.330 ms\n(6 rows)\n\nWith parallel\n Insert on t2 (cost=1000.00..23460.33 rows=100 width=105) (actual\ntime=108.410..108.410 rows=0 loops=1)\n -> Gather (cost=1000.00..23460.33 rows=100 width=105) (actual\ntime=0.306..108.973 rows=99 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Seq Scan on t (cost=0.00..22450.33 rows=42\nwidth=105) (actual time=66.396..101.979 rows=33 loops=3)\n Filter: (a < 100)\n Rows Removed by Filter: 333300\n Planning Time: 0.154 ms\n Execution Time: 110.158 ms\n(9 rows)\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 11 Jul 2020 18:07:28 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "On Sat, Jul 11, 2020 at 6:07 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I have just notice that the parallelism is off even for the select\n> part of the query mentioned in the $subject. I see the only reason it\n> is not getting parallel because we block the parallelism if the query\n> type is not SELECT. I don't see any reason for not selecting the\n> parallelism for this query. I have quickly hacked the code to enable\n> the parallelism for this query. I can see there is a significant\n> improvement if we can enable the parallelism in this case. For an\n> experiment, I have just relaxed a couple of checks, maybe if we think\n> that it's good to enable the parallelism for this case we can try to\n> put better checks which are specific for this quey.\n>\n\n+1. I also don't see any problem with this idea considering we will\nfind a better way to enable the parallelism for this case because we\ncan already use parallelism for statements like \"create table\n<tbl_name> as select ...\". I think we can do more than this by\nparallelizing the Insert part of this query as well as we have lifted\ngroup locking restrictions related to RelationExtension and Page lock\nin PG13. It would be really cool to do that unless we see any\nfundamental problems with it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Jul 2020 16:23:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" 
}, { "msg_contents": "On Mon, Jul 13, 2020 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Jul 11, 2020 at 6:07 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I have just notice that the parallelism is off even for the select\n> > part of the query mentioned in the $subject. I see the only reason it\n> > is not getting parallel because we block the parallelism if the query\n> > type is not SELECT. I don't see any reason for not selecting the\n> > parallelism for this query. I have quickly hacked the code to enable\n> > the parallelism for this query. I can see there is a significant\n> > improvement if we can enable the parallelism in this case. For an\n> > experiment, I have just relaxed a couple of checks, maybe if we think\n> > that it's good to enable the parallelism for this case we can try to\n> > put better checks which are specific for this quey.\n> >\n>\n> +1. I also don't see any problem with this idea considering we will\n> find a better way to enable the parallelism for this case because we\n> can already use parallelism for statements like \"create table\n> <tbl_name> as select ...\".\n\nOkay, thanks for the feedback.\n\n I think we can do more than this by\n> parallelizing the Insert part of this query as well as we have lifted\n> group locking restrictions related to RelationExtension and Page lock\n> in PG13. It would be really cool to do that unless we see any\n> fundamental problems with it.\n\n+1, I think it will be cool to support for insert part here as well as\ninsert part in 'Create Table As Select..' as well.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Jul 2020 13:20:08 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "On Sat, Jul 11, 2020 at 6:07 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> I have just notice that the parallelism is off even for the select\n> part of the query mentioned in the $subject. I see the only reason it\n> is not getting parallel because we block the parallelism if the query\n> type is not SELECT. I don't see any reason for not selecting the\n> parallelism for this query. I have quickly hacked the code to enable\n> the parallelism for this query. I can see there is a significant\n> improvement if we can enable the parallelism in this case. For an\n> experiment, I have just relaxed a couple of checks, maybe if we think\n> that it's good to enable the parallelism for this case we can try to\n> put better checks which are specific for this quey.\n>\n>\n+1 for the idea. 
For the given example also it shows a good performance\ngain and I also don't any reason on restrict the parallel case for INSERT\nINTO SELECT.\n\n\n> No parallel:\n> postgres[36635]=# explain analyze insert into t2 select * from t where a <\n> 100;\n> Insert on t2 (cost=0.00..29742.00 rows=100 width=105) (actual\n> time=278.300..278.300 rows=0 loops=1)\n> -> Seq Scan on t (cost=0.00..29742.00 rows=100 width=105) (actual\n> time=0.061..277.330 rows=99 loops=1)\n> Filter: (a < 100)\n> Rows Removed by Filter: 999901\n> Planning Time: 0.093 ms\n> Execution Time: 278.330 ms\n> (6 rows)\n>\n> With parallel\n> Insert on t2 (cost=1000.00..23460.33 rows=100 width=105) (actual\n> time=108.410..108.410 rows=0 loops=1)\n> -> Gather (cost=1000.00..23460.33 rows=100 width=105) (actual\n> time=0.306..108.973 rows=99 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Parallel Seq Scan on t (cost=0.00..22450.33 rows=42\n> width=105) (actual time=66.396..101.979 rows=33 loops=3)\n> Filter: (a < 100)\n> Rows Removed by Filter: 333300\n> Planning Time: 0.154 ms\n> Execution Time: 110.158 ms\n> (9 rows)\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\n\n-- \nRushabh Lathia\n\nOn Sat, Jul 11, 2020 at 6:07 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:I have just notice that the parallelism is off even for the select\npart of the query mentioned in the $subject.  I see the only reason it\nis not getting parallel because we block the parallelism if the query\ntype is not SELECT.  I don't see any reason for not selecting the\nparallelism for this query.  I have quickly hacked the code to enable\nthe parallelism for this query.  I can see there is a significant\nimprovement if we can enable the parallelism in this case.  For an\nexperiment, I have just relaxed a couple of checks, maybe if we think\nthat it's good to enable the parallelism for this case we can try to\nput better checks which are specific for this quey.\n+1 for the idea.  For the given example also it shows a good performancegain and I also don't any reason on restrict the parallel case for INSERT INTO SELECT. \nNo parallel:\npostgres[36635]=# explain analyze insert into t2 select * from t where a < 100;\n Insert on t2  (cost=0.00..29742.00 rows=100 width=105) (actual\ntime=278.300..278.300 rows=0 loops=1)\n   ->  Seq Scan on t  (cost=0.00..29742.00 rows=100 width=105) (actual\ntime=0.061..277.330 rows=99 loops=1)\n         Filter: (a < 100)\n         Rows Removed by Filter: 999901\n Planning Time: 0.093 ms\n Execution Time: 278.330 ms\n(6 rows)\n\nWith parallel\n Insert on t2  (cost=1000.00..23460.33 rows=100 width=105) (actual\ntime=108.410..108.410 rows=0 loops=1)\n   ->  Gather  (cost=1000.00..23460.33 rows=100 width=105) (actual\ntime=0.306..108.973 rows=99 loops=1)\n         Workers Planned: 2\n         Workers Launched: 2\n         ->  Parallel Seq Scan on t  (cost=0.00..22450.33 rows=42\nwidth=105) (actual time=66.396..101.979 rows=33 loops=3)\n               Filter: (a < 100)\n               Rows Removed by Filter: 333300\n Planning Time: 0.154 ms\n Execution Time: 110.158 ms\n(9 rows)\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n-- Rushabh Lathia", "msg_date": "Tue, 14 Jul 2020 13:32:06 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" 
}, { "msg_contents": "On Sat, Jul 11, 2020 at 8:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have just notice that the parallelism is off even for the select\n> part of the query mentioned in the $subject. I see the only reason it\n> is not getting parallel because we block the parallelism if the query\n> type is not SELECT. I don't see any reason for not selecting the\n> parallelism for this query.\n\nThere's a relevant comment near the top of heap_prepare_insert().\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 14 Jul 2020 14:55:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "On Wed, Jul 15, 2020 at 12:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Jul 11, 2020 at 8:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I have just notice that the parallelism is off even for the select\n> > part of the query mentioned in the $subject. I see the only reason it\n> > is not getting parallel because we block the parallelism if the query\n> > type is not SELECT. I don't see any reason for not selecting the\n> > parallelism for this query.\n>\n> There's a relevant comment near the top of heap_prepare_insert().\n>\n\nI think that is no longer true after commits 85f6b49c2c and 3ba59ccc89\nwhere we have allowed relation extension and page locks to conflict\namong group members. We have accordingly changed comments at a few\nplaces but forgot to update this one. I will check and see if any\nother similar comments are there which needs to be updated.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Jul 2020 08:06:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "On Wed, Jul 15, 2020 at 8:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 15, 2020 at 12:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Sat, Jul 11, 2020 at 8:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > I have just notice that the parallelism is off even for the select\n> > > part of the query mentioned in the $subject. I see the only reason it\n> > > is not getting parallel because we block the parallelism if the query\n> > > type is not SELECT. I don't see any reason for not selecting the\n> > > parallelism for this query.\n> >\n> > There's a relevant comment near the top of heap_prepare_insert().\n> >\n>\n> I think that is no longer true after commits 85f6b49c2c and 3ba59ccc89\n> where we have allowed relation extension and page locks to conflict\n> among group members. We have accordingly changed comments at a few\n> places but forgot to update this one. I will check and see if any\n> other similar comments are there which needs to be updated.\n>\n\nThe attached patch fixes the comments. Let me know if you think I\nhave missed anything or any other comments.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 16 Jul 2020 08:44:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" 
}, { "msg_contents": "On Thu, Jul 16, 2020 at 8:44 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 15, 2020 at 8:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jul 15, 2020 at 12:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > On Sat, Jul 11, 2020 at 8:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > I have just notice that the parallelism is off even for the select\n> > > > part of the query mentioned in the $subject. I see the only reason it\n> > > > is not getting parallel because we block the parallelism if the query\n> > > > type is not SELECT. I don't see any reason for not selecting the\n> > > > parallelism for this query.\n> > >\n> > > There's a relevant comment near the top of heap_prepare_insert().\n> > >\n> >\n> > I think that is no longer true after commits 85f6b49c2c and 3ba59ccc89\n> > where we have allowed relation extension and page locks to conflict\n> > among group members. We have accordingly changed comments at a few\n> > places but forgot to update this one. I will check and see if any\n> > other similar comments are there which needs to be updated.\n> >\n>\n> The attached patch fixes the comments. Let me know if you think I\n> have missed anything or any other comments.\n\nYour comments look good to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 Jul 2020 15:23:28 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "On Wed, Jul 15, 2020 at 11:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> The attached patch fixes the comments. Let me know if you think I\n> have missed anything or any other comments.\n\nIf it's safe now, why not remove the error check?\n\n(Is it safe? Could there be other problems?)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 16 Jul 2020 09:13:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "On Thu, Jul 16, 2020 at 6:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jul 15, 2020 at 11:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > The attached patch fixes the comments. Let me know if you think I\n> > have missed anything or any other comments.\n>\n> If it's safe now, why not remove the error check?\n>\n\nI think it is not safe for all kind of Inserts (see my response later\nin email), so we need some smarts to identify un-safe inserts before\nwe can open this check.\n\n> (Is it safe? Could there be other problems?)\n>\n\nI think we need to be careful of two things: (a) Do we want to enable\nparallel inserts where tuple locks are involved, forex. in statements\nlike \"Insert into primary_tbl Select * from secondary_tbl Where col <\n10 For Update;\"? In such statements, I don't see any problem because\neach worker will operate on a separate page and even if the leader\nalready has a lock on the tuple, it will be granted to the worker as\nit is taken in the same transaction. 
(b) The insert statements that\ncan generate 'CommandIds' which can happen while insert into tables\nwith foreign keys, see below test:\n\nCREATE TABLE primary_tbl(index INTEGER PRIMARY KEY, height real, weight real);\ninsert into primary_tbl values(1, 1.1, 100);\ninsert into primary_tbl values(2, 1.2, 100);\ninsert into primary_tbl values(3, 1.3, 100);\n\nCREATE TABLE secondary_tbl(index INTEGER REFERENCES\nprimary_tbl(index), height real, weight real);\n\ninsert into secondary_tbl values(generate_series(1,3),1.2,200);\n\nHere we can't parallelise statements like \"insert into secondary_tbl\nvalues(generate_series(1,3),1.2,200);\" as they will generate\n'CommandIds' for each row insert into table with foreign key. The new\ncommand id is generated while performing a foreign key check. Now, it\nis a separate question whether generating a command id for each row\ninsert is required or not but as of now we can't parallelize such\nstatements.\n\nDo you have something else in mind?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 Jul 2020 11:24:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "On Fri, Jul 17, 2020 at 11:24 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Do you have something else in mind?\n>\n\nI am planning to commit the comments change patch attached in the\nabove email [1] next week sometime (probably Monday or Tuesday) unless\nyou have something more to add?\n\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BRL7c_s%3D%2BTwAE6DJ1MmupbEiGCFLt97US%2BDMm6UxAjTA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 Jul 2020 17:29:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "On Fri, Jul 24, 2020 at 7:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, Jul 17, 2020 at 11:24 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Do you have something else in mind?\n>\n> I am planning to commit the comments change patch attached in the\n> above email [1] next week sometime (probably Monday or Tuesday) unless\n> you have something more to add?\n\nWell, I think the comments could be more clear - for the insert case\nspecifically - about which cases you think are and are not safe.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 24 Jul 2020 09:53:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Well, I think the comments could be more clear - for the insert case\n> specifically - about which cases you think are and are not safe.\n\nYeah, the proposed comment changes don't actually add much. Also\nplease try to avoid inserting non-ASCII &nbsp; into the source code;\nat least in my mail reader, that attachment has some.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Jul 2020 10:06:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" 
}, { "msg_contents": "On Fri, Jul 24, 2020 at 7:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Well, I think the comments could be more clear - for the insert case\n> > specifically - about which cases you think are and are not safe.\n>\n\nOkay, I'll update the patch accordingly.\n\n> Yeah, the proposed comment changes don't actually add much. Also\n> please try to avoid inserting non-ASCII &nbsp; into the source code;\n> at least in my mail reader, that attachment has some.\n>\n\nI don't see any non-ASCII characters in the patch. I have applied and\nchecked (via vi editor) the patch as well but don't see any non-ASCII\ncharacters. How can I verify that?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 25 Jul 2020 09:18:10 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Fri, Jul 24, 2020 at 7:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, the proposed comment changes don't actually add much. Also\n>> please try to avoid inserting non-ASCII &nbsp; into the source code;\n>> at least in my mail reader, that attachment has some.\n\n> I don't see any non-ASCII characters in the patch. I have applied and\n> checked (via vi editor) the patch as well but don't see any non-ASCII\n> characters. How can I verify that?\n\nThey're definitely there:\n\n$ od -c 0001-Fix-comments-in-heapam.c.patch\n...\n0002740 h e \\n + \\t * l e a d e r c\n0002760 a n p e r f o r m t h e i\n0003000 n s e r t . 302 240 T h i s r e\n0003020 s t r i c t i o n c a n b e\n0003040 u p l i f t e d o n c e w\n0003060 e \\n + \\t * a l l o w t h e\n0003100 302 240 p l a n n e r t o g e n\n0003120 e r a t e p a r a l l e l p\n0003140 l a n s f o r i n s e r t s\n0003160 . \\n \\t * / \\n \\t i f ( I s\n...\n\nI'm not sure if \"git diff --check\" would whine about this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 25 Jul 2020 11:11:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "On Sat, Jul 25, 2020 at 8:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Fri, Jul 24, 2020 at 7:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Yeah, the proposed comment changes don't actually add much. Also\n> >> please try to avoid inserting non-ASCII &nbsp; into the source code;\n> >> at least in my mail reader, that attachment has some.\n>\n> > I don't see any non-ASCII characters in the patch. I have applied and\n> > checked (via vi editor) the patch as well but don't see any non-ASCII\n> > characters. How can I verify that?\n>\n> They're definitely there:\n>\n> $ od -c 0001-Fix-comments-in-heapam.c.patch\n> ...\n> 0002740 h e \\n + \\t * l e a d e r c\n> 0002760 a n p e r f o r m t h e i\n> 0003000 n s e r t . 302 240 T h i s r e\n> 0003020 s t r i c t i o n c a n b e\n> 0003040 u p l i f t e d o n c e w\n> 0003060 e \\n + \\t * a l l o w t h e\n> 0003100 302 240 p l a n n e r t o g e n\n> 0003120 e r a t e p a r a l l e l p\n> 0003140 l a n s f o r i n s e r t s\n> 0003160 . \\n \\t * / \\n \\t i f ( I s\n> ...\n>\n\nThanks, I could see that.\n\n> I'm not sure if \"git diff --check\" would whine about this.\n>\n\nNo, \"git diff --check\" doesn't help. 
I have tried pgindent but that\nalso doesn't help neither was I expecting it to help. I am still not\nable to figure out how I goofed up this but will spend some more time\non this. In the meantime, I have updated the patch to improve the\ncomments as suggested by Robert. Do let me know if you want to\nedit/add something more?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Sun, 26 Jul 2020 16:54:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "On Sun, Jul 26, 2020 at 4:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Jul 25, 2020 at 8:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n>\n> No, \"git diff --check\" doesn't help. I have tried pgindent but that\n> also doesn't help neither was I expecting it to help. I am still not\n> able to figure out how I goofed up this but will spend some more time\n> on this.\n>\n\nI think I have figured out how the patch ended up having &nbsp. Some\neditors add it when we use two spaces after a period (.) and I could\nsee that my Gmail client does that for such a pattern. Normally, I\nuse one of MSVC, vi, or NetBeans IDE to develop code/patch but\nsometimes I check the comments by pasting in Gmail client to find\ntypos or such and then fix them manually. I guess in this case I have\nused Gmail client to write this comment and then copy/pasted it for\nthe patch. I will be careful not to do this in the future.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 27 Jul 2020 08:57:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "On Sun, Jul 26, 2020 at 7:24 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> No, \"git diff --check\" doesn't help. I have tried pgindent but that\n> also doesn't help neither was I expecting it to help. I am still not\n> able to figure out how I goofed up this but will spend some more time\n> on this. In the meantime, I have updated the patch to improve the\n> comments as suggested by Robert. Do let me know if you want to\n> edit/add something more?\n\nI still don't agree with this as proposed.\n\n+ * For now, we don't allow parallel inserts of any form not even where the\n+ * leader can perform the insert. This restriction can be uplifted once\n+ * we allow the planner to generate parallel plans for inserts. We can\n\nIf I'm understanding this correctly, this logic is completely\nbackwards. We don't prohibit inserts here because we know the planner\ncan't generate them. We prohibit inserts here because, if the planner\nsomehow did generate them, it wouldn't be safe. You're saying that\nit's not allowed because we don't try to do it yet, but actually it's\nnot allowed because we want to make sure that we don't accidentally\ntry to do it. That's very different.\n\n+ * parallelize inserts unless they generate a new commandid (ex. inserts\n+ * into a table having foreign key column) or lock tuples (ex. statements\n+ * like Insert .. Select For Update).\n\nI understand the part about generating new command IDs, but not the\npart about locking tuples. Why would that be a problem? 
Can it better\nexplained here?\n\nExamples in comments are typically introduced with e.g., not ex.\n\n+ * We should be able to parallelize\n+ * the later case if we can ensure that no two parallel processes can ever\n+ * operate on the same page.\n\nI don't know whether this is talking about two processes operating on\nthe same page at the same time, or ever within a single query\nexecution. If it's the former, perhaps we need to explain why that's a\nconcern for parallel query but not otherwise; if it's the latter, that\nseems impossible to guarantee and imagining that we'll ever be able to\ndo so seems like wishful thinking.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 29 Jul 2020 09:47:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "On Wed, Jul 29, 2020 at 7:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> I still don't agree with this as proposed.\n>\n> + * For now, we don't allow parallel inserts of any form not even where the\n> + * leader can perform the insert. This restriction can be uplifted once\n> + * we allow the planner to generate parallel plans for inserts. We can\n>\n> If I'm understanding this correctly, this logic is completely\n> backwards. We don't prohibit inserts here because we know the planner\n> can't generate them. We prohibit inserts here because, if the planner\n> somehow did generate them, it wouldn't be safe. You're saying that\n> it's not allowed because we don't try to do it yet, but actually it's\n> not allowed because we want to make sure that we don't accidentally\n> try to do it. That's very different.\n>\n\nRight, so how about something like: \"To allow parallel inserts, we\nneed to ensure that they are safe to be performed in workers. We have\nthe infrastructure to allow parallel inserts in general except for the\ncase where inserts generate a new commandid (eg. inserts into a table\nhaving a foreign key column).\" We can extend this for tuple locking\nif required as per the below discussion. Kindly suggest if you prefer\na different wording here.\n\n>\n> + * We should be able to parallelize\n> + * the later case if we can ensure that no two parallel processes can ever\n> + * operate on the same page.\n>\n> I don't know whether this is talking about two processes operating on\n> the same page at the same time, or ever within a single query\n> execution. If it's the former, perhaps we need to explain why that's a\n> concern for parallel query but not otherwise;\n>\n\nI am talking about the former case and I know that as per current\ndesign it is not possible that two worker processes try to operate on\nthe same page but I was trying to be pessimistic so that we can ensure\nthat via some form of Assert. I don't know whether it is important to\nmention this case or not but for the sake of extra safety, I have\nmentioned it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 30 Jul 2020 12:02:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" 
}, { "msg_contents": "On Thu, Jul 30, 2020 at 12:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 29, 2020 at 7:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > I still don't agree with this as proposed.\n> >\n> > + * For now, we don't allow parallel inserts of any form not even where the\n> > + * leader can perform the insert. This restriction can be uplifted once\n> > + * we allow the planner to generate parallel plans for inserts. We can\n> >\n> > If I'm understanding this correctly, this logic is completely\n> > backwards. We don't prohibit inserts here because we know the planner\n> > can't generate them. We prohibit inserts here because, if the planner\n> > somehow did generate them, it wouldn't be safe. You're saying that\n> > it's not allowed because we don't try to do it yet, but actually it's\n> > not allowed because we want to make sure that we don't accidentally\n> > try to do it. That's very different.\n> >\n>\n> Right, so how about something like: \"To allow parallel inserts, we\n> need to ensure that they are safe to be performed in workers. We have\n> the infrastructure to allow parallel inserts in general except for the\n> case where inserts generate a new commandid (eg. inserts into a table\n> having a foreign key column).\" We can extend this for tuple locking\n> if required as per the below discussion. Kindly suggest if you prefer\n> a different wording here.\n>\n> >\n> > + * We should be able to parallelize\n> > + * the later case if we can ensure that no two parallel processes can ever\n> > + * operate on the same page.\n> >\n> > I don't know whether this is talking about two processes operating on\n> > the same page at the same time, or ever within a single query\n> > execution. If it's the former, perhaps we need to explain why that's a\n> > concern for parallel query but not otherwise;\n> >\n>\n> I am talking about the former case and I know that as per current\n> design it is not possible that two worker processes try to operate on\n> the same page but I was trying to be pessimistic so that we can ensure\n> that via some form of Assert.\n>\n\nI think the two worker processes can operate on the same page for a\nparallel index scan case but it won't be for same tuple. I am not able\nto think of any case where we should be worried about tuple locking\nfor Insert's case, so we can probably skip writing anything about it\nunless someone else can think of such a case.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 30 Jul 2020 18:42:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "On Tue, Jul 14, 2020 at 1:20 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Jul 13, 2020 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > I think we can do more than this by\n> > parallelizing the Insert part of this query as well as we have lifted\n> > group locking restrictions related to RelationExtension and Page lock\n> > in PG13. It would be really cool to do that unless we see any\n> > fundamental problems with it.\n>\n> +1, I think it will be cool to support for insert part here as well as\n> insert part in 'Create Table As Select..' as well.\n>\n\n+1 to parallelize inserts. 
Currently, ExecInsert() and CTAS use\ntable_tuple_insert(), if we parallelize these parts, each worker will\nbe inserting it's tuples(one tuple at a time) into the same data page,\nuntil space is available, if not a new data page can be obtained by\nany of the worker, others might start inserting into it. This way,\nwill there be lock contention on data pages?. Do we also need to make\ninserts to use table_multi_insert() (like the way \"COPY\" uses) instead\nof table_tuple_insert()?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Aug 2020 13:36:47 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" }, { "msg_contents": "On Tue, Aug 18, 2020 at 1:37 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Jul 14, 2020 at 1:20 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Jul 13, 2020 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > I think we can do more than this by\n> > > parallelizing the Insert part of this query as well as we have lifted\n> > > group locking restrictions related to RelationExtension and Page lock\n> > > in PG13. It would be really cool to do that unless we see any\n> > > fundamental problems with it.\n> >\n> > +1, I think it will be cool to support for insert part here as well as\n> > insert part in 'Create Table As Select..' as well.\n> >\n>\n> +1 to parallelize inserts. Currently, ExecInsert() and CTAS use\n> table_tuple_insert(), if we parallelize these parts, each worker will\n> be inserting it's tuples(one tuple at a time) into the same data page,\n> until space is available, if not a new data page can be obtained by\n> any of the worker, others might start inserting into it. This way,\n> will there be lock contention on data pages?\n>\n\nIt is possible but we need to check how much that is a bottleneck\nbecause that should not be a big part of the operation. And, it won't\nbe any worse than inserts via multiple backends. I think it is\nimportant to do that way, otherwise, some of the pages can remain\nhalf-empty.\n\nRight now, the plan for Insert ... Select is like\nInsert on <tbl_x>\n -> Seq Scan on <tbl_y>\n ....\n\nIn the above the scan could be index scan as well. What we want is:\nGather\n -> Insert on <tbl_x>\n -> Seq Scan on <tbl_y>\n ....\n\n>. Do we also need to make\n> inserts to use table_multi_insert() (like the way \"COPY\" uses) instead\n> of table_tuple_insert()?\n>\n\nI am not sure at this stage but if it turns out to be a big problem\nthen we might think of inventing some way to allow individual workers\nto operate on different pages. I think even without that we should be\nable to make a big gain as reads, filtering, etc can be done in\nparallel.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 18 Aug 2020 19:08:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" 
}, { "msg_contents": "On Thu, Jul 30, 2020 at 6:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 30, 2020 at 12:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jul 29, 2020 at 7:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > I still don't agree with this as proposed.\n> > >\n> > > + * For now, we don't allow parallel inserts of any form not even where the\n> > > + * leader can perform the insert. This restriction can be uplifted once\n> > > + * we allow the planner to generate parallel plans for inserts. We can\n> > >\n> > > If I'm understanding this correctly, this logic is completely\n> > > backwards. We don't prohibit inserts here because we know the planner\n> > > can't generate them. We prohibit inserts here because, if the planner\n> > > somehow did generate them, it wouldn't be safe. You're saying that\n> > > it's not allowed because we don't try to do it yet, but actually it's\n> > > not allowed because we want to make sure that we don't accidentally\n> > > try to do it. That's very different.\n> > >\n> >\n> > Right, so how about something like: \"To allow parallel inserts, we\n> > need to ensure that they are safe to be performed in workers. We have\n> > the infrastructure to allow parallel inserts in general except for the\n> > case where inserts generate a new commandid (eg. inserts into a table\n> > having a foreign key column).\"\n\nRobert, Dilip, do you see any problem if we change the comment on the\nabove lines? Feel free to suggest if you have something better in\nmind.\n\n> > We can extend this for tuple locking\n> > if required as per the below discussion. Kindly suggest if you prefer\n> > a different wording here.\n> >\n\nI feel we can leave this based on the reasoning provided below.\n\n> > >\n> > > + * We should be able to parallelize\n> > > + * the later case if we can ensure that no two parallel processes can ever\n> > > + * operate on the same page.\n> > >\n> > > I don't know whether this is talking about two processes operating on\n> > > the same page at the same time, or ever within a single query\n> > > execution. If it's the former, perhaps we need to explain why that's a\n> > > concern for parallel query but not otherwise;\n> > >\n> >\n> > I am talking about the former case and I know that as per current\n> > design it is not possible that two worker processes try to operate on\n> > the same page but I was trying to be pessimistic so that we can ensure\n> > that via some form of Assert.\n> >\n>\n> I think the two worker processes can operate on the same page for a\n> parallel index scan case but it won't be for same tuple. I am not able\n> to think of any case where we should be worried about tuple locking\n> for Insert's case, so we can probably skip writing anything about it\n> unless someone else can think of such a case.\n>\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 9 Sep 2020 10:20:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" 
}, { "msg_contents": "On Wed, Sep 9, 2020 at 10:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 30, 2020 at 6:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jul 30, 2020 at 12:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Jul 29, 2020 at 7:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > >\n> > > > I still don't agree with this as proposed.\n> > > >\n> > > > + * For now, we don't allow parallel inserts of any form not even where the\n> > > > + * leader can perform the insert. This restriction can be uplifted once\n> > > > + * we allow the planner to generate parallel plans for inserts. We can\n> > > >\n> > > > If I'm understanding this correctly, this logic is completely\n> > > > backwards. We don't prohibit inserts here because we know the planner\n> > > > can't generate them. We prohibit inserts here because, if the planner\n> > > > somehow did generate them, it wouldn't be safe. You're saying that\n> > > > it's not allowed because we don't try to do it yet, but actually it's\n> > > > not allowed because we want to make sure that we don't accidentally\n> > > > try to do it. That's very different.\n> > > >\n> > >\n> > > Right, so how about something like: \"To allow parallel inserts, we\n> > > need to ensure that they are safe to be performed in workers. We have\n> > > the infrastructure to allow parallel inserts in general except for the\n> > > case where inserts generate a new commandid (eg. inserts into a table\n> > > having a foreign key column).\"\n>\n> Robert, Dilip, do you see any problem if we change the comment on the\n> above lines? Feel free to suggest if you have something better in\n> mind.\n>\n\nHearing no further comments, I have pushed the changes as discussed above.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 18 Sep 2020 11:45:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO SELECT, Why Parallelism is not selected?" } ]
[ { "msg_contents": "Hi,\n\nI wrote a Postgres client and in it I allow the user to specify arbitrary\nStartupMessage parameters (Map<string,string>). This is convenient because\nthe user can for example set search_path without issuing a separate SET\nquery or encoding things into the \"options\" parameter. The protocol\ndocumentation also says that the latter is deprecated and what I'm doing\n(if I understand it right) is preferred.\n\nA fellow author of a driver for a different language reminds me that libpq\nexplicitly enumerates the supported parameters in the docs, and I checked\nthe code, and indeed there is a whitelist and others are rejected. So\ntechnically, he's correct: it's nowhere documented that sending e.g.\nsearch_path in StartupMessage parameters will work, and for that matter\nwhether everything that you can set using SET you can also send there.\n\nWhat is the proper behavior for a driver here:\n 1. Whitelist parameters like libpq does, or\n 2. Allow the user to send anything, with the understanding it'll work the\nsame as SET\n\nThanks!\nJaka\n\nHi,I wrote a Postgres client and in it I allow the user to specify arbitrary StartupMessage parameters (Map<string,string>). This is convenient because the user can for example set search_path without issuing a separate SET query or encoding things into the \"options\" parameter. The protocol documentation also says that the latter is deprecated and what I'm doing (if I understand it right) is preferred.A fellow author of a driver for a different language reminds me that libpq explicitly enumerates the supported parameters in the docs, and I checked the code, and indeed there is a whitelist and others are rejected. So technically, he's correct: it's nowhere documented that sending e.g. search_path in StartupMessage parameters will work, and for that matter whether everything that you can set using SET you can also send there.What is the proper behavior for a driver here: 1. Whitelist parameters like libpq does, or 2. Allow the user to send anything, with the understanding it'll work the same as SETThanks!Jaka", "msg_date": "Sat, 11 Jul 2020 20:14:27 -0400", "msg_from": "=?UTF-8?B?SmFrYSBKYW7EjWFy?= <jaka@kubje.org>", "msg_from_op": true, "msg_subject": "StartupMessage parameters - free-form or not?" }, { "msg_contents": "=?UTF-8?B?SmFrYSBKYW7EjWFy?= <jaka@kubje.org> writes:\n> I wrote a Postgres client and in it I allow the user to specify arbitrary\n> StartupMessage parameters (Map<string,string>). This is convenient because\n> the user can for example set search_path without issuing a separate SET\n> query or encoding things into the \"options\" parameter. The protocol\n> documentation also says that the latter is deprecated and what I'm doing\n> (if I understand it right) is preferred.\n\nSure.\n\n> A fellow author of a driver for a different language reminds me that libpq\n> explicitly enumerates the supported parameters in the docs, and I checked\n> the code, and indeed there is a whitelist and others are rejected.\n\nNot sure what you're looking at, but the issue for libpq is that the set\nof \"options\" that it accepts in connection strings is independent of the\nset of backend GUC names (and relatively few of them actually correspond\ndirectly to backend GUCs, either). 
I suppose we could make it pass\nthrough unrecognized options, but that would be an unmaintainable mess,\nbecause both sets of names are constantly evolving.\n\nIt's a bit of a hack that the backend accepts GUC names directly in\nstartup messages, but the set of \"fixed\" parameter names in that context\nis very short and has barely changed in decades, so we haven't had\nconflict problems.\n\n> technically, he's correct: it's nowhere documented that sending e.g.\n> search_path in StartupMessage parameters will work, and for that matter\n> whether everything that you can set using SET you can also send there.\n\nprotocol.sgml saith (under Message Formats)\n\n In addition to the above, other parameters may be listed. Parameter\n names beginning with _pq_. are reserved for use as protocol\n extensions, while others are treated as run-time parameters to be set\n at backend start time. Such settings will be applied during backend\n start (after parsing the command-line arguments if any) and will act\n as session defaults.\n\nAdmittedly, that doesn't directly define what it means by \"run-time\nparameter\", but what it means is any settable GUC.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Jul 2020 20:43:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: StartupMessage parameters - free-form or not?" }, { "msg_contents": "Excellent, thanks!\n\nOn Sat, Jul 11, 2020 at 8:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> =?UTF-8?B?SmFrYSBKYW7EjWFy?= <jaka@kubje.org> writes:\n> > I wrote a Postgres client and in it I allow the user to specify arbitrary\n> > StartupMessage parameters (Map<string,string>). This is convenient\n> because\n> > the user can for example set search_path without issuing a separate SET\n> > query or encoding things into the \"options\" parameter. The protocol\n> > documentation also says that the latter is deprecated and what I'm doing\n> > (if I understand it right) is preferred.\n>\n> Sure.\n>\n> > A fellow author of a driver for a different language reminds me that\n> libpq\n> > explicitly enumerates the supported parameters in the docs, and I checked\n> > the code, and indeed there is a whitelist and others are rejected.\n>\n> Not sure what you're looking at, but the issue for libpq is that the set\n> of \"options\" that it accepts in connection strings is independent of the\n> set of backend GUC names (and relatively few of them actually correspond\n> directly to backend GUCs, either). I suppose we could make it pass\n> through unrecognized options, but that would be an unmaintainable mess,\n> because both sets of names are constantly evolving.\n>\n> It's a bit of a hack that the backend accepts GUC names directly in\n> startup messages, but the set of \"fixed\" parameter names in that context\n> is very short and has barely changed in decades, so we haven't had\n> conflict problems.\n>\n> > technically, he's correct: it's nowhere documented that sending e.g.\n> > search_path in StartupMessage parameters will work, and for that matter\n> > whether everything that you can set using SET you can also send there.\n>\n> protocol.sgml saith (under Message Formats)\n>\n> In addition to the above, other parameters may be listed. Parameter\n> names beginning with _pq_. are reserved for use as protocol\n> extensions, while others are treated as run-time parameters to be set\n> at backend start time. 
Such settings will be applied during backend\n> start (after parsing the command-line arguments if any) and will act\n> as session defaults.\n>\n> Admittedly, that doesn't directly define what it means by \"run-time\n> parameter\", but what it means is any settable GUC.\n>\n> regards, tom lane\n>\n\nExcellent, thanks!On Sat, Jul 11, 2020 at 8:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:=?UTF-8?B?SmFrYSBKYW7EjWFy?= <jaka@kubje.org> writes:\n> I wrote a Postgres client and in it I allow the user to specify arbitrary\n> StartupMessage parameters (Map<string,string>). This is convenient because\n> the user can for example set search_path without issuing a separate SET\n> query or encoding things into the \"options\" parameter. The protocol\n> documentation also says that the latter is deprecated and what I'm doing\n> (if I understand it right) is preferred.\n\nSure.\n\n> A fellow author of a driver for a different language reminds me that libpq\n> explicitly enumerates the supported parameters in the docs, and I checked\n> the code, and indeed there is a whitelist and others are rejected.\n\nNot sure what you're looking at, but the issue for libpq is that the set\nof \"options\" that it accepts in connection strings is independent of the\nset of backend GUC names (and relatively few of them actually correspond\ndirectly to backend GUCs, either).  I suppose we could make it pass\nthrough unrecognized options, but that would be an unmaintainable mess,\nbecause both sets of names are constantly evolving.\n\nIt's a bit of a hack that the backend accepts GUC names directly in\nstartup messages, but the set of \"fixed\" parameter names in that context\nis very short and has barely changed in decades, so we haven't had\nconflict problems.\n\n> technically, he's correct: it's nowhere documented that sending e.g.\n> search_path in StartupMessage parameters will work, and for that matter\n> whether everything that you can set using SET you can also send there.\n\nprotocol.sgml saith (under Message Formats)\n\n    In addition to the above, other parameters may be listed. Parameter\n    names beginning with _pq_. are reserved for use as protocol\n    extensions, while others are treated as run-time parameters to be set\n    at backend start time. Such settings will be applied during backend\n    start (after parsing the command-line arguments if any) and will act\n    as session defaults.\n\nAdmittedly, that doesn't directly define what it means by \"run-time\nparameter\", but what it means is any settable GUC.\n\n                        regards, tom lane", "msg_date": "Sat, 11 Jul 2020 20:48:08 -0400", "msg_from": "=?UTF-8?B?SmFrYSBKYW7EjWFy?= <jaka@kubje.org>", "msg_from_op": true, "msg_subject": "Re: StartupMessage parameters - free-form or not?" } ]
[ { "msg_contents": "Hi,\n\nCurrently, getTableAttrs() always retrieves info about columns defaults and\ncheck constraints, while this will never be used if --data-only option if used.\nThis seems like a waste of resources, so here's a patch to skip those parts\nwhen the DDL won't be generated.", "msg_date": "Sun, 12 Jul 2020 07:48:50 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Avoid useless retrieval of defaults and check constraints in pg_dump\n -a" }, { "msg_contents": "On Sun, Jul 12, 2020 at 07:48:50AM +0200, Julien Rouhaud wrote:\n> Currently, getTableAttrs() always retrieves info about columns defaults and\n> check constraints, while this will never be used if --data-only option if used.\n> This seems like a waste of resources, so here's a patch to skip those parts\n> when the DDL won't be generated.\n\nNote that the speed of default and constraint handling has come up before:\nhttps://www.postgresql.org/message-id/flat/CAMkU%3D1xPqHP%3D7YPeChq6n1v_qd4WGf%2BZvtnR-b%2BgyzFqtJqMMQ%40mail.gmail.com\nhttps://www.postgresql.org/message-id/CAMkU=1x-e+maqefhM1yMeSiJ8J9Z+SJHgW7c9bqo3E3JMG4iJA@mail.gmail.com\n\nI'd pointed out that a significant fraction of our pg_upgrade time was in\npg_dump, due to having wide tables with many child tables, and \"default 0\" on\nevery column. (I've since dropped our defaults so this is no longer an issue\nhere).\n\nIt appears your patch would avoid doing unnecessary work in the --data-only\ncase, but it wouldn't help the pg_upgrade case.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 12 Jul 2020 09:29:21 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Avoid useless retrieval of defaults and check constraints in\n pg_dump -a" }, { "msg_contents": "On Sun, Jul 12, 2020 at 4:29 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sun, Jul 12, 2020 at 07:48:50AM +0200, Julien Rouhaud wrote:\n> > Currently, getTableAttrs() always retrieves info about columns defaults and\n> > check constraints, while this will never be used if --data-only option if used.\n> > This seems like a waste of resources, so here's a patch to skip those parts\n> > when the DDL won't be generated.\n>\n> Note that the speed of default and constraint handling has come up before:\n> https://www.postgresql.org/message-id/flat/CAMkU%3D1xPqHP%3D7YPeChq6n1v_qd4WGf%2BZvtnR-b%2BgyzFqtJqMMQ%40mail.gmail.com\n> https://www.postgresql.org/message-id/CAMkU=1x-e+maqefhM1yMeSiJ8J9Z+SJHgW7c9bqo3E3JMG4iJA@mail.gmail.com\n\nOh, I wasn't aware of that.\n\n> I'd pointed out that a significant fraction of our pg_upgrade time was in\n> pg_dump, due to having wide tables with many child tables, and \"default 0\" on\n> every column. (I've since dropped our defaults so this is no longer an issue\n> here).\n>\n> It appears your patch would avoid doing unnecessary work in the --data-only\n> case, but it wouldn't help the pg_upgrade case.\n\nIndeed. Making the schema part faster would probably require a bigger\nrefactoring. I'm wondering if we could introduce some facility to\ntemporarily deny any DDL change, so that the initial pg_dump -s done\nby pg_upgrade can be performed before shutting down the instance.\n\nNote that those extraneous queries were found while trying to dump\ndata out of a corrupted database. 
The issue wasn't an excessive\nruntime but corrupted catalog entries, so bypassing this code (since I\nwas only interested in the data anyway) was simpler than trying to\nrecover yet other corrupted rows.\n\n\n", "msg_date": "Tue, 14 Jul 2020 11:14:50 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid useless retrieval of defaults and check constraints in\n pg_dump -a" }, { "msg_contents": "> On 12 Jul 2020, at 07:48, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Currently, getTableAttrs() always retrieves info about columns defaults and\n> check constraints, while this will never be used if --data-only option if used.\n> This seems like a waste of resources, so here's a patch to skip those parts\n> when the DDL won't be generated.\n\nGiven how unintrusive this optimization is, +1 from me to go ahead with this\npatch. pg_dump tests pass. Personally I would've updated the nearby comments\nto reflect why the check for dataOnly is there, but MMV there. I'm moving this\npatch to Ready for Committer.\n\nI'm fairly sure there is a lot more we can do to improve the performance of\ndata-only dumps, but this nicely chips away at the problem.\n\ncheers ./daniel\n\n", "msg_date": "Thu, 10 Sep 2020 14:31:32 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Avoid useless retrieval of defaults and check constraints in\n pg_dump -a" }, { "msg_contents": "On Tue, Jul 14, 2020 at 11:14:50AM +0200, Julien Rouhaud wrote:\n> Note that those extraneous queries were found while trying to dump\n> data out of a corrupted database. The issue wasn't an excessive\n> runtime but corrupted catalog entries, so bypassing this code (since I\n> was only interested in the data anyway) was simpler than trying to\n> recover yet other corrupted rows.\n\nYeah, I don't see actually why this argument can prevent us from doing\na micro optimization if it proves to work correctly.\n--\nMichael", "msg_date": "Tue, 15 Sep 2020 11:33:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid useless retrieval of defaults and check constraints in\n pg_dump -a" }, { "msg_contents": "On Thu, Sep 10, 2020 at 02:31:32PM +0200, Daniel Gustafsson wrote:\n> Given how unintrusive this optimization is, +1 from me to go ahead with this\n> patch. pg_dump tests pass. Personally I would've updated the nearby comments\n> to reflect why the check for dataOnly is there, but MMV there. I'm moving this\n> patch to Ready for Committer.\n\nWe need two comments here. 
I would suggest something like:\n\"Skip default/check for a data-only dump, as this is only needed for\ndumps of the table schema.\"\n\nBetter wording is of course welcome.\n\n> I'm fairly sure there is a lot more we can do to improve the performance of\n> data-only dumps, but this nicely chips away at the problem.\n\nI was looking at that, and wondered about cases like the following,\nartistic, thing:\nCREATE FUNCTION check_data_zz() RETURNS boolean\n LANGUAGE sql STABLE STRICT\n AS $$select count(a) > 0 from zz$$;\nCREATE TABLE public.yy (\n a integer,\n CONSTRAINT yy_check CHECK (check_data_zz())\n);\nCREATE TABLE zz (a integer);\nINSERT INTO zz VALUES (1);\nINSERT INTO yy VALUES (1);\n\nEven on HEAD, this causes the data load to fail because yy's data is\ninserted before zz, so keeping track of the CHECK dependency could\nhave made sense for --data-only if we were to make a better work at\ndetecting the dependency between both tables and made sure that zz\ndata needs to appear before yy, but it is not like this would happen\neasily in pg_dump, and we document it this way (see the warning about\ndump/reload in ddl.sgml for check constraints). In short, I think\nthat this patch looks like a good thing to have.\n--\nMichael", "msg_date": "Tue, 15 Sep 2020 11:48:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid useless retrieval of defaults and check constraints in\n pg_dump -a" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Jul 14, 2020 at 11:14:50AM +0200, Julien Rouhaud wrote:\n>> Note that those extraneous queries were found while trying to dump\n>> data out of a corrupted database. The issue wasn't an excessive\n>> runtime but corrupted catalog entries, so bypassing this code (since I\n>> was only interested in the data anyway) was simpler than trying to\n>> recover yet other corrupted rows.\n\n> Yeah, I don't see actually why this argument can prevent us from doing\n> a micro optimization if it proves to work correctly.\n\nThe main thing I'm wondering about is whether not fetching these objects\ncould lead to failing to detect an important dependency chain. IIRC,\npg_dump simply ignores pg_depend entries that mention objects it has not\nloaded, so there is a possible mechanism for that. However, it's hard to\nsee how a --data-only dump could end up choosing an invalid dump order on\nthat basis. It doesn't seem like safe load orders for the table data\nobjects could depend on what is referenced by defaults or CHECK\nconstraints.\n\nBut ... I've only spent a few minutes thinking about it, so maybe\nI'm missing something.\n\n(Note that we disallow sub-queries in CHECK constraints, and also\ndisclaim responsibility for what happens if you cheat by hiding\nthe subquery in a function. So while it's certainly possible to\nbuild CHECK constraints that only work if table X is loaded before\ntable Y, pg_dump already doesn't guarantee that'll work, --data-only\nor otherwise.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Sep 2020 22:56:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoid useless retrieval of defaults and check constraints in\n pg_dump -a" }, { "msg_contents": "On Mon, Sep 14, 2020 at 10:56:01PM -0400, Tom Lane wrote:\n> (Note that we disallow sub-queries in CHECK constraints, and also\n> disclaim responsibility for what happens if you cheat by hiding\n> the subquery in a function. 
So while it's certainly possible to\n> build CHECK constraints that only work if table X is loaded before\n> table Y, pg_dump already doesn't guarantee that'll work, --data-only\n> or otherwise.)\n\nYep, exactly what I was thinking upthread by cheating with a schema\nhaving cross-table references in a check constraint.\n--\nMichael", "msg_date": "Tue, 15 Sep 2020 12:08:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid useless retrieval of defaults and check constraints in\n pg_dump -a" }, { "msg_contents": "On Tue, Sep 15, 2020 at 4:48 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Sep 10, 2020 at 02:31:32PM +0200, Daniel Gustafsson wrote:\n> > Given how unintrusive this optimization is, +1 from me to go ahead with this\n> > patch. pg_dump tests pass. Personally I would've updated the nearby comments\n> > to reflect why the check for dataOnly is there, but MMV there. I'm moving this\n> > patch to Ready for Committer.\n>\n> We need two comments here. I would suggest something like:\n> \"Skip default/check for a data-only dump, as this is only needed for\n> dumps of the table schema.\"\n>\n> Better wording is of course welcome.\n\nFWIW I'm fine with those news comments!\n\n\n", "msg_date": "Tue, 15 Sep 2020 10:46:56 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid useless retrieval of defaults and check constraints in\n pg_dump -a" }, { "msg_contents": "On Tue, Sep 15, 2020 at 10:46:56AM +0200, Julien Rouhaud wrote:\n> FWIW I'm fine with those news comments!\n\nOkay, I got again on this one today and finished by committing the\npatch as of 5423853.\n--\nMichael", "msg_date": "Wed, 16 Sep 2020 22:06:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid useless retrieval of defaults and check constraints in\n pg_dump -a" }, { "msg_contents": "On Wed, Sep 16, 2020 at 3:06 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Sep 15, 2020 at 10:46:56AM +0200, Julien Rouhaud wrote:\n> > FWIW I'm fine with those news comments!\n>\n> Okay, I got again on this one today and finished by committing the\n> patch as of 5423853.\n\nThanks Michael!\n\n\n", "msg_date": "Wed, 16 Sep 2020 15:34:12 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid useless retrieval of defaults and check constraints in\n pg_dump -a" } ]
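The optimization settled on in this thread boils down to guarding the two catalog lookups in pg_dump's getTableAttrs() with the data-only flag, as suggested upthread. The fragment below is only a rough sketch of that shape, not the committed patch: it assumes getTableAttrs()'s surrounding variables (fout, tbinfo, hasdefaults) and the dopt->dataOnly flag referred to in the review, and it elides the actual catalog queries.

/*
 * Illustrative sketch only (not the committed change verbatim).
 * Defaults and CHECK constraints are needed only when the table schema
 * is dumped, so a data-only run can skip both lookups entirely.
 */
DumpOptions *dopt = fout->dopt;		/* assumed accessor, as used elsewhere in pg_dump */

/* Skip default retrieval for a data-only dump. */
if (hasdefaults && !dopt->dataOnly)
{
	/* ... query pg_attrdef and attach the results to tbinfo->attrdefs ... */
}

/* Skip CHECK constraint retrieval for a data-only dump. */
if (tbinfo->ncheck > 0 && !dopt->dataOnly)
{
	/* ... query pg_constraint and fill tbinfo->checkexprs ... */
}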
[ { "msg_contents": "commit b36805f3c54fe0e50e58bb9e6dad66daca46fbf6\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\nDate: Sun Jun 28 21:35:51 2015 +0300\n\n...\n\n|@@ -175,22 +175,31 @@ libpqProcessFileList(void)\n| pg_fatal(\"unexpected result set while fetching file list\\n\");\n| \n| /* Read result to local variables */\n| for (i = 0; i < PQntuples(res); i++)\n| {\n| char *path = PQgetvalue(res, i, 0);\n| int filesize = atoi(PQgetvalue(res, i, 1));\n| bool isdir = (strcmp(PQgetvalue(res, i, 2), \"t\") == 0);\n| char *link_target = PQgetvalue(res, i, 3);\n| file_type_t type;\n| \n|+ if (PQgetisnull(res, 0, 1))\n...\n|+ continue;\n\nEvery other access to \"res\" in this loop is to res(i), which I believe is what\nwas intended here, too. Currently, it will dumbly loop but skip *every* row if\nthe 2nd column (1: size) of the first row (0) is null.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 13 Jul 2020 01:10:10 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: Don't choke on files that are removed while pg_rewind runs." }, { "msg_contents": "> On 13 Jul 2020, at 08:10, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> Every other access to \"res\" in this loop is to res(i), which I believe is what\n> was intended here, too. Currently, it will dumbly loop but skip *every* row if\n> the 2nd column (1: size) of the first row (0) is null.\n\nYeah, I agree with that, seems like the call should've been PQgetisnull(res, i, 1);\nto match the loop.\n\ncheers ./daniel\n\n", "msg_date": "Mon, 13 Jul 2020 08:34:06 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Don't choke on files that are removed while pg_rewind runs." }, { "msg_contents": "On Mon, 13 Jul 2020 at 15:34, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 13 Jul 2020, at 08:10, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> > Every other access to \"res\" in this loop is to res(i), which I believe is what\n> > was intended here, too. Currently, it will dumbly loop but skip *every* row if\n> > the 2nd column (1: size) of the first row (0) is null.\n>\n> Yeah, I agree with that, seems like the call should've been PQgetisnull(res, i, 1);\n> to match the loop.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 13 Jul 2020 15:59:56 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Don't choke on files that are removed while pg_rewind runs." }, { "msg_contents": "On Mon, Jul 13, 2020 at 03:59:56PM +0900, Masahiko Sawada wrote:\n> On Mon, 13 Jul 2020 at 15:34, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> Yeah, I agree with that, seems like the call should've been PQgetisnull(res, i, 1);\n>> to match the loop.\n> \n> +1\n\nGood catch, Justin. There is a second thing here. The second column\nmatches with the file size, so if its value is NULL then atol() would\njust crash first. I think that it would be more simple to first check\nif the file size is NULL (isdir and link_target would be also NULL,\nbut just checking for the file size is fine), and then assign the\nresult values, like in the attached. Any thoughts?\n--\nMichael", "msg_date": "Mon, 13 Jul 2020 16:56:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Don't choke on files that are removed while pg_rewind runs." 
}, { "msg_contents": "> On 13 Jul 2020, at 09:56, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Jul 13, 2020 at 03:59:56PM +0900, Masahiko Sawada wrote:\n>> On Mon, 13 Jul 2020 at 15:34, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> Yeah, I agree with that, seems like the call should've been PQgetisnull(res, i, 1);\n>>> to match the loop.\n>> \n>> +1\n> \n> Good catch, Justin. There is a second thing here. The second column\n> matches with the file size, so if its value is NULL then atol() would\n> just crash first.\n\nDoes it? PGgetvalue will return an empty string and not NULL, so atol will\nconvert that to zero wont it? It can be argued whether zero is the right size\nfor a missing file, but it shouldn't crash at least.\n\n> I think that it would be more simple to first check\n> if the file size is NULL (isdir and link_target would be also NULL,\n> but just checking for the file size is fine), and then assign the\n> result values, like in the attached. Any thoughts?\n\nIt does convey the meaning of code to do it after, since the data isn't useful\nin case the filesize is zero, but I don't have strong feelings for/against.\nQuestion is, rather than discard rows pulled from the server, should the query\nbe tweaked to not include it in the first place instead?\n\ncheers ./daniel\n\n", "msg_date": "Mon, 13 Jul 2020 10:12:54 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Don't choke on files that are removed while pg_rewind runs." }, { "msg_contents": "On Mon, Jul 13, 2020 at 10:12:54AM +0200, Daniel Gustafsson wrote:\n> Does it? PGgetvalue will return an empty string and not NULL, so atol will\n> convert that to zero wont it? It can be argued whether zero is the right size\n> for a missing file, but it shouldn't crash at least.\n\nNay, you are right. Thanks.\n\n> It does convey the meaning of code to do it after, since the data isn't useful\n> in case the filesize is zero, but I don't have strong feelings for/against.\n> Question is, rather than discard rows pulled from the server, should the query\n> be tweaked to not include it in the first place instead?\n\nThat sounds like a good idea with an extra qual in the first part of\nthe inner CTE, if coupled with a check to make sure that we never\nget a NULL result. Now there is IMO an argument to not complicate\nmore this query as it is not like a lot of tuples would get filtered\nout anyway because of a NULL set of values? I don't have strong\nfeelings for one approach or the other, but if I were to choose, I\nwould just let the code as-is, without the change in the CTE.\n--\nMichael", "msg_date": "Mon, 13 Jul 2020 21:18:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Don't choke on files that are removed while pg_rewind runs." }, { "msg_contents": "> On 13 Jul 2020, at 14:18, Michael Paquier <michael@paquier.xyz> wrote:\n\n> That sounds like a good idea with an extra qual in the first part of\n> the inner CTE, if coupled with a check to make sure that we never\n> get a NULL result. Now there is IMO an argument to not complicate\n> more this query as it is not like a lot of tuples would get filtered\n> out anyway because of a NULL set of values? 
I don't have strong\n> feelings for one approach or the other, but if I were to choose, I\n> would just let the code as-is, without the change in the CTE.\n\nI don't have strong opinions either, both approaches will work, so feel free to\ngo ahead with the proposed change.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 14 Jul 2020 12:18:41 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Don't choke on files that are removed while pg_rewind runs." }, { "msg_contents": "On Tue, Jul 14, 2020 at 12:18:41PM +0200, Daniel Gustafsson wrote:\n> I don't have strong opinions either, both approaches will work, so feel free to\n> go ahead with the proposed change.\n\nThanks. I have just gone with the solution to not change the query,\nand applied it down to 9.5. Please note that I also maintain an older\nversion for 9.4 and 9.3, that has the same problem:\nhttps://github.com/vmware/pg_rewind\n\nSo I'll go fix it.\n--\nMichael", "msg_date": "Wed, 15 Jul 2020 15:25:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Don't choke on files that are removed while pg_rewind runs." } ]
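Pulling the agreed fix together: the call has to test the row currently being processed rather than row 0. A self-contained sketch of the corrected loop shape, using only stock libpq calls and the variable names from the snippet quoted at the top of the thread (the real pg_rewind code hands each entry to its file map instead of discarding it), could look like this:

#include <stdlib.h>
#include <string.h>
#include <stdbool.h>
#include <libpq-fe.h>

/*
 * Sketch: walk the file-list result, skipping entries whose size column is
 * NULL because the file vanished on the source while the query was running.
 */
static void
process_file_list(PGresult *res)
{
	for (int i = 0; i < PQntuples(res); i++)
	{
		char	   *path = PQgetvalue(res, i, 0);
		int			filesize;
		bool		isdir;
		char	   *link_target;

		/* Test the current row (i), not the first row (0). */
		if (PQgetisnull(res, i, 1))
			continue;

		filesize = atoi(PQgetvalue(res, i, 1));
		isdir = (strcmp(PQgetvalue(res, i, 2), "t") == 0);
		link_target = PQgetvalue(res, i, 3);

		/* ... hand path/filesize/isdir/link_target to the file map ... */
		(void) path;
		(void) filesize;
		(void) isdir;
		(void) link_target;
	}
}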
[ { "msg_contents": "Hi,\n\nSome comments in tableam.h and heapam.c contain three old function names \nalthough these have been renamed by this commit \n73b8c3bd2889fed986044e15aefd0911f96ccdd3.\n\nOld: table_insert, table_fetch_row_version, table_get_latest_tid.\n\nNew: table_tuple_insert, table_tuple_fetch_row_version, \ntable_tuple_get_latest_tid.\n\nI think these are editing errors. PG 12 also has the same errors.\n\n\nBest regards,", "msg_date": "Mon, 13 Jul 2020 14:25:39 +0200", "msg_from": "Hironobu SUZUKI <hironobu@interdb.jp>", "msg_from_op": true, "msg_subject": "Editing errors in the comments of tableam.h and heapam.c" }, { "msg_contents": "On Mon, Jul 13, 2020 at 02:25:39PM +0200, Hironobu SUZUKI wrote:\n> Some comments in tableam.h and heapam.c contain three old function names\n> although these have been renamed by this commit\n> 73b8c3bd2889fed986044e15aefd0911f96ccdd3.\n> \n> Old: table_insert, table_fetch_row_version, table_get_latest_tid.\n> \n> New: table_tuple_insert, table_tuple_fetch_row_version,\n> table_tuple_get_latest_tid.\n\nThanks. That looks right.\n\n> I think these are editing errors. PG 12 also has the same errors.\n\nYeah. We also recommend to look at tableam.h in the docs, so while\nusually I just bother fixing comments on HEAD, it would be better to\nback-patch this one.\n--\nMichael", "msg_date": "Mon, 13 Jul 2020 21:52:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Editing errors in the comments of tableam.h and heapam.c" }, { "msg_contents": "On Mon, Jul 13, 2020 at 09:52:12PM +0900, Michael Paquier wrote:\n> Yeah. We also recommend to look at tableam.h in the docs, so while\n> usually I just bother fixing comments on HEAD, it would be better to\n> back-patch this one.\n\nCommitted and back-patched down to 12. Thanks, Suzuki-san.\n--\nMichael", "msg_date": "Tue, 14 Jul 2020 13:21:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Editing errors in the comments of tableam.h and heapam.c" } ]
[ { "msg_contents": "We are fast approaching mid-July, and with it Mid-commitfest. As has been the\ncase with most commitfests for a while, this CF had a record number of entries\nwith 246 patches. As of this writing, the status breakdown looks like this:\n\n Needs review: 139\n Waiting on Author: 34\n\n Ready for Committer: 8\n Committed: 43\n Returned with Feedback: 11\n Rejected: 1\n Withdrawn: 9\n\n Moved to next CF: 1\n\nwhich means we have reached closure on ~26% of the patches. Let's collectively\ntry to get that number closer to 50% to really jumpstart v14!\n\nAll patches which didn't apply when the CF started were notified on their\nrespective threads, if no new version is posted by the last week of the CF they\nwill be considered stalled and returned with feedback.\n\ncheers ./daniel\n\n", "msg_date": "Mon, 13 Jul 2020 16:07:32 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Commitfest 2020-07 almost halfway" } ]
[ { "msg_contents": "Hi hackers,\n\nPFA a patch that fixes up the identification for 4 header files.\n\nI did a little archaeology trying to find plausible reasons for why we\ncommitted the wrong identification in the first place, and here's what I\ncame up with:\n\njsonfuncs.h was created in ce0425b162d0a to house backend-only json\nfunction declarations previously in jsonapi.h, and the identification\nwas a copy-pasta from jsonapi.h (then in src/include/utils).\n\njsonapi.h was wholesale moved to common in beb4699091e9f without\nchanging identification.\n\npartdesc.h was created in 1bb5e78218107 but I can't find a good excuse\nwhy we made a mistake then, except for (maybe) the prototype for\nRelationBuildPartitionDesc() was moved from partcache.h and so we may\nhave also taken its identification line, although that sounds arbitrary.\n\nllvmjit_emit.h was created with the wrong identification, probably due\nto a personal template...\n\nCheers,\nJesse", "msg_date": "Mon, 13 Jul 2020 09:31:58 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": true, "msg_subject": "Fix header identification" }, { "msg_contents": "On Mon, Jul 13, 2020 at 09:31:58AM -0700, Jesse Zhang wrote:\n> PFA a patch that fixes up the identification for 4 header files.\n\nThanks, Jesse. Applied.\n--\nMichaelx", "msg_date": "Tue, 14 Jul 2020 13:41:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix header identification" } ]
[ { "msg_contents": "Hi,\n\nThe PostgreSQL 13 Release Management Team is pleased to announce the\nrelease date of PostgreSQL 13 Beta 3 is set to 2020-08-13, which is the\nsame day as the cumulative update release[1]. Please be sure to have\nyour patches committed for PostgreSQL 13 no latter than Sunday,\n2020-08-09 AOE[2].\n\nWe thank everyone for your continued testing and resolution of open\nitems[3] on the list.\n\nThanks!\n\nThe PostgreSQL 13 RMT\n\n[1] https://www.postgresql.org/developer/roadmap/\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n[3] https://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items", "msg_date": "Mon, 13 Jul 2020 12:53:48 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "PostgreSQL 13 Beta 3 Release Date" }, { "msg_contents": "Hi,\n\nOn 7/13/20 12:53 PM, Jonathan S. Katz wrote:\n> Hi,\n> \n> The PostgreSQL 13 Release Management Team is pleased to announce the\n> release date of PostgreSQL 13 Beta 3 is set to 2020-08-13, which is the\n> same day as the cumulative update release[1]. Please be sure to have\n> your patches committed for PostgreSQL 13 no latter than Sunday,\n> 2020-08-09 AOE[2].\n\nJust a reminder that 2020-08-09 AOE[1] is nigh -- if you are working on\nany open items for this release, please have them committed by then.\n\nThis also coincides with the scheduled August 2020 update release, so if\nyou are planning to get in bug fixes for said release, please also be\nsure to have them committed by then :)\n\nThanks,\n\nJonathan\n\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth", "msg_date": "Thu, 6 Aug 2020 17:04:35 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 13 Beta 3 Release Date (+ Update Release)" } ]
[ { "msg_contents": "Hi Hackers,\n\nThe idea of achieving Postgres scaling via sharding using postgres_fdw + \npartitioning got a lot of attention last years. Many optimisations have \nbeen done in this direction: partition pruning, partition-wise \naggregates / joins, postgres_fdw push-down of LIMIT, GROUP BY, etc. In \nmany cases they work really nice.\n\nHowever, still there is a vast case, where postgres_fdw + native \npartitioning doesn't perform so good — Multi-tenant architecture. From \nthe database perspective it is presented well in this Citus tutorial \n[1]. The main idea is that there is a number of tables and all of them \nare sharded / partitioned by the same key, e.g. company_id. That way, if \nevery company mostly works within its own data, then every query may be \neffectively executed on a single node without a need for an internode \ncommunication.\n\nI built a simple two node multi-tenant schema for tests, which can be \neasily set up with attached scripts. It creates three tables (companies, \nusers, documents) distributed over two nodes. Everything can be found in \nthis Gist [2] as well.\n\nSome real-life test queries show, that all single-node queries aren't \npushed-down to the required node. For example:\n\nSELECT\n *\nFROM\n documents\n INNER JOIN users ON documents.user_id = users.id\nWHERE\n documents.company_id = 5\n AND users.company_id = 5;\n\nexecuted as following\n\n QUERY PLAN\n-------------------------------------------------------\n Nested Loop\n Join Filter: (documents.user_id = users.id)\n -> Foreign Scan on users_node2 users\n -> Materialize\n -> Foreign Scan on documents_node2 documents\n\ni.e. it uses two foreign scans and does the final join locally. However, \nonce I specify target partitions explicitly, then the entire query is \npushed down to the foreign node:\n\n QUERY PLAN\n---------------------------------------------------------\n Foreign Scan\n Relations: (documents_node2) INNER JOIN (users_node2)\n\nExecution time is dropped significantly as well — by more than 3 times \neven for this small test database. Situation for simple queries with \naggregates or joins and aggregates followed by the sharding key filter \nis the same. Something similar was briefly discussed in this thread [3].\n\nIIUC, it means that push-down of queries through the postgres_fdw works \nperfectly well, the problem is with partition-wise operation detection \nat the planning time. Currently, partition-wise aggregate routines, \ne.g., looks for a GROUP BY and checks whether sharding key exists there \nor not. After that PARTITIONWISE_AGGREGATE_* flag is set. However, it \ndoesn't look for a content of WHERE clause, so frankly speaking it isn't \na problem, this functionality is not yet implemented.\n\nActually, sometimes I was able to push down queries with aggregate \nsimply by adding an additional GROUP BY with sharding key, like this:\n\nSELECT\n count(*)\nFROM\n documents\nWHERE\n company_id = 5\nGROUP BY company_id;\n\nwhere this GROUP BY obviously doesn't change a results, it just allows \nplanner to choose from more possible paths.\n\nAlso, I have tried to hack it a bit and forcedly set \nPARTITIONWISE_AGGREGATE_FULL for this particular query. Everything \nexecuted fine and returned result was correct, which means that all \nunderlying machinery is ready.\n\nThat way, I propose a change to the planner, which will check whether \npartitioning key exist in the WHERE clause and will set \nPARTITIONWISE_AGGREGATE_* flags if appropriate. 
The whole logic may look \nlike:\n\n1. If the only one condition by partitioning key is used (like above), \nthen it is PARTITIONWISE_AGGREGATE_FULL.\n2. If several conditions are used, then it should be \nPARTITIONWISE_AGGREGATE_PARTIAL.\n\nI'm aware that WHERE clause may be extremely complex in general, but we \ncould narrow this possible optimisation to the same restrictions as \npostgres_fdw push-down \"only WHERE clauses using built-in operators and \nfunctions will be considered for execution on the remote server\".\n\nAlthough it seems that it will be easier to start with aggregates, \nprobably we should initially plan a more general solution? For example, \ncheck that all involved tables are filtered by partitioning key and push \ndown the entire query if all of them target the same foreign server.\n\nAny thoughts?\n\n\n[1] \nhttps://docs.citusdata.com/en/v9.3/get_started/tutorial_multi_tenant.html\n[2] https://gist.github.com/ololobus/8fba33241f68be2e3765d27bf04882a3\n[3] \nhttps://www.postgresql.org/message-id/flat/CAFT%2BaqL1Tt0qfYqjHH%2BshwPoW8qdFjpJ8vBR5ABoXJDUcHyN1w%40mail.gmail.com\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company", "msg_date": "Mon, 13 Jul 2020 22:18:00 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Partitioning and postgres_fdw optimisations for multi-tenancy" }, { "msg_contents": "On Tue, Jul 14, 2020 at 12:48 AM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n>\n> Hi Hackers,\n>\n> The idea of achieving Postgres scaling via sharding using postgres_fdw +\n> partitioning got a lot of attention last years. Many optimisations have\n> been done in this direction: partition pruning, partition-wise\n> aggregates / joins, postgres_fdw push-down of LIMIT, GROUP BY, etc. In\n> many cases they work really nice.\n>\n> However, still there is a vast case, where postgres_fdw + native\n> partitioning doesn't perform so good — Multi-tenant architecture. From\n> the database perspective it is presented well in this Citus tutorial\n> [1]. The main idea is that there is a number of tables and all of them\n> are sharded / partitioned by the same key, e.g. company_id. That way, if\n> every company mostly works within its own data, then every query may be\n> effectively executed on a single node without a need for an internode\n> communication.\n>\n> I built a simple two node multi-tenant schema for tests, which can be\n> easily set up with attached scripts. It creates three tables (companies,\n> users, documents) distributed over two nodes. Everything can be found in\n> this Gist [2] as well.\n>\n> Some real-life test queries show, that all single-node queries aren't\n> pushed-down to the required node. For example:\n>\n> SELECT\n> *\n> FROM\n> documents\n> INNER JOIN users ON documents.user_id = users.id\n> WHERE\n> documents.company_id = 5\n> AND users.company_id = 5;\n\nThere are a couple of things happening here\n1. the clauses on company_id in WHERE clause are causing partition\npruning. Partition-wise join is disabled with partition pruning before\nPG13. In PG13 we have added advanced partition matching algorithm\nwhich will allow partition-wise join with partition pruning.\n2. the query has no equality condition on the partition key of the\ntables being joined. Partitionwise join is possible only when there's\nan equality condition on the partition keys (company_id) of the\njoining tables. 
PostgreSQL's optimizer is not smart enough to convert\nthe equality conditions in WHERE clause into equality conditions on\npartition keys. So having those conditions just in WHERE clause does\nnot help. Instead please add equality conditions on partition keys in\nJOIN .. ON clause or WHERE clause (only for INNER join).\n\n>\n> executed as following\n>\n> QUERY PLAN\n> -------------------------------------------------------\n> Nested Loop\n> Join Filter: (documents.user_id = users.id)\n> -> Foreign Scan on users_node2 users\n> -> Materialize\n> -> Foreign Scan on documents_node2 documents\n>\n> i.e. it uses two foreign scans and does the final join locally. However,\n> once I specify target partitions explicitly, then the entire query is\n> pushed down to the foreign node:\n>\n> QUERY PLAN\n> ---------------------------------------------------------\n> Foreign Scan\n> Relations: (documents_node2) INNER JOIN (users_node2)\n>\n> Execution time is dropped significantly as well — by more than 3 times\n> even for this small test database. Situation for simple queries with\n> aggregates or joins and aggregates followed by the sharding key filter\n> is the same. Something similar was briefly discussed in this thread [3].\n>\n> IIUC, it means that push-down of queries through the postgres_fdw works\n> perfectly well, the problem is with partition-wise operation detection\n> at the planning time. Currently, partition-wise aggregate routines,\n> e.g., looks for a GROUP BY and checks whether sharding key exists there\n> or not. After that PARTITIONWISE_AGGREGATE_* flag is set. However, it\n> doesn't look for a content of WHERE clause, so frankly speaking it isn't\n> a problem, this functionality is not yet implemented.\n>\n> Actually, sometimes I was able to push down queries with aggregate\n> simply by adding an additional GROUP BY with sharding key, like this:\n>\n> SELECT\n> count(*)\n> FROM\n> documents\n> WHERE\n> company_id = 5\n> GROUP BY company_id;\n\nThis gets pushed down since GROUP BY clause is on the partition key.\n\n>\n> where this GROUP BY obviously doesn't change a results, it just allows\n> planner to choose from more possible paths.\n>\n> Also, I have tried to hack it a bit and forcedly set\n> PARTITIONWISE_AGGREGATE_FULL for this particular query. Everything\n> executed fine and returned result was correct, which means that all\n> underlying machinery is ready.\n>\n> That way, I propose a change to the planner, which will check whether\n> partitioning key exist in the WHERE clause and will set\n> PARTITIONWISE_AGGREGATE_* flags if appropriate. The whole logic may look\n> like:\n>\n> 1. If the only one condition by partitioning key is used (like above),\n> then it is PARTITIONWISE_AGGREGATE_FULL.\n> 2. If several conditions are used, then it should be\n> PARTITIONWISE_AGGREGATE_PARTIAL.\n>\n> I'm aware that WHERE clause may be extremely complex in general, but we\n> could narrow this possible optimisation to the same restrictions as\n> postgres_fdw push-down \"only WHERE clauses using built-in operators and\n> functions will be considered for execution on the remote server\".\n>\n> Although it seems that it will be easier to start with aggregates,\n> probably we should initially plan a more general solution? For example,\n> check that all involved tables are filtered by partitioning key and push\n> down the entire query if all of them target the same foreign server.\n>\n> Any thoughts?\n\nI think adding just equality conditions on the partition key will be\nenough. 
No need for any code change.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 14 Jul 2020 17:57:49 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partitioning and postgres_fdw optimisations for multi-tenancy" }, { "msg_contents": "On 2020-07-14 15:27, Ashutosh Bapat wrote:\n> On Tue, Jul 14, 2020 at 12:48 AM Alexey Kondratov\n> <a.kondratov@postgrespro.ru> wrote:\n>> I built a simple two node multi-tenant schema for tests, which can be\n>> easily set up with attached scripts. It creates three tables \n>> (companies,\n>> users, documents) distributed over two nodes. Everything can be found \n>> in\n>> this Gist [2] as well.\n>> \n>> Some real-life test queries show, that all single-node queries aren't\n>> pushed-down to the required node. For example:\n>> \n>> SELECT\n>> *\n>> FROM\n>> documents\n>> INNER JOIN users ON documents.user_id = users.id\n>> WHERE\n>> documents.company_id = 5\n>> AND users.company_id = 5;\n> \n> There are a couple of things happening here\n> 1. the clauses on company_id in WHERE clause are causing partition\n> pruning. Partition-wise join is disabled with partition pruning before\n> PG13. In PG13 we have added advanced partition matching algorithm\n> which will allow partition-wise join with partition pruning.\n> \n\nI forgot to mention that I use a recent master (991c444e7a) for tests \nwith\n\nenable_partitionwise_join = 'on'\nenable_partitionwise_aggregate = 'on'\n\nof course. I've also tried postgres_fdw.use_remote_estimate = true \nfollowed by ANALYSE on both nodes (it is still used in setup.sh script).\n\nBTW, can you, please, share a link to commit / thread about allowing \npartition-wise join and partition pruning to work together in PG13?\n\n> \n> 2. the query has no equality condition on the partition key of the\n> tables being joined. Partitionwise join is possible only when there's\n> an equality condition on the partition keys (company_id) of the\n> joining tables. PostgreSQL's optimizer is not smart enough to convert\n> the equality conditions in WHERE clause into equality conditions on\n> partition keys. So having those conditions just in WHERE clause does\n> not help. Instead please add equality conditions on partition keys in\n> JOIN .. ON clause or WHERE clause (only for INNER join).\n> \n\nWith adding documents.company_id = users.company_id\n\nSELECT *\nFROM\n documents\n INNER JOIN users ON (documents.company_id = users.company_id\n AND documents.user_id = users.id)\nWHERE\n documents.company_id = 5\n AND users.company_id = 5;\n\nquery plan remains the same.\n\n>> \n>> executed as following\n>> \n>> QUERY PLAN\n>> -------------------------------------------------------\n>> Nested Loop\n>> Join Filter: (documents.user_id = users.id)\n>> -> Foreign Scan on users_node2 users\n>> -> Materialize\n>> -> Foreign Scan on documents_node2 documents\n>> \n>> i.e. it uses two foreign scans and does the final join locally. \n>> However,\n>> once I specify target partitions explicitly, then the entire query is\n>> pushed down to the foreign node:\n>> \n>> QUERY PLAN\n>> ---------------------------------------------------------\n>> Foreign Scan\n>> Relations: (documents_node2) INNER JOIN (users_node2)\n>> \n>> Execution time is dropped significantly as well — by more than 3 times\n>> even for this small test database. Situation for simple queries with\n>> aggregates or joins and aggregates followed by the sharding key filter\n>> is the same. 
Something similar was briefly discussed in this thread \n>> [3].\n>> \n>> IIUC, it means that push-down of queries through the postgres_fdw \n>> works\n>> perfectly well, the problem is with partition-wise operation detection\n>> at the planning time. Currently, partition-wise aggregate routines,\n>> e.g., looks for a GROUP BY and checks whether sharding key exists \n>> there\n>> or not. After that PARTITIONWISE_AGGREGATE_* flag is set. However, it\n>> doesn't look for a content of WHERE clause, so frankly speaking it \n>> isn't\n>> a problem, this functionality is not yet implemented.\n>> \n>> Actually, sometimes I was able to push down queries with aggregate\n>> simply by adding an additional GROUP BY with sharding key, like this:\n>> \n>> SELECT\n>> count(*)\n>> FROM\n>> documents\n>> WHERE\n>> company_id = 5\n>> GROUP BY company_id;\n> \n> This gets pushed down since GROUP BY clause is on the partition key.\n> \n\nSure, but it only works *sometimes*, I've never seen most of such simple \nqueries with aggregates to be pushed down, e.g.:\n\nSELECT\n sum(id)\nFROM\n documents_node2\nWHERE\n company_id = 5\nGROUP BY\n company_id;\n\nwhether 'GROUP BY company_id' is used or not.\n\n>> \n>> Although it seems that it will be easier to start with aggregates,\n>> probably we should initially plan a more general solution? For \n>> example,\n>> check that all involved tables are filtered by partitioning key and \n>> push\n>> down the entire query if all of them target the same foreign server.\n>> \n>> Any thoughts?\n> \n> I think adding just equality conditions on the partition key will be\n> enough. No need for any code change.\n\nSo, it hasn't helped. Maybe I could modify some costs to verify that \npush-down of such joins is ever possible?\n\nAnyway, what about aggregates? Partition-wise aggregates work fine for \nqueries like\n\nSELECT\n count(*)\nFROM\n documents\nGROUP BY\n company_id;\n\nbut once I narrow it to a single partition with 'WHERE company_id = 5', \nthen it is being executed in a very inefficient way — takes all rows \nfrom remote partition / node and performs aggregate locally. It doesn't \nseem like a problem with query itself.\n\nIn my experience, both partition-wise joins and aggregates work well \nwith simple GROUP or JOIN by the partitioning key, which corresponds to \nmassive multi-partition OLAP queries. However, both stop working for a \nsingle-partition queries with WHERE, when postgres_fdw and partitioning \nare used. I'd be glad if you share any new guesses of how to make them \nworking without code modification.\n\n\nThanks\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Tue, 14 Jul 2020 18:12:09 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partitioning and postgres_fdw optimisations for multi-tenancy" }, { "msg_contents": "On Wed, Jul 15, 2020 at 12:12 AM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n> On 2020-07-14 15:27, Ashutosh Bapat wrote:\n> > On Tue, Jul 14, 2020 at 12:48 AM Alexey Kondratov\n> > <a.kondratov@postgrespro.ru> wrote:\n> >> Some real-life test queries show, that all single-node queries aren't\n> >> pushed-down to the required node. For example:\n> >>\n> >> SELECT\n> >> *\n> >> FROM\n> >> documents\n> >> INNER JOIN users ON documents.user_id = users.id\n> >> WHERE\n> >> documents.company_id = 5\n> >> AND users.company_id = 5;\n> >\n> > There are a couple of things happening here\n> > 1. 
the clauses on company_id in WHERE clause are causing partition\n> > pruning. Partition-wise join is disabled with partition pruning before\n> > PG13.\n\nMore precisely, PWJ cannot be applied when there are no matched\npartitions on the nullable side due to partition pruning before PG13.\nBut the join is an inner join, so I think PWJ can still be applied for\nthe join.\n\n> > In PG13 we have added advanced partition matching algorithm\n> > which will allow partition-wise join with partition pruning.\n\n> BTW, can you, please, share a link to commit / thread about allowing\n> partition-wise join and partition pruning to work together in PG13?\n\nI think the link would be this:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=c8434d64ce03c32e0029417a82ae937f2055268f\n\nUnfortunately, advanced PWJ added by the commit only allows PWJ and\npartition pruning to work together for list/range partitioned tables,\nnot for hash partitioned tables. However, I think the commit would\nhave nothing to do with the issue here, because 1) the tables involved\nin the join have the same partition bounds, and 2) the commit doesn't\nchange the behavior of such a join.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 15 Jul 2020 21:02:09 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partitioning and postgres_fdw optimisations for multi-tenancy" }, { "msg_contents": "On Wed, Jul 15, 2020 at 9:02 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Wed, Jul 15, 2020 at 12:12 AM Alexey Kondratov\n> <a.kondratov@postgrespro.ru> wrote:\n> > On 2020-07-14 15:27, Ashutosh Bapat wrote:\n> > > On Tue, Jul 14, 2020 at 12:48 AM Alexey Kondratov\n> > > <a.kondratov@postgrespro.ru> wrote:\n> > >> Some real-life test queries show, that all single-node queries aren't\n> > >> pushed-down to the required node. For example:\n> > >>\n> > >> SELECT\n> > >> *\n> > >> FROM\n> > >> documents\n> > >> INNER JOIN users ON documents.user_id = users.id\n> > >> WHERE\n> > >> documents.company_id = 5\n> > >> AND users.company_id = 5;\n> > >\n> > > There are a couple of things happening here\n> > > 1. the clauses on company_id in WHERE clause are causing partition\n> > > pruning. Partition-wise join is disabled with partition pruning before\n> > > PG13.\n>\n> More precisely, PWJ cannot be applied when there are no matched\n> partitions on the nullable side due to partition pruning before PG13.\n\nOn reflection, I think I was wrong: the limitation applies to PG13,\neven with advanced PWJ.\n\n> But the join is an inner join, so I think PWJ can still be applied for\n> the join.\n\nI think I was wrong in this point as well :-(. PWJ cannot be applied\nto the join due to the limitation of the PWJ matching logic. See the\ndiscussion started in [1]. 
I think the patch in [2] would address\nthis issue as well, though the patch is under review.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAN_9JTzo_2F5dKLqXVtDX5V6dwqB0Xk%2BihstpKEt3a1LT6X78A%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/502.1586032678@sss.pgh.pa.us\n\n\n", "msg_date": "Thu, 16 Jul 2020 13:55:48 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partitioning and postgres_fdw optimisations for multi-tenancy" }, { "msg_contents": "\n\nOn 7/16/20 9:55 AM, Etsuro Fujita wrote:\n> On Wed, Jul 15, 2020 at 9:02 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>> On Wed, Jul 15, 2020 at 12:12 AM Alexey Kondratov\n>> <a.kondratov@postgrespro.ru> wrote:\n>>> On 2020-07-14 15:27, Ashutosh Bapat wrote:\n>>>> On Tue, Jul 14, 2020 at 12:48 AM Alexey Kondratov\n>>>> <a.kondratov@postgrespro.ru> wrote:\n>>>>> Some real-life test queries show, that all single-node queries aren't\n>>>>> pushed-down to the required node. For example:\n>>>>>\n>>>>> SELECT\n>>>>> *\n>>>>> FROM\n>>>>> documents\n>>>>> INNER JOIN users ON documents.user_id = users.id\n>>>>> WHERE\n>>>>> documents.company_id = 5\n>>>>> AND users.company_id = 5;\n>>>>\n>>>> There are a couple of things happening here\n>>>> 1. the clauses on company_id in WHERE clause are causing partition\n>>>> pruning. Partition-wise join is disabled with partition pruning before\n>>>> PG13.\n>>\n>> More precisely, PWJ cannot be applied when there are no matched\n>> partitions on the nullable side due to partition pruning before PG13.\n> \n> On reflection, I think I was wrong: the limitation applies to PG13,\n> even with advanced PWJ.\n> \n>> But the join is an inner join, so I think PWJ can still be applied for\n>> the join.\n> \n> I think I was wrong in this point as well :-(. PWJ cannot be applied\n> to the join due to the limitation of the PWJ matching logic. See the\n> discussion started in [1]. I think the patch in [2] would address\n> this issue as well, though the patch is under review.\n> \n\nI think, discussion [1] is little relevant to the current task. Here we \njoin not on partition attribute and PWJ can't be used at all. Here we \ncan use push-down join of two foreign relations.\nWe can analyze baserestrictinfo's of outer and inner RelOptInfo's and \nmay detect that only one partition from outer and inner need to be joined.\nNext, we will create joinrel from RelOptInfo's of these partitions and \nreplace joinrel of partitioned tables. But it is only rough outline of a \npossible solution...\n\n> \n> [1] https://www.postgresql.org/message-id/CAN_9JTzo_2F5dKLqXVtDX5V6dwqB0Xk%2BihstpKEt3a1LT6X78A%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/502.1586032678@sss.pgh.pa.us\n> \n> \n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Thu, 16 Jul 2020 16:56:54 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Partitioning and postgres_fdw optimisations for multi-tenancy" }, { "msg_contents": "On Thu, Jul 16, 2020 at 8:56 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 7/16/20 9:55 AM, Etsuro Fujita wrote:\n\n> >>>> On Tue, Jul 14, 2020 at 12:48 AM Alexey Kondratov\n> >>>> <a.kondratov@postgrespro.ru> wrote:\n> >>>>> Some real-life test queries show, that all single-node queries aren't\n> >>>>> pushed-down to the required node. 
For example:\n> >>>>>\n> >>>>> SELECT\n> >>>>> *\n> >>>>> FROM\n> >>>>> documents\n> >>>>> INNER JOIN users ON documents.user_id = users.id\n> >>>>> WHERE\n> >>>>> documents.company_id = 5\n> >>>>> AND users.company_id = 5;\n\n> > PWJ cannot be applied\n> > to the join due to the limitation of the PWJ matching logic. See the\n> > discussion started in [1]. I think the patch in [2] would address\n> > this issue as well, though the patch is under review.\n\n> I think, discussion [1] is little relevant to the current task. Here we\n> join not on partition attribute and PWJ can't be used at all.\n\nThe main point of the discussion is to determine whether PWJ can be\nused for a join between partitioned tables, based on\nEquivalenceClasses, not just join clauses created by\nbuild_joinrel_restrictlist(). For the above join, for example, the\npatch in [2] would derive a join clause \"documents.company_id =\nusers.company_id\" from an EquivalenceClass that recorded the knowledge\n\"documents.company_id = 5\" and \"users.company_id = 5\", and then the\nplanner would consider from it that PWJ can be used for the join.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 17 Jul 2020 01:35:11 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partitioning and postgres_fdw optimisations for multi-tenancy" }, { "msg_contents": "On 2020-07-16 14:56, Andrey Lepikhov wrote:\n> On 7/16/20 9:55 AM, Etsuro Fujita wrote:\n>> On Wed, Jul 15, 2020 at 9:02 PM Etsuro Fujita \n>> <etsuro.fujita@gmail.com> wrote:\n>>> On Wed, Jul 15, 2020 at 12:12 AM Alexey Kondratov\n>>> <a.kondratov@postgrespro.ru> wrote:\n>>>> On 2020-07-14 15:27, Ashutosh Bapat wrote:\n>>>>> On Tue, Jul 14, 2020 at 12:48 AM Alexey Kondratov\n>>>>> <a.kondratov@postgrespro.ru> wrote:\n>>>>>> Some real-life test queries show, that all single-node queries \n>>>>>> aren't\n>>>>>> pushed-down to the required node. For example:\n>>>>>> \n>>>>>> SELECT\n>>>>>> *\n>>>>>> FROM\n>>>>>> documents\n>>>>>> INNER JOIN users ON documents.user_id = users.id\n>>>>>> WHERE\n>>>>>> documents.company_id = 5\n>>>>>> AND users.company_id = 5;\n>>>>> \n>>>>> There are a couple of things happening here\n>>>>> 1. the clauses on company_id in WHERE clause are causing partition\n>>>>> pruning. Partition-wise join is disabled with partition pruning \n>>>>> before\n>>>>> PG13.\n>>> \n>>> More precisely, PWJ cannot be applied when there are no matched\n>>> partitions on the nullable side due to partition pruning before PG13.\n>> \n>> On reflection, I think I was wrong: the limitation applies to PG13,\n>> even with advanced PWJ.\n>> \n>>> But the join is an inner join, so I think PWJ can still be applied \n>>> for\n>>> the join.\n>> \n>> I think I was wrong in this point as well :-(. PWJ cannot be applied\n>> to the join due to the limitation of the PWJ matching logic. See the\n>> discussion started in [1]. I think the patch in [2] would address\n>> this issue as well, though the patch is under review.\n>> \n\nThanks for sharing the links, Fujita-san.\n\n> \n> I think, discussion [1] is little relevant to the current task. Here\n> we join not on partition attribute and PWJ can't be used at all. Here\n> we can use push-down join of two foreign relations.\n> We can analyze baserestrictinfo's of outer and inner RelOptInfo's and\n> may detect that only one partition from outer and inner need to be\n> joined.\n> Next, we will create joinrel from RelOptInfo's of these partitions and\n> replace joinrel of partitioned tables. 
But it is only rough outline of\n> a possible solution...\n> \n\nI was a bit skeptical after eyeballing the thread [1], but still tried \nv3 patch with the current master and my test setup. Surprisingly, it \njust worked, though it isn't clear for me how. With this patch \naforementioned simple join is completely pushed down to the foreign \nserver. And speedup is approximately the same (~3 times) as when \nrequired partitions are explicitly used in the query.\n\nAs a side-effected it also affected join + aggregate queries like:\n\nSELECT\n user_id,\n count(*) AS documents_count\nFROM\n documents\n INNER JOIN users ON documents.user_id = users.id\nWHERE\n documents.company_id = 5\n AND users.company_id = 5\nGROUP BY\n user_id;\n\nWith patch it is executed as:\n\n GroupAggregate\n Group Key: documents.user_id\n -> Sort\n Sort Key: documents.user_id\n -> Foreign Scan\n Relations: (documents_node2 documents)\n INNER JOIN (users_node2 users)\n\nWithout patch its plan was:\n\n GroupAggregate\n Group Key: documents.user_id\n -> Sort\n Sort Key: documents.user_id\n -> Hash Join\n Hash Cond: (documents.user_id = users.id)\n -> Foreign Scan on documents_node2 documents\n -> Hash\n -> Foreign Scan on users_node2 users\n\nI cannot say that it is most efficient plan in that case, since the \nentire query could be pushed down to the foreign server, but still it \ngives a 5-10% speedup on my setup.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Thu, 16 Jul 2020 19:40:05 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partitioning and postgres_fdw optimisations for multi-tenancy" }, { "msg_contents": "On 2020-07-16 19:35, Etsuro Fujita wrote:\n> On Thu, Jul 16, 2020 at 8:56 PM Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> On 7/16/20 9:55 AM, Etsuro Fujita wrote:\n> \n>> >>>> On Tue, Jul 14, 2020 at 12:48 AM Alexey Kondratov\n>> >>>> <a.kondratov@postgrespro.ru> wrote:\n>> >>>>> Some real-life test queries show, that all single-node queries aren't\n>> >>>>> pushed-down to the required node. For example:\n>> >>>>>\n>> >>>>> SELECT\n>> >>>>> *\n>> >>>>> FROM\n>> >>>>> documents\n>> >>>>> INNER JOIN users ON documents.user_id = users.id\n>> >>>>> WHERE\n>> >>>>> documents.company_id = 5\n>> >>>>> AND users.company_id = 5;\n> \n>> > PWJ cannot be applied\n>> > to the join due to the limitation of the PWJ matching logic. See the\n>> > discussion started in [1]. I think the patch in [2] would address\n>> > this issue as well, though the patch is under review.\n> \n>> I think, discussion [1] is little relevant to the current task. Here \n>> we\n>> join not on partition attribute and PWJ can't be used at all.\n> \n> The main point of the discussion is to determine whether PWJ can be\n> used for a join between partitioned tables, based on\n> EquivalenceClasses, not just join clauses created by\n> build_joinrel_restrictlist(). For the above join, for example, the\n> patch in [2] would derive a join clause \"documents.company_id =\n> users.company_id\" from an EquivalenceClass that recorded the knowledge\n> \"documents.company_id = 5\" and \"users.company_id = 5\", and then the\n> planner would consider from it that PWJ can be used for the join.\n> \n\nYes, it really worked well. Thank you for the explanation, it wasn't so \nobvious for me as well. 
That way, I think that the patch from [1] covers \nmany cases of joins targeting a single partition / foreign server.\n\nHowever, there is an issue with aggregates as well. For a query like:\n\nSELECT\n count(*)\nFROM\n documents\nWHERE\n company_id = 5;\n\nIt would be great to teach planner to understand, that it's a \npartition-wise aggregate as well, even without GROUP BY company_id, \nwhich doesn't always help as well. I'll try to look closer on this \nproblem, but if you have any thoughts about it, then I'd be glad to \nknow.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Thu, 16 Jul 2020 19:56:35 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Partitioning and postgres_fdw optimisations for multi-tenancy" }, { "msg_contents": "On 7/16/20 9:35 PM, Etsuro Fujita wrote:\n> On Thu, Jul 16, 2020 at 8:56 PM Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> On 7/16/20 9:55 AM, Etsuro Fujita wrote:\n> \n>>>>>> On Tue, Jul 14, 2020 at 12:48 AM Alexey Kondratov\n>>>>>> <a.kondratov@postgrespro.ru> wrote:\n>>>>>>> Some real-life test queries show, that all single-node queries aren't\n>>>>>>> pushed-down to the required node. For example:\n>>>>>>>\n>>>>>>> SELECT\n>>>>>>> *\n>>>>>>> FROM\n>>>>>>> documents\n>>>>>>> INNER JOIN users ON documents.user_id = users.id\n>>>>>>> WHERE\n>>>>>>> documents.company_id = 5\n>>>>>>> AND users.company_id = 5;\n> \n>>> PWJ cannot be applied\n>>> to the join due to the limitation of the PWJ matching logic. See the\n>>> discussion started in [1]. I think the patch in [2] would address\n>>> this issue as well, though the patch is under review.\n> \n>> I think, discussion [1] is little relevant to the current task. Here we\n>> join not on partition attribute and PWJ can't be used at all.\n> \n> The main point of the discussion is to determine whether PWJ can be\n> used for a join between partitioned tables, based on\n> EquivalenceClasses, not just join clauses created by\n> build_joinrel_restrictlist(). For the above join, for example, the\n> patch in [2] would derive a join clause \"documents.company_id =\n> users.company_id\" from an EquivalenceClass that recorded the knowledge\n> \"documents.company_id = 5\" and \"users.company_id = 5\", and then the\n> planner would consider from it that PWJ can be used for the join.\n> \nOk, this patch works and you solved a part of the problem with this \ninteresting approach.\nBut you can see that modification of the query:\n\nSELECT * FROM documents, users WHERE documents.company_id = 5 AND \nusers.company_id = 7;\n\nalso can be pushed into node2 and joined there but not.\nMy point is that we can try to solve the whole problem.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Fri, 17 Jul 2020 09:23:19 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Partitioning and postgres_fdw optimisations for multi-tenancy" }, { "msg_contents": "On Fri, Jul 17, 2020 at 1:56 AM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n> However, there is an issue with aggregates as well. For a query like:\n>\n> SELECT\n> count(*)\n> FROM\n> documents\n> WHERE\n> company_id = 5;\n>\n> It would be great to teach planner to understand, that it's a\n> partition-wise aggregate as well, even without GROUP BY company_id,\n> which doesn't always help as well. 
I'll try to look closer on this\n> problem, but if you have any thoughts about it, then I'd be glad to\n> know.\n\nThe reason why the aggregation count(*) isn't pushed down to the\nremote side is: 1) we allow the FDW to push the aggregation down only\nwhen the input relation to the aggregation is a foreign (base or join)\nrelation (see create_grouping_paths()), but 2) for your case the input\nrelation would be an append relation that contains the foreign\npartition as only one child relation, NOT just the foreign partition.\nThe resulting Append path would be removed in the postprocessing (see\n[1]), but that would be too late for the FDW to do the push-down work.\nI have no idea what to do about this issue.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8edd0e79460b414b1d971895312e549e95e12e4f;hp=f21668f328c864c6b9290f39d41774cb2422f98e\n\n\n", "msg_date": "Fri, 17 Jul 2020 23:55:09 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partitioning and postgres_fdw optimisations for multi-tenancy" }, { "msg_contents": "On Fri, Jul 17, 2020 at 8:24 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>\n> On Fri, Jul 17, 2020 at 1:56 AM Alexey Kondratov\n> <a.kondratov@postgrespro.ru> wrote:\n> > However, there is an issue with aggregates as well. For a query like:\n> >\n> > SELECT\n> > count(*)\n> > FROM\n> > documents\n> > WHERE\n> > company_id = 5;\n> >\n> > It would be great to teach planner to understand, that it's a\n> > partition-wise aggregate as well, even without GROUP BY company_id,\n> > which doesn't always help as well. I'll try to look closer on this\n> > problem, but if you have any thoughts about it, then I'd be glad to\n> > know.\n>\n> The reason why the aggregation count(*) isn't pushed down to the\n> remote side is: 1) we allow the FDW to push the aggregation down only\n> when the input relation to the aggregation is a foreign (base or join)\n> relation (see create_grouping_paths()), but 2) for your case the input\n> relation would be an append relation that contains the foreign\n> partition as only one child relation, NOT just the foreign partition.\n> The resulting Append path would be removed in the postprocessing (see\n> [1]), but that would be too late for the FDW to do the push-down work.\n> I have no idea what to do about this issue.\n\nWon't partitionwise aggregate push aggregate down to partition and\nthen from there to the foreign server through FDW? Something else must\nbe stopping it. May be whole-var expression?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 17 Jul 2020 21:14:19 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partitioning and postgres_fdw optimisations for multi-tenancy" }, { "msg_contents": "On Sat, Jul 18, 2020 at 12:44 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> On Fri, Jul 17, 2020 at 8:24 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Fri, Jul 17, 2020 at 1:56 AM Alexey Kondratov\n> > <a.kondratov@postgrespro.ru> wrote:\n> > > However, there is an issue with aggregates as well. For a query like:\n> > >\n> > > SELECT\n> > > count(*)\n> > > FROM\n> > > documents\n> > > WHERE\n> > > company_id = 5;\n> > >\n> > > It would be great to teach planner to understand, that it's a\n> > > partition-wise aggregate as well, even without GROUP BY company_id,\n> > > which doesn't always help as well. 
I'll try to look closer on this\n> > > problem, but if you have any thoughts about it, then I'd be glad to\n> > > know.\n> >\n> > The reason why the aggregation count(*) isn't pushed down to the\n> > remote side is: 1) we allow the FDW to push the aggregation down only\n> > when the input relation to the aggregation is a foreign (base or join)\n> > relation (see create_grouping_paths()), but 2) for your case the input\n> > relation would be an append relation that contains the foreign\n> > partition as only one child relation, NOT just the foreign partition.\n> > The resulting Append path would be removed in the postprocessing (see\n> > [1]), but that would be too late for the FDW to do the push-down work.\n> > I have no idea what to do about this issue.\n>\n> Won't partitionwise aggregate push aggregate down to partition and\n> then from there to the foreign server through FDW?\n\nSorry, my words were not clear. The aggregation above is count(*)\n*without GROUP BY*, so we can’t apply PWA to it.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Sat, 18 Jul 2020 01:30:21 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partitioning and postgres_fdw optimisations for multi-tenancy" }, { "msg_contents": "On Fri, Jul 17, 2020 at 10:00 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>\n> On Sat, Jul 18, 2020 at 12:44 AM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > On Fri, Jul 17, 2020 at 8:24 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > On Fri, Jul 17, 2020 at 1:56 AM Alexey Kondratov\n> > > <a.kondratov@postgrespro.ru> wrote:\n> > > > However, there is an issue with aggregates as well. For a query like:\n> > > >\n> > > > SELECT\n> > > > count(*)\n> > > > FROM\n> > > > documents\n> > > > WHERE\n> > > > company_id = 5;\n> > > >\n> > > > It would be great to teach planner to understand, that it's a\n> > > > partition-wise aggregate as well, even without GROUP BY company_id,\n> > > > which doesn't always help as well. I'll try to look closer on this\n> > > > problem, but if you have any thoughts about it, then I'd be glad to\n> > > > know.\n> > >\n> > > The reason why the aggregation count(*) isn't pushed down to the\n> > > remote side is: 1) we allow the FDW to push the aggregation down only\n> > > when the input relation to the aggregation is a foreign (base or join)\n> > > relation (see create_grouping_paths()), but 2) for your case the input\n> > > relation would be an append relation that contains the foreign\n> > > partition as only one child relation, NOT just the foreign partition.\n> > > The resulting Append path would be removed in the postprocessing (see\n> > > [1]), but that would be too late for the FDW to do the push-down work.\n> > > I have no idea what to do about this issue.\n> >\n> > Won't partitionwise aggregate push aggregate down to partition and\n> > then from there to the foreign server through FDW?\n>\n> Sorry, my words were not clear. The aggregation above is count(*)\n> *without GROUP BY*, so we can’t apply PWA to it.\n\nOk. Thanks for the clarification.\n\nIIRC, if GROUP BY does not contain the partition key, partition-wise\naggregate will collect partial aggregates from each partition and then\ncombine those to form the final aggregate. However, we do not have\ninfrastructure to request partial aggregates from a foreign server (we\nlack SQL level support for it). Hence it's not pushed down to the\nforeign server. 
For count(*) there is no difference between full and\npartial aggregates so it appears as if we could change PARTIAL to FULL\nto push the aggregate down to the foreign server but that's not true\nin general.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 20 Jul 2020 17:33:26 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partitioning and postgres_fdw optimisations for multi-tenancy" } ]
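The following is an illustrative sketch only, not part of the archived thread: it sets up the single-matching-foreign-partition case discussed above so the plan shape can be checked with EXPLAIN. The server, user mapping, and object names (shard_5, documents, documents_5, company_id) are assumptions invented for this example.

    CREATE EXTENSION postgres_fdw;
    CREATE SERVER shard_5 FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'shard5.example.com', dbname 'app');
    CREATE USER MAPPING FOR CURRENT_USER SERVER shard_5;
    CREATE TABLE documents (id bigint, company_id int, body text)
        PARTITION BY LIST (company_id);
    CREATE FOREIGN TABLE documents_5 PARTITION OF documents
        FOR VALUES IN (5) SERVER shard_5 OPTIONS (table_name 'documents_5');
    SET enable_partitionwise_aggregate = on;
    -- Per the discussion above, this count(*) is still computed locally:
    -- the input to the aggregation is the (single-child) Append relation,
    -- not the foreign table itself, so postgres_fdw is never given the
    -- chance to push the aggregate down, even though only documents_5
    -- can satisfy the WHERE clause.
    EXPLAIN (VERBOSE, COSTS OFF)
        SELECT count(*) FROM documents WHERE company_id = 5;

With a setup along these lines, the EXPLAIN output can be compared before and after any planner change that allows the aggregate to be shipped to the remote side.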
[ { "msg_contents": "Hi,\n\nA number of EDB customers have had this error crop on their tables for\nreasons that we have usually not been able to determine. In many\ncases, it's probably down to things like running buggy old releases\nfor a long time before upgrading, or bad backup and recovery\nprocedures. It's more than possible that there are still-unfixed\nserver bugs, but I do not have any compelling evidence of such bugs at\nthis time. Unfortunately, once you're in this situation, it's kind of\nhard to find your way out of it. There are a few problems:\n\n1. There's nothing to identify the tuple that has the problem, and no\nway to know how many more of them there might be. Back-patching\nb61d161c146328ae6ba9ed937862d66e5c8b035a would help with the first\npart of this.\n\n2. In some other, similar situations, e.g. where the tuple data is\ngarbled, it's often possible to get out from under the problem by\ndeleting the tuple at issue. But I think that doesn't necessarily fix\nanything in this case.\n\n3. We've had some success with using a PL/plgsql loop with an\nEXCEPTION block to extract all the accessible tuples from the table.\nThen you can truncate the original table and reinsert the data. But\nthis is slow, so it stinks if the table is big, and it's not a viable\napproach if the table in question is a system catalog table -- at\nleast if it's not if it's something critical like pg_class.\n\nI realize somebody's probably going to say \"well, you shouldn't try to\nrepair a database that's in this state, you shouldn't let it happen in\nthe first place, and if it does happen, you should track the root\ncause to the ends of the earth.\" But I think that's a completely\nimpractical approach. I at least have no idea how I'm supposed to\nfigure out when and how a bad relfrozenxid ended up in the table, and\nby the time the problem is discovered after an upgrade the problem\nthat caused it may be quite old. Moreover, not everyone is as\ninterested in an extended debugging exercise as they are in getting\nthe system working again, and VACUUM failing repeatedly is a pretty\nserious problem.\n\nTherefore, one of my colleagues has - at my request - created a couple\nof functions called heap_force_kill() and heap_force_freeze() which\ntake an array of TIDs. The former truncates them all to dead line\npointers. The latter resets the infomask and xmin to make the xmin\nfrozen. (It should probably handle the xmax too; not sure that the\ncurrent version does that, but it's easily fixed if not.) The\nintention is that you can use these to get either get rid of, or get\naccess to, tuples whose visibility information is corrupted for\nwhatever reason. These are pretty sharp tools; you could corrupt a\nperfectly-good table by incautious use of them, or destroy a large\namount of data. You could, for example, force-freeze a tuple created\nby a transaction which added a column, inserted data, and rolled back;\nthat would likely be disastrous. However, in the cases that I'm\nthinking about, disaster has already struck, and something that you\ncan use to get things back to a saner state is better than just\nleaving the table perpetually broken. Without something like this, the\nbackup plan is probably to shut down the server and try to edit the\npages using a perl script or something, but that seems clearly worse.\n\nSo I have these questions:\n\n- Do people think it would me smart/good/useful to include something\nlike this in PostgreSQL?\n\n- If so, how? 
I would propose a new contrib module that we back-patch\nall the way, because the VACUUM errors were back-patched all the way,\nand there seems to be no advantage in making people wait 5 years for a\nnew version that has some kind of tooling in this area.\n\n- Any ideas for additional things we should include, or improvements\non the sketch above?\n\nThanks,\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Jul 2020 17:12:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> - Do people think it would me smart/good/useful to include something\n> like this in PostgreSQL?\n\nAbsolutely, yes.\n\n> - If so, how? I would propose a new contrib module that we back-patch\n> all the way, because the VACUUM errors were back-patched all the way,\n> and there seems to be no advantage in making people wait 5 years for a\n> new version that has some kind of tooling in this area.\n\nWhile I agree that this would be a good and useful new contrib module to\nhave, I don't think it would be appropriate to back-patch it into PG\nformally.\n\nUnfortunately, that gets into the discussion that's cropped up on a few\nother threads of late- that we don't have a good place to put extensions\nwhich are well maintained/recommended by core PG hackers, and which are\nable to work with lots of different versions of PG, and are versioned\nand released independently of PG (and, ideally, built for all the\nversions of PG that we distribute through our packages).\n\nGiven the lack of such a place today, I'd at least suggest starting with\nproposing it as a new contrib module for v14.\n\n> - Any ideas for additional things we should include, or improvements\n> on the sketch above?\n\nNot right off-hand, but will think about it, there could certainly be a\nlot of very interesting tools in such a toolbox.\n\nThanks!\n\nStephen", "msg_date": "Mon, 13 Jul 2020 17:28:25 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n>> - If so, how? I would propose a new contrib module that we back-patch\n>> all the way, because the VACUUM errors were back-patched all the way,\n>> and there seems to be no advantage in making people wait 5 years for a\n>> new version that has some kind of tooling in this area.\n\n> While I agree that this would be a good and useful new contrib module to\n> have, I don't think it would be appropriate to back-patch it into PG\n> formally.\n\nYeah, I don't care for that either. That's a pretty huge violation of our\nnormal back-patching rules, and I'm not convinced that it's justified.\n\nNo objection to adding it as a new contrib module.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Jul 2020 18:15:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Jul 13, 2020 at 2:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> 1. There's nothing to identify the tuple that has the problem, and no\n> way to know how many more of them there might be. 
Back-patching\n> b61d161c146328ae6ba9ed937862d66e5c8b035a would help with the first\n> part of this.\n\nI am in favor of backpatching such changes in cases where senior\ncommunity members feel that it could help with hypothetical\nundiscovered data corruption issues -- if they're willing to take\nresponsibility for the change. It certainly wouldn't be the first\ntime. A \"defense in depth\" mindset seems like the right one when it\ncomes to data corruption bugs. Early detection is really important.\n\n> Moreover, not everyone is as\n> interested in an extended debugging exercise as they are in getting\n> the system working again, and VACUUM failing repeatedly is a pretty\n> serious problem.\n\nThat's absolutely consistent with my experience. Most users want to\nget back to business as usual now, while letting somebody else do the\nhard work of debugging.\n\n> Therefore, one of my colleagues has - at my request - created a couple\n> of functions called heap_force_kill() and heap_force_freeze() which\n> take an array of TIDs.\n\n> So I have these questions:\n>\n> - Do people think it would me smart/good/useful to include something\n> like this in PostgreSQL?\n\nI'm in favor of it.\n\n> - If so, how? I would propose a new contrib module that we back-patch\n> all the way, because the VACUUM errors were back-patched all the way,\n> and there seems to be no advantage in making people wait 5 years for a\n> new version that has some kind of tooling in this area.\n\nI'm in favor of it being *possible* to backpatch tooling that is\nclearly related to correctness in a fundamental way. Obviously this\nwould mean that we'd be revising our general position on backpatching\nto allow some limited exceptions around corruption. I'm not sure that\nthis meets that standard, though. It's hardly something that we can\nexpect all that many users to be able to use effectively.\n\nI may be biased, but I'd be inclined to permit it in the case of\nsomething like amcheck, or pg_visibility, on the grounds that they're\nmore or less the same as the new VACUUM errcontext instrumentation you\nmentioned. The same cannot be said of something like this new\nheap_force_kill() stuff.\n\n> - Any ideas for additional things we should include, or improvements\n> on the sketch above?\n\nClearly you should work out a way of making it very hard to\naccidentally (mis)use. For example, maybe you make the functions check\nfor the presence of a sentinel file in the data directory.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 13 Jul 2020 15:28:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi,\n\nOn 2020-07-13 17:12:18 -0400, Robert Haas wrote:\n> 1. There's nothing to identify the tuple that has the problem, and no\n> way to know how many more of them there might be. Back-patching\n> b61d161c146328ae6ba9ed937862d66e5c8b035a would help with the first\n> part of this.\n\nNot fully, I'm afraid. Afaict it doesn't currently tell you the item\npointer offset, just the block numer, right? We probably should extend\nit to also include the offset...\n\n\n> 2. In some other, similar situations, e.g. where the tuple data is\n> garbled, it's often possible to get out from under the problem by\n> deleting the tuple at issue. But I think that doesn't necessarily fix\n> anything in this case.\n\nHuh, why not? 
That worked in the cases I saw.\n\n\n> Therefore, one of my colleagues has - at my request - created a couple\n> of functions called heap_force_kill() and heap_force_freeze() which\n> take an array of TIDs. The former truncates them all to dead line\n> pointers. The latter resets the infomask and xmin to make the xmin\n> frozen. (It should probably handle the xmax too; not sure that the\n> current version does that, but it's easily fixed if not.)\n\nxmax is among the problematic cases IIRC, so yes, it'd be good to fix\nthat.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Jul 2020 15:38:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Jul 13, 2020 at 6:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, I don't care for that either. That's a pretty huge violation of our\n> normal back-patching rules, and I'm not convinced that it's justified.\n\nI think that our normal back-patching rules are based primarily on the\nrisk of breaking things, and a new contrib module carries a pretty\nnegligible risk of breaking anything that works today. I wouldn't\npropose to back-patch something on those grounds just as a way of\ndelivering a new feature more quickly, but that's not the intention\nhere. At least in my experience, un-VACUUM-able tables have gotten\nseveral orders of magnitude more common since Andres put those changes\nin. As far as I can recall, EDB has not had this many instances of\ndifferent customers reporting the same problem since the 9.3-era\nmultixact issues. So far, this does not rise to that level, but it is\nby no means a negligible issue, either. I believe it deserves to be\ntaken quite seriously, especially because the existing options for\nhelping customers with this kind of problem are so limited.\n\nNow, if this goes into v14, we can certainly stick it up on github, or\nput it out there in some other way for users to download,\nself-compile, and install, but that seems noticeably less convenient\nfor people who need it, and I'm not clear what the benefit to the\nproject is.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Jul 2020 20:41:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Jul 13, 2020 at 6:38 PM Andres Freund <andres@anarazel.de> wrote:\n> Not fully, I'm afraid. Afaict it doesn't currently tell you the item\n> pointer offset, just the block numer, right? We probably should extend\n> it to also include the offset...\n\nOh, I hadn't realized that limitation. That would be good to fix. It\nwould be even better, I think, if we could have VACUUM proceed with\nthe rest of vacuuming the table, emitting warnings about each\ninstance, instead of blowing up when it hits the first bad tuple, but\nI think you may have told me sometime that doing so would be, uh, less\nthan straightforward. We probably should refuse to update\nrelfrozenxid/relminmxid when this is happening, but I *think* it would\nbe better to still proceed with dead tuple cleanup as far as we can,\nor at least have an option to enable that behavior. 
I'm not positive\nabout that, but not being able to complete VACUUM at all is a FAR more\nurgent problem than not being able to freeze, even though in the long\nrun the latter is more severe.\n\n> > 2. In some other, similar situations, e.g. where the tuple data is\n> > garbled, it's often possible to get out from under the problem by\n> > deleting the tuple at issue. But I think that doesn't necessarily fix\n> > anything in this case.\n>\n> Huh, why not? That worked in the cases I saw.\n\nI'm not sure I've seen a case where that didn't work, but I don't see\na reason why it couldn't happen. Do you think the code is structured\nin such a way that a deleted tuple is guaranteed to be pruned even if\nthe XID is old? What if clog has been truncated so that the xmin can't\nbe looked up?\n\n> xmax is among the problematic cases IIRC, so yes, it'd be good to fix\n> that.\n\nThanks for the input.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Jul 2020 20:47:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Oh, I hadn't realized that limitation. That would be good to fix. It\n> would be even better, I think, if we could have VACUUM proceed with\n> the rest of vacuuming the table, emitting warnings about each\n> instance, instead of blowing up when it hits the first bad tuple, but\n> I think you may have told me sometime that doing so would be, uh, less\n> than straightforward. We probably should refuse to update\n> relfrozenxid/relminmxid when this is happening, but I *think* it would\n> be better to still proceed with dead tuple cleanup as far as we can,\n> or at least have an option to enable that behavior. I'm not positive\n> about that, but not being able to complete VACUUM at all is a FAR more\n> urgent problem than not being able to freeze, even though in the long\n> run the latter is more severe.\n\n+1 for proceeding in this direction, rather than handing users tools\nthat they *will* hurt themselves with.\n\nThe more that I think about it, the more I think that the proposed\nfunctions are tools for wizards only, and so I'm getting hesitant\nabout having them in contrib at all. We lack a better place to\nput them, but that doesn't mean they should be there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Jul 2020 20:58:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Jul 13, 2020 at 8:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Oh, I hadn't realized that limitation. That would be good to fix. It\n> > would be even better, I think, if we could have VACUUM proceed with\n> > the rest of vacuuming the table, emitting warnings about each\n> > instance, instead of blowing up when it hits the first bad tuple, but\n> > I think you may have told me sometime that doing so would be, uh, less\n> > than straightforward. We probably should refuse to update\n> > relfrozenxid/relminmxid when this is happening, but I *think* it would\n> > be better to still proceed with dead tuple cleanup as far as we can,\n> > or at least have an option to enable that behavior. 
I'm not positive\n> > about that, but not being able to complete VACUUM at all is a FAR more\n> > urgent problem than not being able to freeze, even though in the long\n> > run the latter is more severe.\n>\n> +1 for proceeding in this direction, rather than handing users tools\n> that they *will* hurt themselves with.\n>\n> The more that I think about it, the more I think that the proposed\n> functions are tools for wizards only, and so I'm getting hesitant\n> about having them in contrib at all. We lack a better place to\n> put them, but that doesn't mean they should be there.\n\nIt's not an either/or; it's a both/and. To recover from this problem,\nyou need to:\n\n1. Be able to tell which tuples are affected.\n2. Do something about it.\n\nI think there are a number of strategies that we could pursue around\neither of those things, and there are better and worse ways of\naccomplishing them, but having one without the other isn't too great.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Jul 2020 21:03:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi,\n\nOn 2020-07-13 20:47:10 -0400, Robert Haas wrote:\n> On Mon, Jul 13, 2020 at 6:38 PM Andres Freund <andres@anarazel.de> wrote:\n> > Not fully, I'm afraid. Afaict it doesn't currently tell you the item\n> > pointer offset, just the block numer, right? We probably should extend\n> > it to also include the offset...\n>\n> Oh, I hadn't realized that limitation. That would be good to fix.\n\nYea. And it'd even be good if we were to to end up implementing your\nsuggestion below about continuing vacuuming other tuples.\n\n\n> It would be even better, I think, if we could have VACUUM proceed with\n> the rest of vacuuming the table, emitting warnings about each\n> instance, instead of blowing up when it hits the first bad tuple, but\n> I think you may have told me sometime that doing so would be, uh, less\n> than straightforward.\n\nYea, it's not that simple to implement. Not impossible either.\n\n\n> We probably should refuse to update relfrozenxid/relminmxid when this\n> is happening, but I *think* it would be better to still proceed with\n> dead tuple cleanup as far as we can, or at least have an option to\n> enable that behavior. I'm not positive about that, but not being able\n> to complete VACUUM at all is a FAR more urgent problem than not being\n> able to freeze, even though in the long run the latter is more severe.\n\nI'm hesitant to default to removing tuples once we've figured out that\nsomething is seriously wrong. Could easy enough make us plow ahead and\ndelete valuable data on other tuples, even if we'd already detected\nthere's a problem. But I also see the problem you raise. That's not\nacademic, a number of multixact corruption issues the checks detected\nIIRC weren't guaranteed to be caught.\n\n\n> > > 2. In some other, similar situations, e.g. where the tuple data is\n> > > garbled, it's often possible to get out from under the problem by\n> > > deleting the tuple at issue. But I think that doesn't necessarily fix\n> > > anything in this case.\n> >\n> > Huh, why not? That worked in the cases I saw.\n>\n> I'm not sure I've seen a case where that didn't work, but I don't see\n> a reason why it couldn't happen. 
Do you think the code is structured\n> in such a way that a deleted tuple is guaranteed to be pruned even if\n> the XID is old?\n\nI think so, leaving aside some temporary situations perhaps.\n\n\n> What if clog has been truncated so that the xmin can't be looked up?\n\nThat's possible, but probably only in cases where xmin actually\ncommitted.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Jul 2020 18:10:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Jul 13, 2020 at 8:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The more that I think about it, the more I think that the proposed\n> functions are tools for wizards only, and so I'm getting hesitant\n> about having them in contrib at all. We lack a better place to\n> put them, but that doesn't mean they should be there.\n\nAlso, I want to clarify that in a typical situation in which a\ncustomer is facing this problem, I don't have any access to their\nsystem. I basically never touch customer systems directly. Typically,\nthe customer sends us log files and a description of the problem and\ntheir goals, and we send them back advice or instructions. So it's\nimpractical to imagine that this can be something where you have to\nknow the secret magic wizard password to get access to it. We'd just\nhave to give the customers who need to use this tool said password,\nand then the jig is up - they can redistribute that password to all\nthe non-wizards on the Internet, if they so choose.\n\nI understand that it's not too great when we give people access to\nsharp tools and they hurt themselves with said tools. But this is open\nsource. That's how it goes.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Jul 2020 21:13:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Jul 13, 2020 at 9:10 PM Andres Freund <andres@anarazel.de> wrote:\n> > What if clog has been truncated so that the xmin can't be looked up?\n>\n> That's possible, but probably only in cases where xmin actually\n> committed.\n\nIsn't that the normal case? I'm imagining something like:\n\n- Tuple gets inserted. Transaction commits.\n- VACUUM processes table.\n- Mischievous fairies mark page all-visible in the visibility map.\n- VACUUM runs lots more times, relfrozenxid advances, but without ever\nlooking at the page in question, because it's all-visible.\n- clog is truncated, rendering xmin no longer accessible.\n- User runs VACUUM disabling page skipping, gets ERROR.\n- User deletes offending tuple.\n- At this point, I think the tuple is both invisible and unprunable?\n- Fairies happy, user sad.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Jul 2020 21:18:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... 
from before relfrozenxid ...\"" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jul 13, 2020 at 8:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The more that I think about it, the more I think that the proposed\n>> functions are tools for wizards only, and so I'm getting hesitant\n>> about having them in contrib at all. We lack a better place to\n>> put them, but that doesn't mean they should be there.\n\n> I understand that it's not too great when we give people access to\n> sharp tools and they hurt themselves with said tools. But this is open\n> source. That's how it goes.\n\nI think you're attacking a straw man. I'm well aware of how open source\nworks, thanks. What I'm saying is that contrib is mostly seen to be\nreasonably harmless stuff. Sure, you can overwrite data you didn't want\nto with adminpack's pg_file_write. But that's the price of having such a\ncapability at all, and in general it's not hard for users to understand\nboth the uses and risks of that function. That statement does not apply\nto the functions being proposed here. It doesn't seem like they could\npossibly be safe to use without very specific expert advice --- and even\nthen, we're talking rather small values of \"safe\". So I wish we had some\nother way to distribute them than via contrib.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Jul 2020 21:26:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "\n\nOn 2020/07/14 9:41, Robert Haas wrote:\n> On Mon, Jul 13, 2020 at 6:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, I don't care for that either. That's a pretty huge violation of our\n>> normal back-patching rules, and I'm not convinced that it's justified.\n> \n> I think that our normal back-patching rules are based primarily on the\n> risk of breaking things, and a new contrib module carries a pretty\n> negligible risk of breaking anything that works today. I wouldn't\n> propose to back-patch something on those grounds just as a way of\n> delivering a new feature more quickly, but that's not the intention\n> here. At least in my experience, un-VACUUM-able tables have gotten\n> several orders of magnitude more common since Andres put those changes\n> in. As far as I can recall, EDB has not had this many instances of\n> different customers reporting the same problem since the 9.3-era\n> multixact issues. So far, this does not rise to that level, but it is\n> by no means a negligible issue, either. 
I believe it deserves to be\n> taken quite seriously, especially because the existing options for\n> helping customers with this kind of problem are so limited.\n> \n> Now, if this goes into v14, we can certainly stick it up on github, or\n> put it out there in some other way for users to download,\n> self-compile, and install, but that seems noticeably less convenient\n> for people who need it, and I'm not clear what the benefit to the\n> project is.\n\nBut updating this tool can fit to the release schedule and\npolicy of PostgreSQL?\n\nWhile investigating the problem by using this tool, we may want to\nadd new feature into the tool because it's necessary for the investigation.\nBut users would need to wait for next minor version release, to use this\nnew feature.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 14 Jul 2020 10:29:10 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi!\n\n> 14 июля 2020 г., в 02:12, Robert Haas <robertmhaas@gmail.com> написал(а):\n> \n> So I have these questions:\n> \n> - Do people think it would me smart/good/useful to include something\n> like this in PostgreSQL?\n> \n> - If so, how? I would propose a new contrib module that we back-patch\n> all the way\n\n\nMy 0.05₽.\n\nAt Yandex we used to fix similar corruption things with our pg_dirty_hands extension [0].\nBut then we developed our internal pg_heapcheck module (unfortunately we did not publish it) and incorporated aggressive recovery into heapcheck.\n\nNow when community has official heapcheck I think it worth to keep detection and fixing tools together.\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/dsarafan/pg_dirty_hands/blob/master/src/pg_dirty_hands.c\n\n\n\n", "msg_date": "Tue, 14 Jul 2020 11:25:17 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On 2020-07-14 02:41, Robert Haas wrote:\n> I think that our normal back-patching rules are based primarily on the\n> risk of breaking things, and a new contrib module carries a pretty\n> negligible risk of breaking anything that works today.\n\nI think that all feature code ought to go through a beta cycle. So if \nthis code makes it to 14.0 or 14.1, then I'd consider backpatching it.\n\n> Now, if this goes into v14, we can certainly stick it up on github, or\n> put it out there in some other way for users to download,\n> self-compile, and install, but that seems noticeably less convenient\n> for people who need it, and I'm not clear what the benefit to the\n> project is.\n\nIn the meantime, if you're wizard enough to deal with this kind of \nthing, you could also clone the module from the PG14 tree and build it \nagainst older versions manually.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 14 Jul 2020 09:08:22 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... 
from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Jul 14, 2020 at 3:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Mon, Jul 13, 2020 at 8:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> The more that I think about it, the more I think that the proposed\n> >> functions are tools for wizards only, and so I'm getting hesitant\n> >> about having them in contrib at all. We lack a better place to\n> >> put them, but that doesn't mean they should be there.\n>\n> > I understand that it's not too great when we give people access to\n> > sharp tools and they hurt themselves with said tools. But this is open\n> > source. That's how it goes.\n>\n> I think you're attacking a straw man. I'm well aware of how open source\n> works, thanks. What I'm saying is that contrib is mostly seen to be\n> reasonably harmless stuff. Sure, you can overwrite data you didn't want\n> to with adminpack's pg_file_write. But that's the price of having such a\n> capability at all, and in general it's not hard for users to understand\n> both the uses and risks of that function. That statement does not apply\n> to the functions being proposed here. It doesn't seem like they could\n> possibly be safe to use without very specific expert advice --- and even\n> then, we're talking rather small values of \"safe\". So I wish we had some\n> other way to distribute them than via contrib.\n>\n\nThe countersable of this is pg_resetwal. The number of people who have\nbroken their database with that tool is not small.\n\nThat said, we could have a separate \"class\" of extensions/tools in the\ndistribution, and encourage packagers to pack them up as separate packages\nfor example. Technically they don't have to be in the same source\nrepository at all of course, but I have a feeling some of them might be a\nlot easier to maintain if they are. And then the user would just have to\ninstall something like \"postgresql-14-wizardtools\". They'd still be\navailable to everybody, of course, but at least the knives would be in a\nclosed drawer until intentionally picked up.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Jul 14, 2020 at 3:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jul 13, 2020 at 8:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The more that I think about it, the more I think that the proposed\n>> functions are tools for wizards only, and so I'm getting hesitant\n>> about having them in contrib at all.  We lack a better place to\n>> put them, but that doesn't mean they should be there.\n\n> I understand that it's not too great when we give people access to\n> sharp tools and they hurt themselves with said tools. But this is open\n> source. That's how it goes.\n\nI think you're attacking a straw man.  I'm well aware of how open source\nworks, thanks.  What I'm saying is that contrib is mostly seen to be\nreasonably harmless stuff.  Sure, you can overwrite data you didn't want\nto with adminpack's pg_file_write.  But that's the price of having such a\ncapability at all, and in general it's not hard for users to understand\nboth the uses and risks of that function.  That statement does not apply\nto the functions being proposed here.  It doesn't seem like they could\npossibly be safe to use without very specific expert advice --- and even\nthen, we're talking rather small values of \"safe\".  
So I wish we had some\nother way to distribute them than via contrib.The countersable of this is pg_resetwal. The number of people who have broken their database with that tool is not small.That said, we could have a separate \"class\" of extensions/tools in the distribution, and encourage packagers to pack them up as separate packages for example. Technically they don't have to be in the same source repository at all of course, but I have a feeling some of them might be a lot easier to maintain if they are. And then the user would just have to install something like \"postgresql-14-wizardtools\". They'd still be available to everybody, of course, but at least the knives would be in a closed drawer until intentionally picked up. --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Tue, 14 Jul 2020 10:59:45 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Jul 14, 2020 at 3:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think you're attacking a straw man. I'm well aware of how open source\n> works, thanks. What I'm saying is that contrib is mostly seen to be\n> reasonably harmless stuff. Sure, you can overwrite data you didn't want\n> to with adminpack's pg_file_write. But that's the price of having such a\n> capability at all, and in general it's not hard for users to understand\n> both the uses and risks of that function. That statement does not apply\n> to the functions being proposed here. It doesn't seem like they could\n> possibly be safe to use without very specific expert advice --- and even\n> then, we're talking rather small values of \"safe\".\n\nWould it be possible to make them safe(r)? For instance, truncate\nonly, don't freeze; only tuples whose visibility information is\ncorrupted; and only in non-catalog tables. What exactly is the risk in\nthat case? Foreign keys might not be satisfied, which might make it\nimpossible to restore a dump, but is that worse than what a DBA can do\nanyway? I would think that it is not and would leave the database in a\nstate DBAs are much better equipped to deal with.\nOr would it be possible to create a table like the original table\n(minus any constraints) and copy all tuples with corrupted visibility\nthere before truncating to a dead line pointer?\n\nJochem\n\n\n", "msg_date": "Tue, 14 Jul 2020 12:54:28 +0200", "msg_from": "Jochem van Dieten <jochemd@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Jul 14, 2020 at 3:08 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> In the meantime, if you're wizard enough to deal with this kind of\n> thing, you could also clone the module from the PG14 tree and build it\n> against older versions manually.\n\nBut what if you are NOT a wizard, and a wizard is giving you\ndirections? Then having to build from source is a real pain. And\nthat's normally the situation I'm in when a customer has this issue.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 14 Jul 2020 07:51:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... 
from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Jul 14, 2020 at 4:59 AM Magnus Hagander <magnus@hagander.net> wrote:\n> The countersable of this is pg_resetwal. The number of people who have broken their database with that tool is not small.\n\nVery true.\n\n> That said, we could have a separate \"class\" of extensions/tools in the distribution, and encourage packagers to pack them up as separate packages for example. Technically they don't have to be in the same source repository at all of course, but I have a feeling some of them might be a lot easier to maintain if they are. And then the user would just have to install something like \"postgresql-14-wizardtools\". They'd still be available to everybody, of course, but at least the knives would be in a closed drawer until intentionally picked up.\n\nI don't think that does much to help with the immediate problem here,\nbecause people are being bitten by this problem *now* and a packaging\nchange like this will take a long time to happen and become standard\nout there, but I think it's a good idea.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 14 Jul 2020 07:52:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Jul 14, 2020 at 1:52 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Jul 14, 2020 at 4:59 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> > The countersable of this is pg_resetwal. The number of people who have\n> broken their database with that tool is not small.\n>\n> Very true.\n>\n> > That said, we could have a separate \"class\" of extensions/tools in the\n> distribution, and encourage packagers to pack them up as separate packages\n> for example. Technically they don't have to be in the same source\n> repository at all of course, but I have a feeling some of them might be a\n> lot easier to maintain if they are. And then the user would just have to\n> install something like \"postgresql-14-wizardtools\". They'd still be\n> available to everybody, of course, but at least the knives would be in a\n> closed drawer until intentionally picked up.\n>\n> I don't think that does much to help with the immediate problem here,\n> because people are being bitten by this problem *now* and a packaging\n> change like this will take a long time to happen and become standard\n> out there, but I think it's a good idea.\n>\n\nI don't think that it necessarily has to be. As long as we're talking about\nadding something and not actually changing their existing packages, getting\nthis into both yum and apt shouldn't be *that* hard, if it's coordinated\nwell with Christoph and Devrim (obviously that's based on my experience and\nthey will have to give a more complete answer themselves). It would be a\nlot more complicated if it involved changing an existing package.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Jul 14, 2020 at 1:52 PM Robert Haas <robertmhaas@gmail.com> wrote:On Tue, Jul 14, 2020 at 4:59 AM Magnus Hagander <magnus@hagander.net> wrote:\n> The countersable of this is pg_resetwal. 
The number of people who have broken their database with that tool is not small.\n\nVery true.\n\n> That said, we could have a separate \"class\" of extensions/tools in the distribution, and encourage packagers to pack them up as separate packages for example. Technically they don't have to be in the same source repository at all of course, but I have a feeling some of them might be a lot easier to maintain if they are. And then the user would just have to install something like \"postgresql-14-wizardtools\". They'd still be available to everybody, of course, but at least the knives would be in a closed drawer until intentionally picked up.\n\nI don't think that does much to help with the immediate problem here,\nbecause people are being bitten by this problem *now* and a packaging\nchange like this will take a long time to happen and become standard\nout there, but I think it's a good idea.I don't think that it necessarily has to be. As long as we're talking about adding something and not actually changing their existing packages, getting this into both yum and apt shouldn't be *that* hard, if it's coordinated well with Christoph and Devrim (obviously that's based on my experience and they will have to give a more complete answer themselves). It would be a lot more complicated if it involved changing an existing package.--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Tue, 14 Jul 2020 14:25:35 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Jul 13, 2020 at 9:29 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> But updating this tool can fit to the release schedule and\n> policy of PostgreSQL?\n>\n> While investigating the problem by using this tool, we may want to\n> add new feature into the tool because it's necessary for the investigation.\n> But users would need to wait for next minor version release, to use this\n> new feature.\n\nYeah, that's a point that needs careful thought. I don't think it\nmeans that we shouldn't have something in core; after all, this is a\nproblem that is created in part by the way that PostgreSQL itself\nworks, and I think it would be quite unfriendly if we refused to do\nanything about that in the core distribution. On the other hand, it\nmight be a good reason not to back-patch, which is something most\npeople don't seem enthusiastic about anyway.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 14 Jul 2020 10:09:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Jul 14, 2020 at 8:25 AM Magnus Hagander <magnus@hagander.net> wrote:\n> I don't think that it necessarily has to be. As long as we're talking about adding something and not actually changing their existing packages, getting this into both yum and apt shouldn't be *that* hard, if it's coordinated well with Christoph and Devrim (obviously that's based on my experience and they will have to give a more complete answer themselves). 
It would be a lot more complicated if it involved changing an existing package.\n\nI mean, you presumably could not move pg_resetwal to this new package\nin existing branches, right?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 14 Jul 2020 10:09:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Jul 14, 2020 at 4:09 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Jul 14, 2020 at 8:25 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> > I don't think that it necessarily has to be. As long as we're talking\n> about adding something and not actually changing their existing packages,\n> getting this into both yum and apt shouldn't be *that* hard, if it's\n> coordinated well with Christoph and Devrim (obviously that's based on my\n> experience and they will have to give a more complete answer themselves).\n> It would be a lot more complicated if it involved changing an existing\n> package.\n>\n> I mean, you presumably could not move pg_resetwal to this new package\n> in existing branches, right?\n>\n\nProbably and eventually. But that can be done for 14+ (or 13+ depending on\nhow \"done\" the packaging is there -- we should just make sure that hits the\nbiggest platform in the same release).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Jul 14, 2020 at 4:09 PM Robert Haas <robertmhaas@gmail.com> wrote:On Tue, Jul 14, 2020 at 8:25 AM Magnus Hagander <magnus@hagander.net> wrote:\n> I don't think that it necessarily has to be. As long as we're talking about adding something and not actually changing their existing packages, getting this into both yum and apt shouldn't be *that* hard, if it's coordinated well with Christoph and Devrim (obviously that's based on my experience and they will have to give a more complete answer themselves). It would be a lot more complicated if it involved changing an existing package.\n\nI mean, you presumably could not move pg_resetwal to this new package\nin existing branches, right?Probably and eventually. But that can be done for 14+ (or 13+ depending on how \"done\" the packaging is there -- we should just make sure that hits the biggest platform in the same release). --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Tue, 14 Jul 2020 16:21:11 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Tue, Jul 14, 2020 at 4:09 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Tue, Jul 14, 2020 at 8:25 AM Magnus Hagander <magnus@hagander.net>\n> > wrote:\n> > > I don't think that it necessarily has to be. 
As long as we're talking\n> > about adding something and not actually changing their existing packages,\n> > getting this into both yum and apt shouldn't be *that* hard, if it's\n> > coordinated well with Christoph and Devrim (obviously that's based on my\n> > experience and they will have to give a more complete answer themselves).\n> > It would be a lot more complicated if it involved changing an existing\n> > package.\n> >\n> > I mean, you presumably could not move pg_resetwal to this new package\n> > in existing branches, right?\n> \n> Probably and eventually. But that can be done for 14+ (or 13+ depending on\n> how \"done\" the packaging is there -- we should just make sure that hits the\n> biggest platform in the same release).\n\nConsidering we just got rid of the -contrib independent package on at\nleast Debian-based systems, it doesn't really seem likely that the\npackagers are going to be anxious to create a new one- they are not\nwithout costs.\n\nAlso, in such dire straits as this thread is contemplating, I would\nthink we'd *really* like to have access to these tools with as small an\namount of change as absolutely possible to the system: what if\npg_extension itself got munged and we aren't able to install this new\ncontrib module, for example?\n\nI would suggest that, instead, we make this part of core, but have it be\nin a relatively clearly marked special schema that isn't part of\nsearch_path by default- eg: pg_hacks, or pg_dirty_hands (I kinda like\nthe latter though it seems a bit unprofessional for us).\n\nI'd also certainly be in support of having a contrib module with the\nsame functions that's independent from core and available and able to be\ninstalled on pre-v14 systems. I'd further support having another repo\nthat's \"core maintained\" or however we want to phrase it which includes\nthis proposed module (and possibly all of contrib) and which has a\ndifferent release cadence and requirements for what gets into it, has\nits own packages, etc.\n\nThanks,\n\nStephen", "msg_date": "Tue, 14 Jul 2020 10:36:25 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On 2020-Jul-13, Andres Freund wrote:\n\n> Hi,\n> \n> On 2020-07-13 17:12:18 -0400, Robert Haas wrote:\n> > 1. There's nothing to identify the tuple that has the problem, and no\n> > way to know how many more of them there might be. Back-patching\n> > b61d161c146328ae6ba9ed937862d66e5c8b035a would help with the first\n> > part of this.\n> \n> Not fully, I'm afraid. Afaict it doesn't currently tell you the item\n> pointer offset, just the block numer, right? We probably should extend\n> it to also include the offset...\n\nJust having the block number is already a tremendous step forward; with\nthat you can ask the customer to set a pageinspect dump of tuple\nheaders, and then the problem is obvious. 
Now if you want to add block\nnumber to that, by all means do so.\n\nFWIW I do support the idea of backpatching the vacuum errcontext commit.\n\nOne useful thing to do is to mark a tuple frozen unconditionally if it's\nmarked hinted XMIN_COMMITTED; no need to consult pg_clog in that case.\nThe attached (for 9.6) does that; IIRC it would have helped in a couple\nof cases.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 14 Jul 2020 13:20:25 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi,\n\nOn 2020-07-13 21:18:10 -0400, Robert Haas wrote:\n> On Mon, Jul 13, 2020 at 9:10 PM Andres Freund <andres@anarazel.de> wrote:\n> > > What if clog has been truncated so that the xmin can't be looked up?\n> >\n> > That's possible, but probably only in cases where xmin actually\n> > committed.\n> \n> Isn't that the normal case? I'm imagining something like:\n> \n> - Tuple gets inserted. Transaction commits.\n> - VACUUM processes table.\n> - Mischievous fairies mark page all-visible in the visibility map.\n> - VACUUM runs lots more times, relfrozenxid advances, but without ever\n> looking at the page in question, because it's all-visible.\n> - clog is truncated, rendering xmin no longer accessible.\n> - User runs VACUUM disabling page skipping, gets ERROR.\n> - User deletes offending tuple.\n> - At this point, I think the tuple is both invisible and unprunable?\n> - Fairies happy, user sad.\n\nI'm not saying it's impossible that that happens, but the cases I did\ninvestigate didn't look like this. If something just roguely wrote to\nthe VM I'd expect a lot more \"is not marked all-visible but visibility\nmap bit is set in relation\" type WARNINGs, and I've not seen much of\nthose (they're WARNINGs though, so maybe we wouldn't). Presumably this\nwouldn't always just happen with tuples that'd trigger an error first\nduring hot pruning.\n\nI've definitely seen indications of both datfrozenxid and relfrozenxid\ngetting corrupted (in particular vac_update_datfrozenxid being racy as\nhell), xid wraparound, indications of multixact problems (although it's\npossible we've now fixed those) and some signs of corrupted relcache\nentries for shared relations leading to vacuums being skipped.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 14 Jul 2020 12:31:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi,\n\nOn 2020-07-14 13:20:25 -0400, Alvaro Herrera wrote:\n> On 2020-Jul-13, Andres Freund wrote:\n> \n> > Hi,\n> > \n> > On 2020-07-13 17:12:18 -0400, Robert Haas wrote:\n> > > 1. There's nothing to identify the tuple that has the problem, and no\n> > > way to know how many more of them there might be. Back-patching\n> > > b61d161c146328ae6ba9ed937862d66e5c8b035a would help with the first\n> > > part of this.\n> > \n> > Not fully, I'm afraid. Afaict it doesn't currently tell you the item\n> > pointer offset, just the block numer, right? We probably should extend\n> > it to also include the offset...\n> \n> Just having the block number is already a tremendous step forward; with\n> that you can ask the customer to set a pageinspect dump of tuple\n> headers, and then the problem is obvious. 
Now if you want to add block\n> number to that, by all means do so.\n\noffset number I assume?\n\n\n> One useful thing to do is to mark a tuple frozen unconditionally if it's\n> marked hinted XMIN_COMMITTED; no need to consult pg_clog in that case.\n> The attached (for 9.6) does that; IIRC it would have helped in a couple\n> of cases.\n\nI think it might also have hidden corruption in at least one case where\nwe subsequently fixed a bug (and helped detect at least one unfixed\nbug). That should only be possible if either required clog has been\nremoved, or if relfrozenxid/datfrozenxid are corrupt, right?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 14 Jul 2020 12:36:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi,\n\nOn 2020-07-14 07:51:27 -0400, Robert Haas wrote:\n> On Tue, Jul 14, 2020 at 3:08 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > In the meantime, if you're wizard enough to deal with this kind of\n> > thing, you could also clone the module from the PG14 tree and build it\n> > against older versions manually.\n> \n> But what if you are NOT a wizard, and a wizard is giving you\n> directions? Then having to build from source is a real pain. And\n> that's normally the situation I'm in when a customer has this issue.\n\nThe \"found xmin ... from before relfrozenxid ...\" cases should all be\nfixable without needing such a function, and without it making fixing\nthem significantly easier, no? As far as I understand your suggested\nsolution, you need the tid(s) of these tuples, right? If you have those,\nI don't think it's meaningfully harder to INSERT ... DELETE WHERE ctid =\n.... or something like that.\n\nISTM that the hard part is finding all problematic tuples in an\nefficient manner (i.e. that doesn't require one manual VACUUM for each\nindividual block + parsing VACUUMs error message), not \"fixing\" those\ntuples.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 14 Jul 2020 12:41:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On 2020-Jul-14, Andres Freund wrote:\n\n> Hi,\n> \n> On 2020-07-14 13:20:25 -0400, Alvaro Herrera wrote:\n\n> > Just having the block number is already a tremendous step forward; with\n> > that you can ask the customer to set a pageinspect dump of tuple\n> > headers, and then the problem is obvious. Now if you want to add block\n> > number to that, by all means do so.\n> \n> offset number I assume?\n\nEh, yeah, that.\n\n> > One useful thing to do is to mark a tuple frozen unconditionally if it's\n> > marked hinted XMIN_COMMITTED; no need to consult pg_clog in that case.\n> > The attached (for 9.6) does that; IIRC it would have helped in a couple\n> > of cases.\n> \n> I think it might also have hidden corruption in at least one case where\n> we subsequently fixed a bug (and helped detect at least one unfixed\n> bug). 
That should only be possible if either required clog has been\n> removed, or if relfrozenxid/datfrozenxid are corrupt, right?\n\nYes, that's precisely the reason I never submitted it :-)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 14 Jul 2020 15:54:06 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Jul 14, 2020 at 3:42 PM Andres Freund <andres@anarazel.de> wrote:\n> The \"found xmin ... from before relfrozenxid ...\" cases should all be\n> fixable without needing such a function, and without it making fixing\n> them significantly easier, no? As far as I understand your suggested\n> solution, you need the tid(s) of these tuples, right? If you have those,\n> I don't think it's meaningfully harder to INSERT ... DELETE WHERE ctid =\n> .... or something like that.\n>\n> ISTM that the hard part is finding all problematic tuples in an\n> efficient manner (i.e. that doesn't require one manual VACUUM for each\n> individual block + parsing VACUUMs error message), not \"fixing\" those\n> tuples.\n\nI haven't tried the INSERT ... DELETE approach, but I've definitely\nseen a case where a straight UPDATE did not fix the problem; VACUUM\ncontinued failing afterwards. In that case, it was a system catalog\nthat was affected, and not one where TRUNCATE + re-INSERT was remotely\npractical. The only solution I could come up with was to drop the\ndatabase and recreate it. Fortunately in that case the affected\ndatabase didn't seem to have any actual data in it, but if it had been\na 1TB database I think we would have been in really bad trouble.\n\nDo you have a reason for believing that INSERT ... DELETE is going to\nbe better than UPDATE? It seems to me that either way you can end up\nwith a deleted and thus invisible tuple that you still can't get rid\nof.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 14 Jul 2020 15:59:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi,\n\nOn 2020-07-14 15:59:21 -0400, Robert Haas wrote:\n> On Tue, Jul 14, 2020 at 3:42 PM Andres Freund <andres@anarazel.de> wrote:\n> > The \"found xmin ... from before relfrozenxid ...\" cases should all be\n> > fixable without needing such a function, and without it making fixing\n> > them significantly easier, no? As far as I understand your suggested\n> > solution, you need the tid(s) of these tuples, right? If you have those,\n> > I don't think it's meaningfully harder to INSERT ... DELETE WHERE ctid =\n> > .... or something like that.\n> >\n> > ISTM that the hard part is finding all problematic tuples in an\n> > efficient manner (i.e. that doesn't require one manual VACUUM for each\n> > individual block + parsing VACUUMs error message), not \"fixing\" those\n> > tuples.\n> \n> I haven't tried the INSERT ... DELETE approach, but I've definitely\n> seen a case where a straight UPDATE did not fix the problem; VACUUM\n> continued failing afterwards.\n\nThe only way I can see that to happen is for the old tuple's multixact\nbeing copied forward. That'd not happen with INSERT ... 
DELETE.\n\n\n> In that case, it was a system catalog\n> that was affected, and not one where TRUNCATE + re-INSERT was remotely\n> practical.\n\nFWIW, an rewriting ALTER TABLE would likely also fix it. But obviously\nthat'd require allow_system_table_mods...\n\n\n\n> Do you have a reason for believing that INSERT ... DELETE is going to\n> be better than UPDATE? It seems to me that either way you can end up\n> with a deleted and thus invisible tuple that you still can't get rid\n> of.\n\nNone of the \"new\" checks around freezing would apply to deleted\ntuples. So we shouldn't fail with an error like $subject.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Jul 2020 08:41:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, Jul 15, 2020 at 11:41 AM Andres Freund <andres@anarazel.de> wrote:\n> > Do you have a reason for believing that INSERT ... DELETE is going to\n> > be better than UPDATE? It seems to me that either way you can end up\n> > with a deleted and thus invisible tuple that you still can't get rid\n> > of.\n>\n> None of the \"new\" checks around freezing would apply to deleted\n> tuples. So we shouldn't fail with an error like $subject.\n\nIt can definitely happen at least transiently:\n\nS1:\nrhaas=# create table wubble (a int, b text);\nCREATE TABLE\nrhaas=# insert into wubble values (1, 'glumpf');\nINSERT 0 1\n\nS2:\nrhaas=# begin transaction isolation level repeatable read;\nBEGIN\nrhaas=*# select * from wubble;\n a | b\n---+--------\n 1 | glumpf\n(1 row)\n\nS1:\nrhaas=# delete from wubble;\nDELETE 1\nrhaas=# update pg_class set relfrozenxid =\n(relfrozenxid::text::integer + 1000000)::text::xid where relname =\n'wubble';\nUPDATE 1\nrhaas=# vacuum verbose wubble;\nINFO: vacuuming \"public.wubble\"\nERROR: found xmin 528 from before relfrozenxid 1000527\nCONTEXT: while scanning block 0 of relation \"public.wubble\"\n\nS2:\nrhaas=*# commit;\nCOMMIT\n\nS1:\nrhaas=# vacuum verbose wubble;\nINFO: vacuuming \"public.wubble\"\nINFO: \"wubble\": removed 1 row versions in 1 pages\nINFO: \"wubble\": found 1 removable, 0 nonremovable row versions in 1\nout of 1 pages\nDETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 531\nThere were 0 unused item identifiers.\nSkipped 0 pages due to buffer pins, 0 frozen pages.\n0 pages are entirely empty.\nCPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\nINFO: \"wubble\": truncated 1 to 0 pages\nDETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\nINFO: vacuuming \"pg_toast.pg_toast_16415\"\nINFO: index \"pg_toast_16415_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\nINFO: \"pg_toast_16415\": found 0 removable, 0 nonremovable row\nversions in 0 out of 0 pages\nDETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 532\nThere were 0 unused item identifiers.\nSkipped 0 pages due to buffer pins, 0 frozen pages.\n0 pages are entirely empty.\nCPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\nVACUUM\n\nI see your point, though: the tuple has to be able to survive\nHOT-pruning in order to cause a problem when we check whether it needs\nfreezing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 16 Jul 2020 10:00:40 -0400", "msg_from": "Robert 
Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Thu, Jul 16, 2020 at 10:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I see your point, though: the tuple has to be able to survive\n> HOT-pruning in order to cause a problem when we check whether it needs\n> freezing.\n\nHere's an example where the new sanity checks fail on an invisible\ntuple without any concurrent transactions:\n\n$ initdb\n$ pg_ctl start -l ~/logfile\n$ createdb\n$ psql\n\ncreate table simpsons (a int, b text);\nvacuum freeze;\n\n$ cat > txid.sql\nselect txid_current();\n$ pgbench -t 131072 -c 8 -j 8 -n -f txid.sql\n$ psql\n\ninsert into simpsons values (1, 'homer');\n\n$ pg_ctl stop\n$ pg_resetwal -x 1000 $PGDATA\n$ pg_ctl start -l ~/logfile\n$ psql\n\nupdate pg_class set relfrozenxid = (relfrozenxid::text::integer +\n2000000)::text::xid where relname = 'simpsons';\n\nrhaas=# select * from simpsons;\n a | b\n---+---\n(0 rows)\n\nrhaas=# vacuum simpsons;\nERROR: found xmin 1049082 from before relfrozenxid 2000506\nCONTEXT: while scanning block 0 of relation \"public.simpsons\"\n\nThis is a fairly insane situation, because we should have relfrozenxid\n< tuple xid < xid counter, but instead we have xid counter < tuple xid\n< relfrozenxid, but it demonstrates that it's possible to have a\ndatabase which is sufficiently corrupt that you can't escape from the\nnew sanity checks using only INSERT, UPDATE, and DELETE.\n\nNow, an even easier way to create a table with a tuple that prevents\nvacuuming and also can't just be deleted is to simply remove a\nrequired pg_clog file (and maybe restart the server to clear out any\ncached data in the SLRUs). What we typically do with customers who\nneed to recover from that situation today is give them a script to\nfabricate a bogus CLOG file that shows all transactions as committed\n(or, perhaps, aborted). But I think that the tools proposed on this\nthread might be a better approach in certain cases. If the problem is\nthat a pg_clog file vanished, then recreating it with whatever content\nyou think is closest to what was probably there before is likely the\nbest you can do. But if you've got some individual tuples with crazy\nxmin values, you don't really want to drop matching files in pg_clog;\nit's better to fix the tuples.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 16 Jul 2020 12:14:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi All,\n\nAttached is the patch that adds heap_force_kill(regclass, tid[]) and\nheap_force_freeze(regclass, tid[]) functions which Robert mentioned in the\nfirst email in this thread. The patch basically adds an extension named\npg_surgery that contains these functions. Please have a look and let me\nknow your feedback. 
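\n\nJust to make the intended usage concrete, a minimal example would look something like this (the table name and tids here are only illustrative):\n\n-- after building and installing the module from the attached patch\ncreate extension pg_surgery;\n\n-- forcefully mark a couple of damaged tuples as dead, then freeze another one\nselect heap_force_kill('mytab'::regclass, ARRAY['(0, 1)', '(0, 2)']::tid[]);\nselect heap_force_freeze('mytab'::regclass, ARRAY['(0, 3)']::tid[]);\n\n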
Thank you.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\nOn Thu, Jul 16, 2020 at 9:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Jul 16, 2020 at 10:00 AM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> > I see your point, though: the tuple has to be able to survive\n> > HOT-pruning in order to cause a problem when we check whether it needs\n> > freezing.\n>\n> Here's an example where the new sanity checks fail on an invisible\n> tuple without any concurrent transactions:\n>\n> $ initdb\n> $ pg_ctl start -l ~/logfile\n> $ createdb\n> $ psql\n>\n> create table simpsons (a int, b text);\n> vacuum freeze;\n>\n> $ cat > txid.sql\n> select txid_current();\n> $ pgbench -t 131072 -c 8 -j 8 -n -f txid.sql\n> $ psql\n>\n> insert into simpsons values (1, 'homer');\n>\n> $ pg_ctl stop\n> $ pg_resetwal -x 1000 $PGDATA\n> $ pg_ctl start -l ~/logfile\n> $ psql\n>\n> update pg_class set relfrozenxid = (relfrozenxid::text::integer +\n> 2000000)::text::xid where relname = 'simpsons';\n>\n> rhaas=# select * from simpsons;\n> a | b\n> ---+---\n> (0 rows)\n>\n> rhaas=# vacuum simpsons;\n> ERROR: found xmin 1049082 from before relfrozenxid 2000506\n> CONTEXT: while scanning block 0 of relation \"public.simpsons\"\n>\n> This is a fairly insane situation, because we should have relfrozenxid\n> < tuple xid < xid counter, but instead we have xid counter < tuple xid\n> < relfrozenxid, but it demonstrates that it's possible to have a\n> database which is sufficiently corrupt that you can't escape from the\n> new sanity checks using only INSERT, UPDATE, and DELETE.\n>\n> Now, an even easier way to create a table with a tuple that prevents\n> vacuuming and also can't just be deleted is to simply remove a\n> required pg_clog file (and maybe restart the server to clear out any\n> cached data in the SLRUs). What we typically do with customers who\n> need to recover from that situation today is give them a script to\n> fabricate a bogus CLOG file that shows all transactions as committed\n> (or, perhaps, aborted). But I think that the tools proposed on this\n> thread might be a better approach in certain cases. If the problem is\n> that a pg_clog file vanished, then recreating it with whatever content\n> you think is closest to what was probably there before is likely the\n> best you can do. But if you've got some individual tuples with crazy\n> xmin values, you don't really want to drop matching files in pg_clog;\n> it's better to fix the tuples.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>", "msg_date": "Fri, 24 Jul 2020 14:35:08 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "\n\n> 24 июля 2020 г., в 14:05, Ashutosh Sharma <ashu.coek88@gmail.com> написал(а):\n> \n> Attached is the patch that adds heap_force_kill(regclass, tid[]) and heap_force_freeze(regclass, tid[]) functions which Robert mentioned in the first email in this thread. The patch basically adds an extension named pg_surgery that contains these functions. Please have a look and let me know your feedback. Thank you.\n\nThanks for the patch!\nI have just few random thoughts.\n\nI think here we should report that we haven't done what was asked.\n+\t\t\t/* Nothing to do if the itemid is unused or already dead. 
*/\n+\t\t\tif (!ItemIdIsUsed(itemid) || ItemIdIsDead(itemid))\n+\t\t\t\tcontinue;\n\nAlso, should we try to fix VM along the way?\nAre there any caveats with concurrent VACUUM? (I do not see any, just asking)\nIt would be good to have some checks for interrupts in safe places.\n\nI think we should not trust user entierly here. I'd prefer validation and graceful exit, not a core dump.\n+\t\tAssert(noffs <= PageGetMaxOffsetNumber(page));\n\nFor some reason we had unlogged versions of these functions. But I do not recall exact rationale..\nAlso, I'd be happy if we had something like \"Restore this tuple iff this does not break unique constraint\". To do so we need to sort tids by xmin\\xmax, to revive most recent data first.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sun, 26 Jul 2020 22:54:35 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi,\n\nThanks for sharing your thoughts. Please find my comments inline below:\n\n\n>\n> I think here we should report that we haven't done what was asked.\n> + /* Nothing to do if the itemid is unused or\n> already dead. */\n> + if (!ItemIdIsUsed(itemid) || ItemIdIsDead(itemid))\n> + continue;\n>\n>\nOkay. Will add a log message saying \"skipping tid ... because ...\"\n\n\n> Also, should we try to fix VM along the way?\n>\n\nI think we should let VACUUM do that.\n\n\n> Are there any caveats with concurrent VACUUM? (I do not see any, just\n> asking)\n>\n\nAs of now, I don't see any.\n\n\n> It would be good to have some checks for interrupts in safe places.\n>\n\nI think I have already added those wherever I felt it was required. If you\nfeel it's missing somewhere, it could be good if you could point it out.\n\n\n> I think we should not trust user entierly here. I'd prefer validation and\n> graceful exit, not a core dump.\n> + Assert(noffs <= PageGetMaxOffsetNumber(page));\n>\n>\nYeah, sounds reasonable. Will do that.\n\n\n> For some reason we had unlogged versions of these functions. But I do not\n> recall exact rationale..\n> Also, I'd be happy if we had something like \"Restore this tuple iff this\n> does not break unique constraint\". To do so we need to sort tids by\n> xmin\\xmax, to revive most recent data first.\n>\n\nI didn't get this point. Could you please elaborate.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nHi,Thanks for sharing your thoughts. Please find my comments inline below: \n\nI think here we should report that we haven't done what was asked.\n+                       /* Nothing to do if the itemid is unused or already dead. */\n+                       if (!ItemIdIsUsed(itemid) || ItemIdIsDead(itemid))\n+                               continue;\nOkay. Will add a log message saying \"skipping tid ... because ...\" \nAlso, should we try to fix VM along the way?I think we should let VACUUM do that. \nAre there any caveats with concurrent VACUUM? (I do not see any, just asking)As of now, I don't see any. \nIt would be good to have some checks for interrupts in safe places.I think I have already added those wherever I felt it was required. If you feel it's missing somewhere, it could be good if you could point it out. \nI think we should not trust user entierly here. I'd prefer validation and graceful exit, not a core dump.\n+               Assert(noffs <= PageGetMaxOffsetNumber(page));\nYeah, sounds reasonable. Will do that. 
\nFor some reason we had unlogged versions of these functions. But I do not recall exact rationale..\nAlso, I'd be happy if we had something like \"Restore this tuple iff this does not break unique constraint\". To do so we need to sort tids by xmin\\xmax, to revive most recent data first.I didn't get this point. Could you please elaborate. --With Regards,Ashutosh SharmaEnterpriseDB:http://www.enterprisedb.com", "msg_date": "Mon, 27 Jul 2020 10:06:54 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "\n\n> 27 июля 2020 г., в 09:36, Ashutosh Sharma <ashu.coek88@gmail.com> написал(а):\n> \n> > Also, should we try to fix VM along the way?\n> \n> I think we should let VACUUM do that.\nSometimes VACUUM will not get to these pages, because they are marked All Frozen.\nAn possibly some tuples will get stale on this page again\n\n> > Are there any caveats with concurrent VACUUM? (I do not see any, just asking)\n> \n> As of now, I don't see any.\nVACUUM has collection of dead item pointers. We will not resurrect any of them, right?\n\n> > It would be good to have some checks for interrupts in safe places.\n> \n> I think I have already added those wherever I felt it was required. If you feel it's missing somewhere, it could be good if you could point it out.\nSorry, I just overlooked that call, everything is fine here.\n\n> > Also, I'd be happy if we had something like \"Restore this tuple iff this does not break unique constraint\". To do so we need to sort tids by xmin\\xmax, to revive most recent data first.\n> \n> I didn't get this point. Could you please elaborate. \nYou may have 10 corrupted tuples for the same record, that was updated 9 times. And if you have unique constraint on the table you may want to have only latest version of the row. So you want to kill 9 tuples and freeze 1.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 28 Jul 2020 13:22:47 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hello Ashutosh,\n\nOn Fri, 24 Jul 2020 at 14:35, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi All,\n>\n> Attached is the patch that adds heap_force_kill(regclass, tid[]) and heap_force_freeze(regclass, tid[]) functions which Robert mentioned in the first email in this thread. The patch basically adds an extension named pg_surgery that contains these functions. Please have a look and let me know your feedback. Thank you.\n>\n\nThanks for the patch.\n\n1. We would be marking buffer dirty and writing wal even if we have\nnot done any changes( ex if we pass invalid/dead tids). Maybe we can\nhandle this better?\n\ncosmetic changes\n1. Maybe \"HTupleSurgicalOption\" instead of \"HTupleForceOption\" and\nalso the variable names could use surgery instead.\n2. 
extension comment pg_surgery.control \"extension to perform surgery\nthe damaged heap table\" -> \"extension to perform surgery on the\ndamaged heap table\"\n\n> On Thu, Jul 16, 2020 at 9:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>>\n>> On Thu, Jul 16, 2020 at 10:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> > I see your point, though: the tuple has to be able to survive\n>> > HOT-pruning in order to cause a problem when we check whether it needs\n>> > freezing.\n>>\n>> Here's an example where the new sanity checks fail on an invisible\n>> tuple without any concurrent transactions:\n>>\n>> $ initdb\n>> $ pg_ctl start -l ~/logfile\n>> $ createdb\n>> $ psql\n>>\n>> create table simpsons (a int, b text);\n>> vacuum freeze;\n>>\n>> $ cat > txid.sql\n>> select txid_current();\n>> $ pgbench -t 131072 -c 8 -j 8 -n -f txid.sql\n>> $ psql\n>>\n>> insert into simpsons values (1, 'homer');\n>>\n>> $ pg_ctl stop\n>> $ pg_resetwal -x 1000 $PGDATA\n>> $ pg_ctl start -l ~/logfile\n>> $ psql\n>>\n>> update pg_class set relfrozenxid = (relfrozenxid::text::integer +\n>> 2000000)::text::xid where relname = 'simpsons';\n>>\n>> rhaas=# select * from simpsons;\n>> a | b\n>> ---+---\n>> (0 rows)\n>>\n>> rhaas=# vacuum simpsons;\n>> ERROR: found xmin 1049082 from before relfrozenxid 2000506\n>> CONTEXT: while scanning block 0 of relation \"public.simpsons\"\n>>\n>> This is a fairly insane situation, because we should have relfrozenxid\n>> < tuple xid < xid counter, but instead we have xid counter < tuple xid\n>> < relfrozenxid, but it demonstrates that it's possible to have a\n>> database which is sufficiently corrupt that you can't escape from the\n>> new sanity checks using only INSERT, UPDATE, and DELETE.\n>>\n>> Now, an even easier way to create a table with a tuple that prevents\n>> vacuuming and also can't just be deleted is to simply remove a\n>> required pg_clog file (and maybe restart the server to clear out any\n>> cached data in the SLRUs). What we typically do with customers who\n>> need to recover from that situation today is give them a script to\n>> fabricate a bogus CLOG file that shows all transactions as committed\n>> (or, perhaps, aborted). But I think that the tools proposed on this\n>> thread might be a better approach in certain cases. If the problem is\n>> that a pg_clog file vanished, then recreating it with whatever content\n>> you think is closest to what was probably there before is likely the\n>> best you can do. But if you've got some individual tuples with crazy\n>> xmin values, you don't really want to drop matching files in pg_clog;\n>> it's better to fix the tuples.\n>>\n>> --\n>> Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>>\n\n\n-- \n\nM Beena Emerson\n\nSr. Software Engineer\n\n\nedbpostgres.com\n\n\n", "msg_date": "Wed, 29 Jul 2020 02:12:12 +0530", "msg_from": "MBeena Emerson <mbeena.emerson@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "> > I think we should let VACUUM do that.\n> Sometimes VACUUM will not get to these pages, because they are marked All\n> Frozen.\n> An possibly some tuples will get stale on this page again\n>\n\nHmm, okay, will have a look into this. Thanks.\n\n\n>\n> > > Are there any caveats with concurrent VACUUM? (I do not see any, just\n> asking)\n> >\n> > As of now, I don't see any.\n> VACUUM has collection of dead item pointers. 
We will not resurrect any of\n> them, right?\n>\n\nWe won't be doing any such things.\n\n\n> > > It would be good to have some checks for interrupts in safe places.\n> >\n> > I think I have already added those wherever I felt it was required. If\n> you feel it's missing somewhere, it could be good if you could point it out.\n> Sorry, I just overlooked that call, everything is fine here.\n>\n\nOkay, thanks for confirming.\n\n\n> > > Also, I'd be happy if we had something like \"Restore this tuple iff\n> this does not break unique constraint\". To do so we need to sort tids by\n> xmin\\xmax, to revive most recent data first.\n> >\n> > I didn't get this point. Could you please elaborate.\n> You may have 10 corrupted tuples for the same record, that was updated 9\n> times. And if you have unique constraint on the table you may want to have\n> only latest version of the row. So you want to kill 9 tuples and freeze 1.\n>\n\nOkay, in that case, users need to pass the tids of the 9 tuples that they\nwant to kill to heap_force_kill function and the tid of the tuple that they\nwant to be marked as frozen to heap_force_freeze function. Just to inform\nyou that this tool is not used to detect any data corruption, it is just\nmeant to remove/clean the corrupted data in a table so that the operations\nlike vacuum, pg_dump/restore succeeds. It's users responsibility to\nidentify the corrupted data and pass its tid to either of these functions\nas per the requirement.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n> I think we should let VACUUM do that.\nSometimes VACUUM will not get to these pages, because they are marked All Frozen.\nAn possibly some tuples will get stale on this page againHmm, okay, will have a look into this. Thanks. \n\n> > Are there any caveats with concurrent VACUUM? (I do not see any, just asking)\n> \n> As of now, I don't see any.\nVACUUM has collection of dead item pointers. We will not resurrect any of them, right?We won't be doing any such things. \n> > It would be good to have some checks for interrupts in safe places.\n> \n> I think I have already added those wherever I felt it was required. If you feel it's missing somewhere, it could be good if you could point it out.\nSorry, I just overlooked that call, everything is fine here.Okay, thanks for confirming. \n> > Also, I'd be happy if we had something like \"Restore this tuple iff this does not break unique constraint\". To do so we need to sort tids by xmin\\xmax, to revive most recent data first.\n> \n> I didn't get this point. Could you please elaborate. \nYou may have 10 corrupted tuples for the same record, that was updated 9 times. And if you have unique constraint on the table you may want to have only latest version of the row. So you want to kill 9 tuples and freeze 1.Okay, in that case, users need to pass the tids of the 9 tuples that they want to kill to heap_force_kill function and the tid of the tuple that they want to be marked as frozen to heap_force_freeze function. Just to inform you that this tool is not used to detect any data corruption, it is just meant to remove/clean the corrupted data in a table so that the operations like vacuum, pg_dump/restore succeeds. 
It's users responsibility to identify the corrupted data and pass its tid to either of these functions as per the requirement.--With Regards,Ashutosh SharmaEnterpriseDB:http://www.enterprisedb.com", "msg_date": "Wed, 29 Jul 2020 09:58:08 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi Beena,\n\nThanks for the review.\n\n1. We would be marking buffer dirty and writing wal even if we have\n> not done any changes( ex if we pass invalid/dead tids). Maybe we can\n> handle this better?\n>\n\nYeah, we can skip this when nothing has changed. Will take care of it in\nthe next version of patch.\n\n\n> cosmetic changes\n> 1. Maybe \"HTupleSurgicalOption\" instead of \"HTupleForceOption\" and\n> also the variable names could use surgery instead.\n>\n\nI think that looks fine. I would rather prefer the word \"Force\" just\nbecause all the enum options contain the word \"Force\" in it.\n\n\n> 2. extension comment pg_surgery.control \"extension to perform surgery\n> the damaged heap table\" -> \"extension to perform surgery on the\n> damaged heap table\"\n>\n\nOkay, will fix that typo.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nHi Beena,Thanks for the review.\n1. We would be marking buffer dirty and writing wal even if we have\nnot done any changes( ex if we pass invalid/dead tids). Maybe we can\nhandle this better?Yeah, we can skip this when nothing has changed. Will take care of it in the next version of patch. \ncosmetic changes\n1. Maybe \"HTupleSurgicalOption\" instead of \"HTupleForceOption\" and\nalso the variable names could use surgery instead.I think that looks fine. I would rather prefer the word \"Force\" just because all the enum options contain the word \"Force\" in it. \n2. extension comment pg_surgery.control \"extension to perform surgery\nthe damaged heap table\" -> \"extension to perform surgery on the\ndamaged heap table\"Okay, will fix that typo.--With Regards,Ashutosh SharmaEnterpriseDB:http://www.enterprisedb.com", "msg_date": "Wed, 29 Jul 2020 10:07:49 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Jul 14, 2020 at 9:12 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> A number of EDB customers have had this error crop on their tables for\n> reasons that we have usually not been able to determine. In many\n\n<long-shot>Do you happen to know if they ever used the\nsnapshot-too-old feature?</long-shot>\n\n\n", "msg_date": "Wed, 29 Jul 2020 19:22:38 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, Jul 29, 2020 at 3:23 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Jul 14, 2020 at 9:12 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > A number of EDB customers have had this error crop on their tables for\n> > reasons that we have usually not been able to determine. In many\n>\n> <long-shot>Do you happen to know if they ever used the\n> snapshot-too-old feature?</long-shot>\n\nI don't have any reason to believe that they did. 
Why?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 29 Jul 2020 09:36:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Thu, Jul 30, 2020 at 1:36 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Jul 29, 2020 at 3:23 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Tue, Jul 14, 2020 at 9:12 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > A number of EDB customers have had this error crop on their tables for\n> > > reasons that we have usually not been able to determine. In many\n> >\n> > <long-shot>Do you happen to know if they ever used the\n> > snapshot-too-old feature?</long-shot>\n>\n> I don't have any reason to believe that they did. Why?\n\nNothing specific, I was just contemplating the problems with that\nfeature and the patches[1] proposed so far to fix some of them, and\nwhat types of corruption might be possible due to that stuff, and it\noccurred to me to ask if you'd thought about that in connection to\nthese reports.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BTgmoY%3Daqf0zjTD%2B3dUWYkgMiNDegDLFjo%2B6ze%3DWtpik%2B3XqA%40mail.gmail.com\n\n\n", "msg_date": "Thu, 30 Jul 2020 10:29:52 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, Jul 29, 2020 at 9:58 AM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n>\n> > I think we should let VACUUM do that.\n>> Sometimes VACUUM will not get to these pages, because they are marked All\n>> Frozen.\n>> An possibly some tuples will get stale on this page again\n>>\n>\n> Hmm, okay, will have a look into this. Thanks.\n>\n\nI had a look over this and found that one can use the DISABLE_PAGE_SKIPPING\noption with VACUUM to disable all its page-skipping behavior.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n", "msg_date": "Fri, 31 Jul 2020 18:02:24 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Attached is the new version of patch that addresses the comments from\nAndrey and Beena.\n\nOn Wed, Jul 29, 2020 at 10:07 AM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> Hi Beena,\n>\n> Thanks for the review.\n>\n> 1. We would be marking buffer dirty and writing wal even if we have\n>> not done any changes( ex if we pass invalid/dead tids). Maybe we can\n>> handle this better?\n>>\n>\n> Yeah, we can skip this when nothing has changed. Will take care of it in\n> the next version of patch.\n>\n>\n>> cosmetic changes\n>> 1.
Maybe \"HTupleSurgicalOption\" instead of \"HTupleForceOption\" and\n>> also the variable names could use surgery instead.\n>>\n>\n> I think that looks fine. I would rather prefer the word \"Force\" just\n> because all the enum options contain the word \"Force\" in it.\n>\n>\n>> 2. extension comment pg_surgery.control \"extension to perform surgery\n>> the damaged heap table\" -> \"extension to perform surgery on the\n>> damaged heap table\"\n>>\n>\n> Okay, will fix that typo.\n>\n> --\n> With Regards,\n> Ashutosh Sharma\n> EnterpriseDB:http://www.enterprisedb.com\n>", "msg_date": "Fri, 31 Jul 2020 18:22:23 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "\n\n> 31 июля 2020 г., в 17:32, Ashutosh Sharma <ashu.coek88@gmail.com> написал(а):\n> \n> \n> On Wed, Jul 29, 2020 at 9:58 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> \n> > I think we should let VACUUM do that.\n> Sometimes VACUUM will not get to these pages, because they are marked All Frozen.\n> An possibly some tuples will get stale on this page again\n> \n> Hmm, okay, will have a look into this. Thanks.\n> \n> I had a look over this and found that one can use the DISABLE_PAGE_SKIPPING option with VACUUM to disable all its page-skipping behavior.\n\nOh, wow, I didn't know that. Thanks! This actually will do the trick.\nI'll try to review your patch again next week.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 31 Jul 2020 19:57:22 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Fri, Jul 31, 2020 at 8:52 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> Attached is the new version of patch that addresses the comments from Andrey and Beena.\n\n+PGFILEDESC = \"pg_surgery - perform surgery on the damaged heap table\"\n\nthe -> a\n\nI also suggest: heap table -> relation, because we might want to\nextend this to other cases later.\n\n+-- toast table.\n+begin;\n+create table ttab(a text);\n+insert into ttab select string_agg(chr(floor(random() * 26)::int +\n65), '') from generate_series(1,10000);\n+select * from ttab where xmin = 2;\n+ a\n+---\n+(0 rows)\n+\n+select heap_force_freeze('ttab'::regclass, ARRAY['(0, 1)']::tid[]);\n+ heap_force_freeze\n+-------------------\n+\n+(1 row)\n+\n\nI don't understand the point of this. You're not testing the function\non the TOAST table; you're testing it on the main table when there\nhappens to be a TOAST table that is probably getting used for\nsomething. But that's not really relevant to what is being tested\nhere, so as written this seems redundant with the previous cases.\n\n+-- test pg_surgery functions with the unsupported relations. Should fail.\n\nPlease name the specific functions being tested here in case we add\nmore in the future that are tested separately.\n\n+++ b/contrib/pg_surgery/heap_surgery_funcs.c\n\nI think we could drop _funcs from the file name.\n\n+#ifdef PG_MODULE_MAGIC\n+PG_MODULE_MAGIC;\n+#endif\n\nThe #ifdef here is not required, and if you look at other contrib\nmodules you'll see that they don't have it.\n\nI don't like all the macros at the top of the file much. CHECKARRVALID\nis only used in one place, so it seems to me that you might as well\njust inline it and lose the macro. 
Likewise for SORT and ARRISEMPTY.\n\nOnce you do that, heap_force_common() can just compute the number of\narray elements once, instead of doing it once inside ARRISEMPTY, then\nagain inside SORT, and then a third time to initialize ntids. You can\njust have a local variable in that function that is set once and then\nused as needed. Then you are only doing ARRNELEMS once, so you can get\nrid of that macro too. The same technique can be used to get rid of\nARRPTR. So then all the macros go away without introducing any code\nduplication.\n\n+/* Options to forcefully change the state of a heap tuple. */\n+typedef enum HTupleForceOption\n+{\n+ FORCE_KILL,\n+ FORCE_FREEZE\n+} HTupleForceOption;\n\nI suggest un-abbreviating HTuple to HeapTuple and un-abbreviating the\nenum members to HEAP_FORCE_KILL and HEAP_FORCE_FREE. Also, how about\noption -> operation?\n\n+ return heap_force_common(fcinfo, FORCE_KILL);\n\nI think it might be more idiomatic to use PG_RETURN_DATUM here. I\nknow it's the same thing, though, and perhaps I'm even wrong about the\nprevailing style.\n\n+ Assert(force_opt == FORCE_KILL || force_opt == FORCE_FREEZE);\n\nI think this is unnecessary. It's an enum with 2 values.\n\n+ if (ARRISEMPTY(ta))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"empty tid array\")));\n\nI don't see why this should be an error. Why can't we just continue\nnormally and if it does nothing, it does nothing? You'd need to change\nthe do..while loop to a while loop so that the end condition is tested\nat the top, but that seems fine.\n\n+ rel = relation_open(relid, AccessShareLock);\n\nMaybe we should take RowExclusiveLock, since we are going to modify\nrows. Not sure how much it matters, though.\n\n+ if (!superuser() && GetUserId() != rel->rd_rel->relowner)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n+ errmsg(\"must be superuser or object owner to use %s.\",\n+ force_opt == FORCE_KILL ? \"heap_force_kill()\" :\n+ \"heap_force_freeze()\")));\n\nThis is the wrong way to do a permissions check, and it's also the\nwrong way to write an error message about having failed one. To see\nthe correct method, grep for pg_class_aclcheck. However, I think that\nwe shouldn't in general trust the object owner to do this, unless the\nsuper-user gave permission. This is a data-corrupting operation, and\nonly the boss is allowed to authorize it. So I think you should also\nadd REVOKE EXECUTE FROM PUBLIC statements to the SQL file, and then\nhave this check as a backup. Then, the superuser is always allowed,\nand if they choose to GRANT EXECUTE on this function to some users,\nthose users can do it for their own relations, but not others.\n\n+ if (rel->rd_rel->relam != HEAP_TABLE_AM_OID)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"only heap AM is supported\")));\n+\n+ check_relation_relkind(rel);\n\nSeems like these checks are in the wrong order. Also, maybe you could\ncall the function something like check_relation_ok() and put the\npermissions test, the relkind test, and the relam test all inside of\nit, just to tighten up the code in this main function a bit.\n\n+ if (noffs > maxoffset)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"number of offsets specified for block %u exceeds the max\noffset number %u\",\n+ blkno, maxoffset)));\n\nHmm, this doesn't seem quite right. 
The actual problem is if an\nindividual item pointer's offset number is greater than maxoffset,\nwhich can be true even if the total number of offsets is less than\nmaxoffset. So I think you need to remove this check and add a check\ninside the loop which follows that offnos[i] is in range.\n\nThe way you've structured that loop is actually problematic -- I don't\nthink we want to be calling elog() or ereport() inside a critical\nsection. You could fix the case that checks for an invalid force_opt\nby just doing if (op == HEAP_FORCE_KILL) { ... } else { Assert(op ==\nHEAP_FORCE_FREEZE); ... }, or by using a switch with no default. The\nNOTICE case you have here is a bigger problem. You really cannot\nmodify the buffer like this and then decide, oops, never mind, I think\nI won't mark it dirty or write WAL for the changes. If you do that,\nthe buffer is still in memory, but it's now been modified. A\nsubsequent operation that modifies it will start with the altered\nstate you created here, quite possibly leading to WAL that cannot be\ncorrectly replayed on the standby. In other words, you've got to\ndecide for certain whether you want to proceed with the operation\n*before* you enter the critical section. You also need to emit any\nmessages before or after the critical section. So you could:\n\n1. If you encounter a TID that's unused or dead, skip it silently.\n-or-\n2. Loop over offsets twice. The first time, ERROR if you find any one\nthat is unused or dead. Then start a critical section. Loop again and\ndo the real work.\n-or-\n3. Like #2, but emit a NOTICE about a unused or dead item rather than\nan ERROR, and skip the critical section and the second loop if you did\nthat >0 times.\n-or-\n4. Like #3, but don't skip anything just because you emitted a NOTICE\nabout the page.\n\n#3 is closest to the behavior you have now, but I'm not sure what else\nit has going for it. It doesn't seem like particularly intuitive\nbehavior that finding a dead or unused TID should cause other item\nTIDs on the same page not to get processed while still permitting TIDs\non other pages to get processed. I don't think that's the behavior\nusers will be expecting. I think my vote is for #4, which will emit a\nNOTICE about any TID that is dead or unused -- and I guess also about\nany TID whose offset number is out of range -- but won't actually skip\nany operations that can be performed. But there are decent arguments\nfor #1 or #2 too.\n\n+ (errmsg(\"skipping tid (%u, %u) because it is already marked %s\",\n+ blkno, offnos[i],\n+ ItemIdIsDead(itemid) ? \"dead\" : \"unused\")));\n\nI believe this violates our guidelines on message construction. Have\ntwo completely separate messages -- and maybe lose the word \"already\":\n\n\"skipping tid (%u, %u) because it is dead\"\n\"skipping tid (%u, %u) because it is unused\"\n\nThe point of this is that it makes it easier for translators.\n\nI see very little point in what verify_tid() is doing. Before using\neach block number, we should check that it's less than or equal to a\ncached value of RelationGetNumberOfBlocks(rel). That's necessary in\nany case to avoid funny errors; and then the check here against\nspecifically InvalidBlockNumber is redundant. 
For the offset number,\nsame thing: we need to check each offset against the page's\nPageGetMaxOffsetNumber(page); and if we do that then we don't need\nthese checks.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 31 Jul 2020 14:47:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi Robert,\n\nThanks for the review.\n\nI've gone through all your review comments and understood all of them\nexcept this one:\n\nYou really cannot\n> modify the buffer like this and then decide, oops, never mind, I think\n> I won't mark it dirty or write WAL for the changes. If you do that,\n> the buffer is still in memory, but it's now been modified. A\n> subsequent operation that modifies it will start with the altered\n> state you created here, quite possibly leading to WAL that cannot be\n> correctly replayed on the standby. In other words, you've got to\n> decide for certain whether you want to proceed with the operation\n> *before* you enter the critical section.\n>\n\nCould you please explain this point once more in detail? I am not quite\nable to understand under what circumstances a buffer would be modified, but\nwon't be marked as dirty or a WAL won't be written for it.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nHi Robert,Thanks for the review.I've gone through all your review comments and understood all of them except this one:You really cannot\nmodify the buffer like this and then decide, oops, never mind, I think\nI won't mark it dirty or write WAL for the changes. If you do that,\nthe buffer is still in memory, but it's now been modified. A\nsubsequent operation that modifies it will start with the altered\nstate you created here, quite possibly leading to WAL that cannot be\ncorrectly replayed on the standby. In other words, you've got to\ndecide for certain whether you want to proceed with the operation\n*before* you enter the critical section. Could you please explain this point once more in detail? I am not quite able to understand under what circumstances a buffer would be modified, but won't be marked as dirty or a WAL won't be written for it.--With Regards,Ashutosh SharmaEnterpriseDB:http://www.enterprisedb.com", "msg_date": "Mon, 3 Aug 2020 14:35:38 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Aug 3, 2020 at 5:05 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> Could you please explain this point once more in detail? I am not quite able to understand under what circumstances a buffer would be modified, but won't be marked as dirty or a WAL won't be written for it.\n\nWhenever this branch is taken:\n\n+ if (nskippedItems == noffs)\n+ goto skip_wal;\n\nAt this point you have already modified the page, using ItemIdSetDead,\nHeapTupleHeaderSet*, and/or directly adjusting htup->infomask. If this\nbranch is taken, then MarkBufferDirty() and log_newpage_buffer() are\nskipped.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 3 Aug 2020 09:35:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... 
from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Aug 3, 2020 at 7:06 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Aug 3, 2020 at 5:05 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > Could you please explain this point once more in detail? I am not quite able to understand under what circumstances a buffer would be modified, but won't be marked as dirty or a WAL won't be written for it.\n>\n> Whenever this branch is taken:\n>\n> + if (nskippedItems == noffs)\n> + goto skip_wal;\n>\n\nIf the above path is taken that means none of the items in the page\ngot changed. As per the following if-check whenever an item in the\noffnos[] array is found dead or unused, it is skipped (due to continue\nstatement) which means the item is neither marked dead nor it is\nmarked frozen. Now, if this happens for all the items in a page, then\nthe above condition (nskippedItems == noffs) would be true and hence\nthe buffer would remain unchanged, so, we don't mark such a buffer as\ndirty and neither do any WAL logging for it. This is my understanding,\nplease let me know if I am missing something here. Thank you.\n\nif (!ItemIdIsUsed(itemid) || ItemIdIsDead(itemid))\n{\n nskippedItems++;\n ereport(NOTICE,\n (errmsg(\"skipping tid (%u, %u) because it is\nalready marked %s\",\n blkno, offnos[i],\n ItemIdIsDead(itemid) ? \"dead\" : \"unused\")));\n continue;\n}\n\n> At this point you have already modified the page, using ItemIdSetDead,\n> HeapTupleHeaderSet*, and/or directly adjusting htup->infomask. If this\n> branch is taken, then MarkBufferDirty() and log_newpage_buffer() are\n> skipped.\n>\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 Aug 2020 21:43:04 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi Robert,\n\nThanks for the review. Please find my comments inline:\n\nOn Sat, Aug 1, 2020 at 12:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jul 31, 2020 at 8:52 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > Attached is the new version of patch that addresses the comments from Andrey and Beena.\n>\n> +PGFILEDESC = \"pg_surgery - perform surgery on the damaged heap table\"\n>\n> the -> a\n>\n> I also suggest: heap table -> relation, because we might want to\n> extend this to other cases later.\n>\n\nCorrected.\n\n> +-- toast table.\n> +begin;\n> +create table ttab(a text);\n> +insert into ttab select string_agg(chr(floor(random() * 26)::int +\n> 65), '') from generate_series(1,10000);\n> +select * from ttab where xmin = 2;\n> + a\n> +---\n> +(0 rows)\n> +\n> +select heap_force_freeze('ttab'::regclass, ARRAY['(0, 1)']::tid[]);\n> + heap_force_freeze\n> +-------------------\n> +\n> +(1 row)\n> +\n>\n> I don't understand the point of this. You're not testing the function\n> on the TOAST table; you're testing it on the main table when there\n> happens to be a TOAST table that is probably getting used for\n> something. But that's not really relevant to what is being tested\n> here, so as written this seems redundant with the previous cases.\n>\n\nYeah, it's being tested on the main table, not on a toast table. I've\nremoved this test-case and also restricted direct access to the toast\ntable using heap_force_kill/freeze functions. I think we shouldn't be\nusing these functions to do any changes in the toast table. 
We will\nonly use these functions with the main table and let VACUUM remove the\ncorresponding data chunks (pointed by the tuple that got removed from\nthe main table).\n\nAnother option would be to identify all the data chunks corresponding\nto the tuple (ctid) being killed from the main table and remove them\none by one. We will only do this if the tuple from the main table that\nhas been marked as killed has an external storage. We will have to add\na bunch of code for this otherwise we can let VACUUM do this for us.\nLet me know your thoughts on this.\n\n> +-- test pg_surgery functions with the unsupported relations. Should fail.\n>\n> Please name the specific functions being tested here in case we add\n> more in the future that are tested separately.\n>\n\nDone.\n\n> +++ b/contrib/pg_surgery/heap_surgery_funcs.c\n>\n> I think we could drop _funcs from the file name.\n>\n\nDone.\n\n> +#ifdef PG_MODULE_MAGIC\n> +PG_MODULE_MAGIC;\n> +#endif\n>\n> The #ifdef here is not required, and if you look at other contrib\n> modules you'll see that they don't have it.\n>\n\nOkay, done.\n\n> I don't like all the macros at the top of the file much. CHECKARRVALID\n> is only used in one place, so it seems to me that you might as well\n> just inline it and lose the macro. Likewise for SORT and ARRISEMPTY.\n>\n\nDone.\n\n> Once you do that, heap_force_common() can just compute the number of\n> array elements once, instead of doing it once inside ARRISEMPTY, then\n> again inside SORT, and then a third time to initialize ntids. You can\n> just have a local variable in that function that is set once and then\n> used as needed. Then you are only doing ARRNELEMS once, so you can get\n> rid of that macro too. The same technique can be used to get rid of\n> ARRPTR. So then all the macros go away without introducing any code\n> duplication.\n>\n\nDone.\n\n> +/* Options to forcefully change the state of a heap tuple. */\n> +typedef enum HTupleForceOption\n> +{\n> + FORCE_KILL,\n> + FORCE_FREEZE\n> +} HTupleForceOption;\n>\n> I suggest un-abbreviating HTuple to HeapTuple and un-abbreviating the\n> enum members to HEAP_FORCE_KILL and HEAP_FORCE_FREE.\n\nDone.\n\nAlso, how about\n> option -> operation?\n>\n\nI think both look okay to me.\n\n> + return heap_force_common(fcinfo, FORCE_KILL);\n>\n> I think it might be more idiomatic to use PG_RETURN_DATUM here. I\n> know it's the same thing, though, and perhaps I'm even wrong about the\n> prevailing style.\n>\n\nDone.\n\n> + Assert(force_opt == FORCE_KILL || force_opt == FORCE_FREEZE);\n>\n> I think this is unnecessary. It's an enum with 2 values.\n>\n\nRemoved.\n\n> + if (ARRISEMPTY(ta))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"empty tid array\")));\n>\n> I don't see why this should be an error. Why can't we just continue\n> normally and if it does nothing, it does nothing? You'd need to change\n> the do..while loop to a while loop so that the end condition is tested\n> at the top, but that seems fine.\n>\n\nI think it's okay to have this check. So, just left it as-is. We do\nhave such checks in other contrib modules as well wherever the array\nis being passed as an input to the function.\n\n> + rel = relation_open(relid, AccessShareLock);\n>\n> Maybe we should take RowExclusiveLock, since we are going to modify\n> rows. 
Not sure how much it matters, though.\n>\n\nI don't know how it would make a difference, but still as you said\nreplaced AccessShare with RowExclusive.\n\n> + if (!superuser() && GetUserId() != rel->rd_rel->relowner)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> + errmsg(\"must be superuser or object owner to use %s.\",\n> + force_opt == FORCE_KILL ? \"heap_force_kill()\" :\n> + \"heap_force_freeze()\")));\n>\n> This is the wrong way to do a permissions check, and it's also the\n> wrong way to write an error message about having failed one. To see\n> the correct method, grep for pg_class_aclcheck. However, I think that\n> we shouldn't in general trust the object owner to do this, unless the\n> super-user gave permission. This is a data-corrupting operation, and\n> only the boss is allowed to authorize it. So I think you should also\n> add REVOKE EXECUTE FROM PUBLIC statements to the SQL file, and then\n> have this check as a backup. Then, the superuser is always allowed,\n> and if they choose to GRANT EXECUTE on this function to some users,\n> those users can do it for their own relations, but not others.\n>\n\nDone.\n\n> + if (rel->rd_rel->relam != HEAP_TABLE_AM_OID)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"only heap AM is supported\")));\n> +\n> + check_relation_relkind(rel);\n>\n> Seems like these checks are in the wrong order.\n\nI don't think there is anything wrong with the order. I can see the\nsame order in other contrib modules as well.\n\nAlso, maybe you could\n> call the function something like check_relation_ok() and put the\n> permissions test, the relkind test, and the relam test all inside of\n> it, just to tighten up the code in this main function a bit.\n>\n\nYeah, I've added a couple of functions named sanity_check_relation and\nsanity_check_tid_array and shifted all the sanity checks there.\n\n> + if (noffs > maxoffset)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"number of offsets specified for block %u exceeds the max\n> offset number %u\",\n> + blkno, maxoffset)));\n>\n> Hmm, this doesn't seem quite right. The actual problem is if an\n> individual item pointer's offset number is greater than maxoffset,\n> which can be true even if the total number of offsets is less than\n> maxoffset. So I think you need to remove this check and add a check\n> inside the loop which follows that offnos[i] is in range.\n>\n\nAgreed and done.\n\n> The way you've structured that loop is actually problematic -- I don't\n> think we want to be calling elog() or ereport() inside a critical\n> section. You could fix the case that checks for an invalid force_opt\n> by just doing if (op == HEAP_FORCE_KILL) { ... } else { Assert(op ==\n> HEAP_FORCE_FREEZE); ... }, or by using a switch with no default. The\n> NOTICE case you have here is a bigger problem.\n\nDone.\n\nYou really cannot\n> modify the buffer like this and then decide, oops, never mind, I think\n> I won't mark it dirty or write WAL for the changes. If you do that,\n> the buffer is still in memory, but it's now been modified. A\n> subsequent operation that modifies it will start with the altered\n> state you created here, quite possibly leading to WAL that cannot be\n> correctly replayed on the standby. In other words, you've got to\n> decide for certain whether you want to proceed with the operation\n> *before* you enter the critical section. You also need to emit any\n> messages before or after the critical section. 
So you could:\n>\n\nThis is still not clear. I think Robert needs to respond to my earlier comment.\n\n> I believe this violates our guidelines on message construction. Have\n> two completely separate messages -- and maybe lose the word \"already\":\n>\n> \"skipping tid (%u, %u) because it is dead\"\n> \"skipping tid (%u, %u) because it is unused\"\n>\n> The point of this is that it makes it easier for translators.\n>\n\nDone.\n\n> I see very little point in what verify_tid() is doing. Before using\n> each block number, we should check that it's less than or equal to a\n> cached value of RelationGetNumberOfBlocks(rel). That's necessary in\n> any case to avoid funny errors; and then the check here against\n> specifically InvalidBlockNumber is redundant. For the offset number,\n> same thing: we need to check each offset against the page's\n> PageGetMaxOffsetNumber(page); and if we do that then we don't need\n> these checks.\n>\n\nDone.\n\nPlease check the attached patch for the changes.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com", "msg_date": "Wed, 5 Aug 2020 19:12:02 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Aug 3, 2020 at 12:13 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> If the above path is taken that means none of the items in the page\n> got changed.\n\nOops. I didn't realize that, sorry. Maybe it would be a little more\nclear if instead of \"int nSkippedItems\" you had \"bool\ndid_modify_page\"? Then you could initialize it to false and set it to\ntrue just before doing the page modifications.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 5 Aug 2020 15:34:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, Aug 5, 2020 at 9:42 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> Yeah, it's being tested on the main table, not on a toast table. I've\n> removed this test-case and also restricted direct access to the toast\n> table using heap_force_kill/freeze functions. I think we shouldn't be\n> using these functions to do any changes in the toast table. We will\n> only use these functions with the main table and let VACUUM remove the\n> corresponding data chunks (pointed by the tuple that got removed from\n> the main table).\n\nI agree with removing the test case, but I disagree with restricting\nthis from being used on the TOAST table. These are tools for experts,\nwho may use them as they see fit. It's unlikely that it would be\nuseful to use this on a TOAST table, I think, but not impossible.\n\n> Another option would be to identify all the data chunks corresponding\n> to the tuple (ctid) being killed from the main table and remove them\n> one by one. We will only do this if the tuple from the main table that\n> has been marked as killed has an external storage. 
We will have to add\n> a bunch of code for this otherwise we can let VACUUM do this for us.\n> Let me know your thoughts on this.\n\nI don't think VACUUM will do anything for us automatically -- it isn't\ngoing to know that we force-killed the tuple in the main table.\nNormally, a tuple delete would have to set xmax on the TOAST tuples\nand then VACUUM would do its thing, but in this case that won't\nhappen. So if you force-kill a tuple in the main table you would end\nup with a space leak in the TOAST table.\n\nThe problem here is that one reason you might force-killing a tuple in\nthe main table is because it's full of garbage. If so, trying to\ndecode the tuple so that you can find the TOAST pointers might crash\nor error out, or maybe that part will work but then you'll error out\ntrying to look up the corresponding TOAST tuples, either because the\nvalues are not valid or because the TOAST table itself is generally\nhosed in some way. So I think it is probably best if we keep this tool\nas simple as possible, with as few dependencies as we can, and\ndocument the possible negative outcomes of using it. It's not\nimpossible to recover from a space-leak like this; you can always move\nthe data into a new table with CTAS and then drop the old one. Not\nsure whether CLUSTER or VACUUM FULL would also be sufficient.\n\nSeparately, we might want to add a TOAST-checker to amcheck, or\nenhance the heap-checker Mark is working on, and one of the things it\ncould do is check for TOAST entries to which nothing points. Then if\nyou force-kill tuples in the main table you could also use that tool\nto look for things in the TOAST table that ought to also be\nforce-killed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 5 Aug 2020 15:58:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Thu, Aug 6, 2020 at 1:04 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Aug 3, 2020 at 12:13 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > If the above path is taken that means none of the items in the page\n> > got changed.\n>\n> Oops. I didn't realize that, sorry. Maybe it would be a little more\n> clear if instead of \"int nSkippedItems\" you had \"bool\n> did_modify_page\"? Then you could initialize it to false and set it to\n> true just before doing the page modifications.\n>\n\nOkay, np, in that case, as you suggested, I will replace \"int\nnSkippedItems\" with \"did_modify_page\" to increase the clarity. I will\ndo this change in the next version of patch. Thanks.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Aug 2020 11:33:04 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Thu, Aug 6, 2020 at 1:29 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Aug 5, 2020 at 9:42 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > Yeah, it's being tested on the main table, not on a toast table. I've\n> > removed this test-case and also restricted direct access to the toast\n> > table using heap_force_kill/freeze functions. I think we shouldn't be\n> > using these functions to do any changes in the toast table. 
We will\n> > only use these functions with the main table and let VACUUM remove the\n> > corresponding data chunks (pointed by the tuple that got removed from\n> > the main table).\n>\n> I agree with removing the test case, but I disagree with restricting\n> this from being used on the TOAST table. These are tools for experts,\n> who may use them as they see fit. It's unlikely that it would be\n> useful to use this on a TOAST table, I think, but not impossible.\n>\n\nOkay, If you want I can remove the restriction on a toast table, but,\nthen that means a user can directly remove the data chunks from the\ntoast table without changing anything in the main table. This means we\nwon't be able to query the main table as it will fail with an error\nlike \"ERROR: unexpected chunk number ...\". So, we will have to find\nsome way to identify the pointer to the chunks that got deleted from\nthe toast table and remove that pointer from the main table. We also\nneed to make sure that before we remove a tuple (pointer) from the\nmain table, we identify all the remaining data chunks pointed by this\ntuple and remove them completely only then that table would be\nconsidered to be in a good state. Now, I am not sure if we can\ncurrently do all these things.\n\n> > Another option would be to identify all the data chunks corresponding\n> > to the tuple (ctid) being killed from the main table and remove them\n> > one by one. We will only do this if the tuple from the main table that\n> > has been marked as killed has an external storage. We will have to add\n> > a bunch of code for this otherwise we can let VACUUM do this for us.\n> > Let me know your thoughts on this.\n>\n> I don't think VACUUM will do anything for us automatically -- it isn't\n> going to know that we force-killed the tuple in the main table.\n> Normally, a tuple delete would have to set xmax on the TOAST tuples\n> and then VACUUM would do its thing, but in this case that won't\n> happen. So if you force-kill a tuple in the main table you would end\n> up with a space leak in the TOAST table.\n>\n> The problem here is that one reason you might force-killing a tuple in\n> the main table is because it's full of garbage. If so, trying to\n> decode the tuple so that you can find the TOAST pointers might crash\n> or error out, or maybe that part will work but then you'll error out\n> trying to look up the corresponding TOAST tuples, either because the\n> values are not valid or because the TOAST table itself is generally\n> hosed in some way. So I think it is probably best if we keep this tool\n> as simple as possible, with as few dependencies as we can, and\n> document the possible negative outcomes of using it.\n\nI completely agree with you.\n\nIt's not\n> impossible to recover from a space-leak like this; you can always move\n> the data into a new table with CTAS and then drop the old one. Not\n> sure whether CLUSTER or VACUUM FULL would also be sufficient.\n>\n\nYeah, I think, we can either use CTAS or VACUUM FULL, both look fine.\n\n> Separately, we might want to add a TOAST-checker to amcheck, or\n> enhance the heap-checker Mark is working on, and one of the things it\n> could do is check for TOAST entries to which nothing points. Then if\n> you force-kill tuples in the main table you could also use that tool\n> to look for things in the TOAST table that ought to also be\n> force-killed.\n>\n\nOkay, good to know that. 
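Just to spell out the table-rewrite route mentioned above, I imagine it
would be something along these lines (the table name is only a
placeholder, and any indexes, constraints and privileges on it would
have to be recreated by hand afterwards):

-- copy the surviving rows into a fresh table; dropping the old table
-- also drops its TOAST table, along with any leaked chunks in it
BEGIN;
CREATE TABLE damaged_tbl_new AS SELECT * FROM damaged_tbl;
DROP TABLE damaged_tbl;
ALTER TABLE damaged_tbl_new RENAME TO damaged_tbl;
COMMIT;

-- or, if it turns out to be sufficient:
-- VACUUM FULL damaged_tbl;
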
Thanks for sharing this info.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Aug 2020 11:41:38 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, 5 Aug 2020 at 22:42, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi Robert,\n>\n> Thanks for the review. Please find my comments inline:\n>\n> On Sat, Aug 1, 2020 at 12:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Fri, Jul 31, 2020 at 8:52 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > Attached is the new version of patch that addresses the comments from Andrey and Beena.\n> >\n> > +PGFILEDESC = \"pg_surgery - perform surgery on the damaged heap table\"\n> >\n> > the -> a\n> >\n> > I also suggest: heap table -> relation, because we might want to\n> > extend this to other cases later.\n> >\n>\n> Corrected.\n>\n> > +-- toast table.\n> > +begin;\n> > +create table ttab(a text);\n> > +insert into ttab select string_agg(chr(floor(random() * 26)::int +\n> > 65), '') from generate_series(1,10000);\n> > +select * from ttab where xmin = 2;\n> > + a\n> > +---\n> > +(0 rows)\n> > +\n> > +select heap_force_freeze('ttab'::regclass, ARRAY['(0, 1)']::tid[]);\n> > + heap_force_freeze\n> > +-------------------\n> > +\n> > +(1 row)\n> > +\n> >\n> > I don't understand the point of this. You're not testing the function\n> > on the TOAST table; you're testing it on the main table when there\n> > happens to be a TOAST table that is probably getting used for\n> > something. But that's not really relevant to what is being tested\n> > here, so as written this seems redundant with the previous cases.\n> >\n>\n> Yeah, it's being tested on the main table, not on a toast table. I've\n> removed this test-case and also restricted direct access to the toast\n> table using heap_force_kill/freeze functions. I think we shouldn't be\n> using these functions to do any changes in the toast table. We will\n> only use these functions with the main table and let VACUUM remove the\n> corresponding data chunks (pointed by the tuple that got removed from\n> the main table).\n>\n> Another option would be to identify all the data chunks corresponding\n> to the tuple (ctid) being killed from the main table and remove them\n> one by one. We will only do this if the tuple from the main table that\n> has been marked as killed has an external storage. We will have to add\n> a bunch of code for this otherwise we can let VACUUM do this for us.\n> Let me know your thoughts on this.\n>\n> > +-- test pg_surgery functions with the unsupported relations. Should fail.\n> >\n> > Please name the specific functions being tested here in case we add\n> > more in the future that are tested separately.\n> >\n>\n> Done.\n>\n> > +++ b/contrib/pg_surgery/heap_surgery_funcs.c\n> >\n> > I think we could drop _funcs from the file name.\n> >\n>\n> Done.\n>\n> > +#ifdef PG_MODULE_MAGIC\n> > +PG_MODULE_MAGIC;\n> > +#endif\n> >\n> > The #ifdef here is not required, and if you look at other contrib\n> > modules you'll see that they don't have it.\n> >\n>\n> Okay, done.\n>\n> > I don't like all the macros at the top of the file much. CHECKARRVALID\n> > is only used in one place, so it seems to me that you might as well\n> > just inline it and lose the macro. 
Likewise for SORT and ARRISEMPTY.\n> >\n>\n> Done.\n>\n> > Once you do that, heap_force_common() can just compute the number of\n> > array elements once, instead of doing it once inside ARRISEMPTY, then\n> > again inside SORT, and then a third time to initialize ntids. You can\n> > just have a local variable in that function that is set once and then\n> > used as needed. Then you are only doing ARRNELEMS once, so you can get\n> > rid of that macro too. The same technique can be used to get rid of\n> > ARRPTR. So then all the macros go away without introducing any code\n> > duplication.\n> >\n>\n> Done.\n>\n> > +/* Options to forcefully change the state of a heap tuple. */\n> > +typedef enum HTupleForceOption\n> > +{\n> > + FORCE_KILL,\n> > + FORCE_FREEZE\n> > +} HTupleForceOption;\n> >\n> > I suggest un-abbreviating HTuple to HeapTuple and un-abbreviating the\n> > enum members to HEAP_FORCE_KILL and HEAP_FORCE_FREE.\n>\n> Done.\n>\n> Also, how about\n> > option -> operation?\n> >\n>\n> I think both look okay to me.\n>\n> > + return heap_force_common(fcinfo, FORCE_KILL);\n> >\n> > I think it might be more idiomatic to use PG_RETURN_DATUM here. I\n> > know it's the same thing, though, and perhaps I'm even wrong about the\n> > prevailing style.\n> >\n>\n> Done.\n>\n> > + Assert(force_opt == FORCE_KILL || force_opt == FORCE_FREEZE);\n> >\n> > I think this is unnecessary. It's an enum with 2 values.\n> >\n>\n> Removed.\n>\n> > + if (ARRISEMPTY(ta))\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > + errmsg(\"empty tid array\")));\n> >\n> > I don't see why this should be an error. Why can't we just continue\n> > normally and if it does nothing, it does nothing? You'd need to change\n> > the do..while loop to a while loop so that the end condition is tested\n> > at the top, but that seems fine.\n> >\n>\n> I think it's okay to have this check. So, just left it as-is. We do\n> have such checks in other contrib modules as well wherever the array\n> is being passed as an input to the function.\n>\n> > + rel = relation_open(relid, AccessShareLock);\n> >\n> > Maybe we should take RowExclusiveLock, since we are going to modify\n> > rows. Not sure how much it matters, though.\n> >\n>\n> I don't know how it would make a difference, but still as you said\n> replaced AccessShare with RowExclusive.\n>\n> > + if (!superuser() && GetUserId() != rel->rd_rel->relowner)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> > + errmsg(\"must be superuser or object owner to use %s.\",\n> > + force_opt == FORCE_KILL ? \"heap_force_kill()\" :\n> > + \"heap_force_freeze()\")));\n> >\n> > This is the wrong way to do a permissions check, and it's also the\n> > wrong way to write an error message about having failed one. To see\n> > the correct method, grep for pg_class_aclcheck. However, I think that\n> > we shouldn't in general trust the object owner to do this, unless the\n> > super-user gave permission. This is a data-corrupting operation, and\n> > only the boss is allowed to authorize it. So I think you should also\n> > add REVOKE EXECUTE FROM PUBLIC statements to the SQL file, and then\n> > have this check as a backup. 
Then, the superuser is always allowed,\n> > and if they choose to GRANT EXECUTE on this function to some users,\n> > those users can do it for their own relations, but not others.\n> >\n>\n> Done.\n>\n> > + if (rel->rd_rel->relam != HEAP_TABLE_AM_OID)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > + errmsg(\"only heap AM is supported\")));\n> > +\n> > + check_relation_relkind(rel);\n> >\n> > Seems like these checks are in the wrong order.\n>\n> I don't think there is anything wrong with the order. I can see the\n> same order in other contrib modules as well.\n>\n> Also, maybe you could\n> > call the function something like check_relation_ok() and put the\n> > permissions test, the relkind test, and the relam test all inside of\n> > it, just to tighten up the code in this main function a bit.\n> >\n>\n> Yeah, I've added a couple of functions named sanity_check_relation and\n> sanity_check_tid_array and shifted all the sanity checks there.\n>\n> > + if (noffs > maxoffset)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > + errmsg(\"number of offsets specified for block %u exceeds the max\n> > offset number %u\",\n> > + blkno, maxoffset)));\n> >\n> > Hmm, this doesn't seem quite right. The actual problem is if an\n> > individual item pointer's offset number is greater than maxoffset,\n> > which can be true even if the total number of offsets is less than\n> > maxoffset. So I think you need to remove this check and add a check\n> > inside the loop which follows that offnos[i] is in range.\n> >\n>\n> Agreed and done.\n>\n> > The way you've structured that loop is actually problematic -- I don't\n> > think we want to be calling elog() or ereport() inside a critical\n> > section. You could fix the case that checks for an invalid force_opt\n> > by just doing if (op == HEAP_FORCE_KILL) { ... } else { Assert(op ==\n> > HEAP_FORCE_FREEZE); ... }, or by using a switch with no default. The\n> > NOTICE case you have here is a bigger problem.\n>\n> Done.\n>\n> You really cannot\n> > modify the buffer like this and then decide, oops, never mind, I think\n> > I won't mark it dirty or write WAL for the changes. If you do that,\n> > the buffer is still in memory, but it's now been modified. A\n> > subsequent operation that modifies it will start with the altered\n> > state you created here, quite possibly leading to WAL that cannot be\n> > correctly replayed on the standby. In other words, you've got to\n> > decide for certain whether you want to proceed with the operation\n> > *before* you enter the critical section. You also need to emit any\n> > messages before or after the critical section. So you could:\n> >\n>\n> This is still not clear. I think Robert needs to respond to my earlier comment.\n>\n> > I believe this violates our guidelines on message construction. Have\n> > two completely separate messages -- and maybe lose the word \"already\":\n> >\n> > \"skipping tid (%u, %u) because it is dead\"\n> > \"skipping tid (%u, %u) because it is unused\"\n> >\n> > The point of this is that it makes it easier for translators.\n> >\n>\n> Done.\n>\n> > I see very little point in what verify_tid() is doing. Before using\n> > each block number, we should check that it's less than or equal to a\n> > cached value of RelationGetNumberOfBlocks(rel). That's necessary in\n> > any case to avoid funny errors; and then the check here against\n> > specifically InvalidBlockNumber is redundant. 
For the offset number,\n> > same thing: we need to check each offset against the page's\n> > PageGetMaxOffsetNumber(page); and if we do that then we don't need\n> > these checks.\n> >\n>\n> Done.\n>\n> Please check the attached patch for the changes.\n\nI also looked at this version patch and have some small comments:\n\n+ Oid relid = PG_GETARG_OID(0);\n+ ArrayType *ta = PG_GETARG_ARRAYTYPE_P_COPY(1);\n+ ItemPointer tids;\n+ int ntids;\n+ Relation rel;\n+ Buffer buf;\n+ Page page;\n+ ItemId itemid;\n+ BlockNumber blkno;\n+ OffsetNumber *offnos;\n+ OffsetNumber offno,\n+ noffs,\n+ curr_start_ptr,\n+ next_start_ptr,\n+ maxoffset;\n+ int i,\n+ nskippedItems,\n+ nblocks;\n\nYou declare all variables at the top of heap_force_common() function\nbut I think we can declare some variables such as buf, page inside of\nthe do loop.\n\n---\n+ if (offnos[i] > maxoffset)\n+ {\n+ ereport(NOTICE,\n+ errmsg(\"skipping tid (%u, %u) because it\ncontains an invalid offset\",\n+ blkno, offnos[i]));\n+ continue;\n+ }\n\nIf all tids on a page take the above path, we will end up logging FPI\nin spite of modifying nothing on the page.\n\n---\n+ /* XLOG stuff */\n+ if (RelationNeedsWAL(rel))\n+ log_newpage_buffer(buf, true);\n\nI think we need to set the returned LSN by log_newpage_buffer() to the page lsn.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 6 Aug 2020 17:11:27 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "I have been doing some user-level testing of this feature, apart from\nsanity test for extension and it's functions\n\nI have tried to corrupt tuples and then able to fix it using\nheap_force_freeze/kill functions.\n\n\n--corrupt relfrozenxid, cause vacuum failed.\n\nupdate pg_class set relfrozenxid = (relfrozenxid::text::integer +\n10)::text::xid where relname = 'test_tbl';\n\nUPDATE 1\n\ninsert into test_tbl values (2, 'BBB');\n\n\npostgres=# vacuum test_tbl;\n\nERROR: found xmin 507 from before relfrozenxid 516\n\nCONTEXT: while scanning block 0 of relation \"public.test_tbl\"\n\n\npostgres=# select *, ctid, xmin, xmax from test_tbl;\n\n a | b | ctid | xmin | xmax\n\n---+-----+-------+------+------\n\n 1 | AAA | (0,1) | 505 | 0\n\n 2 | BBB | (0,2) | 507 | 0\n\n(2 rows)\n\n\n--fixed using heap_force_freeze\n\npostgres=# select heap_force_freeze('test_tbl'::regclass,\nARRAY['(0,2)']::tid[]);\n\n heap_force_freeze\n\n-------------------\n\n\npostgres=# vacuum test_tbl;\n\nVACUUM\n\npostgres=# select *, ctid, xmin, xmax from test_tbl;\n\n a | b | ctid | xmin | xmax\n\n---+-----+-------+------+------\n\n 1 | AAA | (0,1) | 505 | 0\n\n 2 | BBB | (0,2) | 2 | 0\n\n(2 rows)\n\n\n--corrupt table headers in base/oid. 
file, cause table access failed.\n\npostgres=# select ctid, * from test_tbl;\n\nERROR: could not access status of transaction 4294967295\n\nDETAIL: Could not open file \"pg_xact/0FFF\": No such file or directory.\n\n\n--removed corrupted tuple using heap_force_kill\n\npostgres=# select heap_force_kill('test_tbl'::regclass,\nARRAY['(0,2)']::tid[]);\n\n heap_force_kill\n\n-----------------\n\n\n\n(1 row)\n\n\npostgres=# select ctid, * from test_tbl;\n\n ctid | a | b\n\n-------+---+-----\n\n (0,1) | 1 | AAA\n\n(1 row)\n\n\nI will be continuing with my testing with the latest patch and update here\nif found anything.\n\n\nThanks & Regards,\nRajkumar Raghuwanshi\n\n\nOn Thu, Aug 6, 2020 at 1:42 PM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Wed, 5 Aug 2020 at 22:42, Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> >\n> > Hi Robert,\n> >\n> > Thanks for the review. Please find my comments inline:\n> >\n> > On Sat, Aug 1, 2020 at 12:18 AM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> > >\n> > > On Fri, Jul 31, 2020 at 8:52 AM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> > > > Attached is the new version of patch that addresses the comments\n> from Andrey and Beena.\n> > >\n> > > +PGFILEDESC = \"pg_surgery - perform surgery on the damaged heap table\"\n> > >\n> > > the -> a\n> > >\n> > > I also suggest: heap table -> relation, because we might want to\n> > > extend this to other cases later.\n> > >\n> >\n> > Corrected.\n> >\n> > > +-- toast table.\n> > > +begin;\n> > > +create table ttab(a text);\n> > > +insert into ttab select string_agg(chr(floor(random() * 26)::int +\n> > > 65), '') from generate_series(1,10000);\n> > > +select * from ttab where xmin = 2;\n> > > + a\n> > > +---\n> > > +(0 rows)\n> > > +\n> > > +select heap_force_freeze('ttab'::regclass, ARRAY['(0, 1)']::tid[]);\n> > > + heap_force_freeze\n> > > +-------------------\n> > > +\n> > > +(1 row)\n> > > +\n> > >\n> > > I don't understand the point of this. You're not testing the function\n> > > on the TOAST table; you're testing it on the main table when there\n> > > happens to be a TOAST table that is probably getting used for\n> > > something. But that's not really relevant to what is being tested\n> > > here, so as written this seems redundant with the previous cases.\n> > >\n> >\n> > Yeah, it's being tested on the main table, not on a toast table. I've\n> > removed this test-case and also restricted direct access to the toast\n> > table using heap_force_kill/freeze functions. I think we shouldn't be\n> > using these functions to do any changes in the toast table. We will\n> > only use these functions with the main table and let VACUUM remove the\n> > corresponding data chunks (pointed by the tuple that got removed from\n> > the main table).\n> >\n> > Another option would be to identify all the data chunks corresponding\n> > to the tuple (ctid) being killed from the main table and remove them\n> > one by one. We will only do this if the tuple from the main table that\n> > has been marked as killed has an external storage. We will have to add\n> > a bunch of code for this otherwise we can let VACUUM do this for us.\n> > Let me know your thoughts on this.\n> >\n> > > +-- test pg_surgery functions with the unsupported relations. 
Should\n> fail.\n> > >\n> > > Please name the specific functions being tested here in case we add\n> > > more in the future that are tested separately.\n> > >\n> >\n> > Done.\n> >\n> > > +++ b/contrib/pg_surgery/heap_surgery_funcs.c\n> > >\n> > > I think we could drop _funcs from the file name.\n> > >\n> >\n> > Done.\n> >\n> > > +#ifdef PG_MODULE_MAGIC\n> > > +PG_MODULE_MAGIC;\n> > > +#endif\n> > >\n> > > The #ifdef here is not required, and if you look at other contrib\n> > > modules you'll see that they don't have it.\n> > >\n> >\n> > Okay, done.\n> >\n> > > I don't like all the macros at the top of the file much. CHECKARRVALID\n> > > is only used in one place, so it seems to me that you might as well\n> > > just inline it and lose the macro. Likewise for SORT and ARRISEMPTY.\n> > >\n> >\n> > Done.\n> >\n> > > Once you do that, heap_force_common() can just compute the number of\n> > > array elements once, instead of doing it once inside ARRISEMPTY, then\n> > > again inside SORT, and then a third time to initialize ntids. You can\n> > > just have a local variable in that function that is set once and then\n> > > used as needed. Then you are only doing ARRNELEMS once, so you can get\n> > > rid of that macro too. The same technique can be used to get rid of\n> > > ARRPTR. So then all the macros go away without introducing any code\n> > > duplication.\n> > >\n> >\n> > Done.\n> >\n> > > +/* Options to forcefully change the state of a heap tuple. */\n> > > +typedef enum HTupleForceOption\n> > > +{\n> > > + FORCE_KILL,\n> > > + FORCE_FREEZE\n> > > +} HTupleForceOption;\n> > >\n> > > I suggest un-abbreviating HTuple to HeapTuple and un-abbreviating the\n> > > enum members to HEAP_FORCE_KILL and HEAP_FORCE_FREE.\n> >\n> > Done.\n> >\n> > Also, how about\n> > > option -> operation?\n> > >\n> >\n> > I think both look okay to me.\n> >\n> > > + return heap_force_common(fcinfo, FORCE_KILL);\n> > >\n> > > I think it might be more idiomatic to use PG_RETURN_DATUM here. I\n> > > know it's the same thing, though, and perhaps I'm even wrong about the\n> > > prevailing style.\n> > >\n> >\n> > Done.\n> >\n> > > + Assert(force_opt == FORCE_KILL || force_opt == FORCE_FREEZE);\n> > >\n> > > I think this is unnecessary. It's an enum with 2 values.\n> > >\n> >\n> > Removed.\n> >\n> > > + if (ARRISEMPTY(ta))\n> > > + ereport(ERROR,\n> > > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > + errmsg(\"empty tid array\")));\n> > >\n> > > I don't see why this should be an error. Why can't we just continue\n> > > normally and if it does nothing, it does nothing? You'd need to change\n> > > the do..while loop to a while loop so that the end condition is tested\n> > > at the top, but that seems fine.\n> > >\n> >\n> > I think it's okay to have this check. So, just left it as-is. We do\n> > have such checks in other contrib modules as well wherever the array\n> > is being passed as an input to the function.\n> >\n> > > + rel = relation_open(relid, AccessShareLock);\n> > >\n> > > Maybe we should take RowExclusiveLock, since we are going to modify\n> > > rows. Not sure how much it matters, though.\n> > >\n> >\n> > I don't know how it would make a difference, but still as you said\n> > replaced AccessShare with RowExclusive.\n> >\n> > > + if (!superuser() && GetUserId() != rel->rd_rel->relowner)\n> > > + ereport(ERROR,\n> > > + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> > > + errmsg(\"must be superuser or object owner to use %s.\",\n> > > + force_opt == FORCE_KILL ? 
\"heap_force_kill()\" :\n> > > + \"heap_force_freeze()\")));\n> > >\n> > > This is the wrong way to do a permissions check, and it's also the\n> > > wrong way to write an error message about having failed one. To see\n> > > the correct method, grep for pg_class_aclcheck. However, I think that\n> > > we shouldn't in general trust the object owner to do this, unless the\n> > > super-user gave permission. This is a data-corrupting operation, and\n> > > only the boss is allowed to authorize it. So I think you should also\n> > > add REVOKE EXECUTE FROM PUBLIC statements to the SQL file, and then\n> > > have this check as a backup. Then, the superuser is always allowed,\n> > > and if they choose to GRANT EXECUTE on this function to some users,\n> > > those users can do it for their own relations, but not others.\n> > >\n> >\n> > Done.\n> >\n> > > + if (rel->rd_rel->relam != HEAP_TABLE_AM_OID)\n> > > + ereport(ERROR,\n> > > + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > > + errmsg(\"only heap AM is supported\")));\n> > > +\n> > > + check_relation_relkind(rel);\n> > >\n> > > Seems like these checks are in the wrong order.\n> >\n> > I don't think there is anything wrong with the order. I can see the\n> > same order in other contrib modules as well.\n> >\n> > Also, maybe you could\n> > > call the function something like check_relation_ok() and put the\n> > > permissions test, the relkind test, and the relam test all inside of\n> > > it, just to tighten up the code in this main function a bit.\n> > >\n> >\n> > Yeah, I've added a couple of functions named sanity_check_relation and\n> > sanity_check_tid_array and shifted all the sanity checks there.\n> >\n> > > + if (noffs > maxoffset)\n> > > + ereport(ERROR,\n> > > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > + errmsg(\"number of offsets specified for block %u exceeds the max\n> > > offset number %u\",\n> > > + blkno, maxoffset)));\n> > >\n> > > Hmm, this doesn't seem quite right. The actual problem is if an\n> > > individual item pointer's offset number is greater than maxoffset,\n> > > which can be true even if the total number of offsets is less than\n> > > maxoffset. So I think you need to remove this check and add a check\n> > > inside the loop which follows that offnos[i] is in range.\n> > >\n> >\n> > Agreed and done.\n> >\n> > > The way you've structured that loop is actually problematic -- I don't\n> > > think we want to be calling elog() or ereport() inside a critical\n> > > section. You could fix the case that checks for an invalid force_opt\n> > > by just doing if (op == HEAP_FORCE_KILL) { ... } else { Assert(op ==\n> > > HEAP_FORCE_FREEZE); ... }, or by using a switch with no default. The\n> > > NOTICE case you have here is a bigger problem.\n> >\n> > Done.\n> >\n> > You really cannot\n> > > modify the buffer like this and then decide, oops, never mind, I think\n> > > I won't mark it dirty or write WAL for the changes. If you do that,\n> > > the buffer is still in memory, but it's now been modified. A\n> > > subsequent operation that modifies it will start with the altered\n> > > state you created here, quite possibly leading to WAL that cannot be\n> > > correctly replayed on the standby. In other words, you've got to\n> > > decide for certain whether you want to proceed with the operation\n> > > *before* you enter the critical section. You also need to emit any\n> > > messages before or after the critical section. So you could:\n> > >\n> >\n> > This is still not clear. 
I think Robert needs to respond to my earlier\n> comment.\n> >\n> > > I believe this violates our guidelines on message construction. Have\n> > > two completely separate messages -- and maybe lose the word \"already\":\n> > >\n> > > \"skipping tid (%u, %u) because it is dead\"\n> > > \"skipping tid (%u, %u) because it is unused\"\n> > >\n> > > The point of this is that it makes it easier for translators.\n> > >\n> >\n> > Done.\n> >\n> > > I see very little point in what verify_tid() is doing. Before using\n> > > each block number, we should check that it's less than or equal to a\n> > > cached value of RelationGetNumberOfBlocks(rel). That's necessary in\n> > > any case to avoid funny errors; and then the check here against\n> > > specifically InvalidBlockNumber is redundant. For the offset number,\n> > > same thing: we need to check each offset against the page's\n> > > PageGetMaxOffsetNumber(page); and if we do that then we don't need\n> > > these checks.\n> > >\n> >\n> > Done.\n> >\n> > Please check the attached patch for the changes.\n>\n> I also looked at this version patch and have some small comments:\n>\n> + Oid relid = PG_GETARG_OID(0);\n> + ArrayType *ta = PG_GETARG_ARRAYTYPE_P_COPY(1);\n> + ItemPointer tids;\n> + int ntids;\n> + Relation rel;\n> + Buffer buf;\n> + Page page;\n> + ItemId itemid;\n> + BlockNumber blkno;\n> + OffsetNumber *offnos;\n> + OffsetNumber offno,\n> + noffs,\n> + curr_start_ptr,\n> + next_start_ptr,\n> + maxoffset;\n> + int i,\n> + nskippedItems,\n> + nblocks;\n>\n> You declare all variables at the top of heap_force_common() function\n> but I think we can declare some variables such as buf, page inside of\n> the do loop.\n>\n> ---\n> + if (offnos[i] > maxoffset)\n> + {\n> + ereport(NOTICE,\n> + errmsg(\"skipping tid (%u, %u) because it\n> contains an invalid offset\",\n> + blkno, offnos[i]));\n> + continue;\n> + }\n>\n> If all tids on a page take the above path, we will end up logging FPI\n> in spite of modifying nothing on the page.\n>\n> ---\n> + /* XLOG stuff */\n> + if (RelationNeedsWAL(rel))\n> + log_newpage_buffer(buf, true);\n>\n> I think we need to set the returned LSN by log_newpage_buffer() to the\n> page lsn.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\n\nI have been doing some user-level testing of this feature, apart from sanity test for extension and it's functionsI have tried to corrupt tuples and then able to fix it using heap_force_freeze/kill functions.--corrupt relfrozenxid, cause vacuum failed.update pg_class set relfrozenxid = (relfrozenxid::text::integer + 10)::text::xid where relname = 'test_tbl';UPDATE 1insert into test_tbl values (2, 'BBB');postgres=# vacuum test_tbl;ERROR:  found xmin 507 from before relfrozenxid 516CONTEXT:  while scanning block 0 of relation \"public.test_tbl\"postgres=# select *, ctid, xmin, xmax from test_tbl; a |  b  | ctid  | xmin | xmax ---+-----+-------+------+------ 1 | AAA | (0,1) |  505 |    0 2 | BBB | (0,2) |  507 |    0(2 rows)--fixed using heap_force_freezepostgres=# select heap_force_freeze('test_tbl'::regclass, ARRAY['(0,2)']::tid[]); heap_force_freeze -------------------postgres=# vacuum test_tbl;VACUUMpostgres=# select *, ctid, xmin, xmax from test_tbl; a |  b  | ctid  | xmin | xmax ---+-----+-------+------+------ 1 | AAA | (0,1) |  505 |    0 2 | BBB | (0,2) |    2 |    0(2 rows)--corrupt table headers in base/oid. 
Before using\n> > each block number, we should check that it's less than or equal to a\n> > cached value of RelationGetNumberOfBlocks(rel). That's necessary in\n> > any case to avoid funny errors; and then the check here against\n> > specifically InvalidBlockNumber is redundant. For the offset number,\n> > same thing: we need to check each offset against the page's\n> > PageGetMaxOffsetNumber(page); and if we do that then we don't need\n> > these checks.\n> >\n>\n> Done.\n>\n> Please check the attached patch for the changes.\n\nI also looked at this version patch and have some small comments:\n\n+   Oid             relid = PG_GETARG_OID(0);\n+   ArrayType      *ta = PG_GETARG_ARRAYTYPE_P_COPY(1);\n+   ItemPointer     tids;\n+   int             ntids;\n+   Relation        rel;\n+   Buffer          buf;\n+   Page            page;\n+   ItemId          itemid;\n+   BlockNumber     blkno;\n+   OffsetNumber   *offnos;\n+   OffsetNumber    offno,\n+                   noffs,\n+                   curr_start_ptr,\n+                   next_start_ptr,\n+                   maxoffset;\n+   int             i,\n+                   nskippedItems,\n+                   nblocks;\n\nYou declare all variables at the top of heap_force_common() function\nbut I think we can declare some variables such as buf, page inside of\nthe do loop.\n\n---\n+           if (offnos[i] > maxoffset)\n+           {\n+               ereport(NOTICE,\n+                        errmsg(\"skipping tid (%u, %u) because it\ncontains an invalid offset\",\n+                               blkno, offnos[i]));\n+               continue;\n+           }\n\nIf all tids on a page take the above path, we will end up logging FPI\nin spite of modifying nothing on the page.\n\n---\n+       /* XLOG stuff */\n+       if (RelationNeedsWAL(rel))\n+           log_newpage_buffer(buf, true);\n\nI think we need to set the returned LSN by log_newpage_buffer() to the page lsn.\n\nRegards,\n\n--\nMasahiko Sawada            http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 6 Aug 2020 14:25:35 +0530", "msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hello Masahiko-san,\n\nThanks for looking into the patch. 
Please find my comments inline below:\n\nOn Thu, Aug 6, 2020 at 1:42 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 5 Aug 2020 at 22:42, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > Please check the attached patch for the changes.\n>\n> I also looked at this version patch and have some small comments:\n>\n> + Oid relid = PG_GETARG_OID(0);\n> + ArrayType *ta = PG_GETARG_ARRAYTYPE_P_COPY(1);\n> + ItemPointer tids;\n> + int ntids;\n> + Relation rel;\n> + Buffer buf;\n> + Page page;\n> + ItemId itemid;\n> + BlockNumber blkno;\n> + OffsetNumber *offnos;\n> + OffsetNumber offno,\n> + noffs,\n> + curr_start_ptr,\n> + next_start_ptr,\n> + maxoffset;\n> + int i,\n> + nskippedItems,\n> + nblocks;\n>\n> You declare all variables at the top of heap_force_common() function\n> but I think we can declare some variables such as buf, page inside of\n> the do loop.\n>\n\nSure, I will do this in the next version of patch.\n\n> ---\n> + if (offnos[i] > maxoffset)\n> + {\n> + ereport(NOTICE,\n> + errmsg(\"skipping tid (%u, %u) because it\n> contains an invalid offset\",\n> + blkno, offnos[i]));\n> + continue;\n> + }\n>\n> If all tids on a page take the above path, we will end up logging FPI\n> in spite of modifying nothing on the page.\n>\n\nYeah, that's right. I've taken care of this in the new version of\npatch which I am yet to share.\n\n> ---\n> + /* XLOG stuff */\n> + if (RelationNeedsWAL(rel))\n> + log_newpage_buffer(buf, true);\n>\n> I think we need to set the returned LSN by log_newpage_buffer() to the page lsn.\n>\n\nI think we are already setting the page lsn in the log_newpage() which\nis being called from log_newpage_buffer(). So, AFAIU, no change would\nbe required here. Please let me know if I am missing something.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Aug 2020 14:35:42 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Thu, 6 Aug 2020 at 18:05, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hello Masahiko-san,\n>\n> Thanks for looking into the patch. 
Please find my comments inline below:\n>\n> On Thu, Aug 6, 2020 at 1:42 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Wed, 5 Aug 2020 at 22:42, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > Please check the attached patch for the changes.\n> >\n> > I also looked at this version patch and have some small comments:\n> >\n> > + Oid relid = PG_GETARG_OID(0);\n> > + ArrayType *ta = PG_GETARG_ARRAYTYPE_P_COPY(1);\n> > + ItemPointer tids;\n> > + int ntids;\n> > + Relation rel;\n> > + Buffer buf;\n> > + Page page;\n> > + ItemId itemid;\n> > + BlockNumber blkno;\n> > + OffsetNumber *offnos;\n> > + OffsetNumber offno,\n> > + noffs,\n> > + curr_start_ptr,\n> > + next_start_ptr,\n> > + maxoffset;\n> > + int i,\n> > + nskippedItems,\n> > + nblocks;\n> >\n> > You declare all variables at the top of heap_force_common() function\n> > but I think we can declare some variables such as buf, page inside of\n> > the do loop.\n> >\n>\n> Sure, I will do this in the next version of patch.\n>\n> > ---\n> > + if (offnos[i] > maxoffset)\n> > + {\n> > + ereport(NOTICE,\n> > + errmsg(\"skipping tid (%u, %u) because it\n> > contains an invalid offset\",\n> > + blkno, offnos[i]));\n> > + continue;\n> > + }\n> >\n> > If all tids on a page take the above path, we will end up logging FPI\n> > in spite of modifying nothing on the page.\n> >\n>\n> Yeah, that's right. I've taken care of this in the new version of\n> patch which I am yet to share.\n>\n> > ---\n> > + /* XLOG stuff */\n> > + if (RelationNeedsWAL(rel))\n> > + log_newpage_buffer(buf, true);\n> >\n> > I think we need to set the returned LSN by log_newpage_buffer() to the page lsn.\n> >\n>\n> I think we are already setting the page lsn in the log_newpage() which\n> is being called from log_newpage_buffer(). So, AFAIU, no change would\n> be required here. Please let me know if I am missing something.\n\nYou're right. I'd missed the comment of log_newpage_buffer(). Thanks!\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 6 Aug 2020 18:19:20 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Attached v4 patch fixes the latest comments from Robert and Masahiko-san.\n\nChanges:\n1) Let heap_force_kill and freeze functions to be used with toast tables.\n2) Replace \"int nskippedItems\" with \"bool did_modify_page\" flag to\nknow if any modification happened in the page or not.\n3) Declare some of the variables such as buf, page inside of the do\nloop instead of declaring them at the top of heap_force_common\nfunction.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Thu, Aug 6, 2020 at 2:49 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 6 Aug 2020 at 18:05, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Hello Masahiko-san,\n> >\n> > Thanks for looking into the patch. 
Please find my comments inline below:\n> >\n> > On Thu, Aug 6, 2020 at 1:42 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Wed, 5 Aug 2020 at 22:42, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > > Please check the attached patch for the changes.\n> > >\n> > > I also looked at this version patch and have some small comments:\n> > >\n> > > + Oid relid = PG_GETARG_OID(0);\n> > > + ArrayType *ta = PG_GETARG_ARRAYTYPE_P_COPY(1);\n> > > + ItemPointer tids;\n> > > + int ntids;\n> > > + Relation rel;\n> > > + Buffer buf;\n> > > + Page page;\n> > > + ItemId itemid;\n> > > + BlockNumber blkno;\n> > > + OffsetNumber *offnos;\n> > > + OffsetNumber offno,\n> > > + noffs,\n> > > + curr_start_ptr,\n> > > + next_start_ptr,\n> > > + maxoffset;\n> > > + int i,\n> > > + nskippedItems,\n> > > + nblocks;\n> > >\n> > > You declare all variables at the top of heap_force_common() function\n> > > but I think we can declare some variables such as buf, page inside of\n> > > the do loop.\n> > >\n> >\n> > Sure, I will do this in the next version of patch.\n> >\n> > > ---\n> > > + if (offnos[i] > maxoffset)\n> > > + {\n> > > + ereport(NOTICE,\n> > > + errmsg(\"skipping tid (%u, %u) because it\n> > > contains an invalid offset\",\n> > > + blkno, offnos[i]));\n> > > + continue;\n> > > + }\n> > >\n> > > If all tids on a page take the above path, we will end up logging FPI\n> > > in spite of modifying nothing on the page.\n> > >\n> >\n> > Yeah, that's right. I've taken care of this in the new version of\n> > patch which I am yet to share.\n> >\n> > > ---\n> > > + /* XLOG stuff */\n> > > + if (RelationNeedsWAL(rel))\n> > > + log_newpage_buffer(buf, true);\n> > >\n> > > I think we need to set the returned LSN by log_newpage_buffer() to the page lsn.\n> > >\n> >\n> > I think we are already setting the page lsn in the log_newpage() which\n> > is being called from log_newpage_buffer(). So, AFAIU, no change would\n> > be required here. Please let me know if I am missing something.\n>\n> You're right. I'd missed the comment of log_newpage_buffer(). Thanks!\n>\n> Regards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 6 Aug 2020 18:53:38 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Thu, Aug 6, 2020 at 2:11 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> Okay, If you want I can remove the restriction on a toast table, but,\n> then that means a user can directly remove the data chunks from the\n> toast table without changing anything in the main table. This means we\n> won't be able to query the main table as it will fail with an error\n> like \"ERROR: unexpected chunk number ...\". So, we will have to find\n> some way to identify the pointer to the chunks that got deleted from\n> the toast table and remove that pointer from the main table. We also\n> need to make sure that before we remove a tuple (pointer) from the\n> main table, we identify all the remaining data chunks pointed by this\n> tuple and remove them completely only then that table would be\n> considered to be in a good state. Now, I am not sure if we can\n> currently do all these things.\n\nThat's the user's problem. If they don't have a plan for that, they\nshouldn't use this tool. 
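Such a plan could be as simple as looking at the table's TOAST relation
before and after the surgery, along these lines (the table and pg_toast
names below are placeholders; the real TOAST relation comes from
reltoastrelid):

-- which TOAST relation backs the table being operated on?
SELECT reltoastrelid::regclass FROM pg_class
WHERE oid = 'damaged_tbl'::regclass;

-- what is stored in it, grouped by datum
SELECT chunk_id, count(*) AS chunks
FROM pg_toast.pg_toast_16384
GROUP BY chunk_id ORDER BY chunk_id;
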
I don't think we can or should try to handle\nit in this code.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 6 Aug 2020 11:48:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Thu, Aug 6, 2020 at 9:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Aug 6, 2020 at 2:11 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > Okay, If you want I can remove the restriction on a toast table, but,\n> > then that means a user can directly remove the data chunks from the\n> > toast table without changing anything in the main table. This means we\n> > won't be able to query the main table as it will fail with an error\n> > like \"ERROR: unexpected chunk number ...\". So, we will have to find\n> > some way to identify the pointer to the chunks that got deleted from\n> > the toast table and remove that pointer from the main table. We also\n> > need to make sure that before we remove a tuple (pointer) from the\n> > main table, we identify all the remaining data chunks pointed by this\n> > tuple and remove them completely only then that table would be\n> > considered to be in a good state. Now, I am not sure if we can\n> > currently do all these things.\n>\n> That's the user's problem. If they don't have a plan for that, they\n> shouldn't use this tool. I don't think we can or should try to handle\n> it in this code.\n>\n\nOkay, thanks.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Aug 2020 12:01:47 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Thanks Rajkumar for testing the patch.\n\nHere are some of the additional test-cases that I would suggest you to\nexecute, if possible:\n\n1) You may try running the test-cases that you have executed so far\nwith SR setup and see if the changes are getting reflected on the\nstandby.\n\n2) You may also try running some concurrent test-cases for e.g. try\nrunning these functions with VACUUM or some other sql commands\n(preferable DML commands) in parallel.\n\n3) See what happens when you pass some invalid tids (containing\ninvalid block or offset number) to these functions. 
You may also try\nrunning these functions on the same tuple repeatedly and see the\nbehaviour.\n\n...\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Thu, Aug 6, 2020 at 2:25 PM Rajkumar Raghuwanshi\n<rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>\n> I have been doing some user-level testing of this feature, apart from sanity test for extension and it's functions\n>\n> I have tried to corrupt tuples and then able to fix it using heap_force_freeze/kill functions.\n>\n>\n> --corrupt relfrozenxid, cause vacuum failed.\n>\n> update pg_class set relfrozenxid = (relfrozenxid::text::integer + 10)::text::xid where relname = 'test_tbl';\n>\n> UPDATE 1\n>\n> insert into test_tbl values (2, 'BBB');\n>\n>\n> postgres=# vacuum test_tbl;\n>\n> ERROR: found xmin 507 from before relfrozenxid 516\n>\n> CONTEXT: while scanning block 0 of relation \"public.test_tbl\"\n>\n>\n> postgres=# select *, ctid, xmin, xmax from test_tbl;\n>\n> a | b | ctid | xmin | xmax\n>\n> ---+-----+-------+------+------\n>\n> 1 | AAA | (0,1) | 505 | 0\n>\n> 2 | BBB | (0,2) | 507 | 0\n>\n> (2 rows)\n>\n>\n> --fixed using heap_force_freeze\n>\n> postgres=# select heap_force_freeze('test_tbl'::regclass, ARRAY['(0,2)']::tid[]);\n>\n> heap_force_freeze\n>\n> -------------------\n>\n>\n> postgres=# vacuum test_tbl;\n>\n> VACUUM\n>\n> postgres=# select *, ctid, xmin, xmax from test_tbl;\n>\n> a | b | ctid | xmin | xmax\n>\n> ---+-----+-------+------+------\n>\n> 1 | AAA | (0,1) | 505 | 0\n>\n> 2 | BBB | (0,2) | 2 | 0\n>\n> (2 rows)\n>\n>\n> --corrupt table headers in base/oid. file, cause table access failed.\n>\n> postgres=# select ctid, * from test_tbl;\n>\n> ERROR: could not access status of transaction 4294967295\n>\n> DETAIL: Could not open file \"pg_xact/0FFF\": No such file or directory.\n>\n>\n> --removed corrupted tuple using heap_force_kill\n>\n> postgres=# select heap_force_kill('test_tbl'::regclass, ARRAY['(0,2)']::tid[]);\n>\n> heap_force_kill\n>\n> -----------------\n>\n>\n>\n> (1 row)\n>\n>\n> postgres=# select ctid, * from test_tbl;\n>\n> ctid | a | b\n>\n> -------+---+-----\n>\n> (0,1) | 1 | AAA\n>\n> (1 row)\n>\n>\n> I will be continuing with my testing with the latest patch and update here if found anything.\n>\n>\n> Thanks & Regards,\n> Rajkumar Raghuwanshi\n>\n>\n> On Thu, Aug 6, 2020 at 1:42 PM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Wed, 5 Aug 2020 at 22:42, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>> >\n>> > Hi Robert,\n>> >\n>> > Thanks for the review. 
Please find my comments inline:\n>> >\n>> > On Sat, Aug 1, 2020 at 12:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> > >\n>> > > On Fri, Jul 31, 2020 at 8:52 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>> > > > Attached is the new version of patch that addresses the comments from Andrey and Beena.\n>> > >\n>> > > +PGFILEDESC = \"pg_surgery - perform surgery on the damaged heap table\"\n>> > >\n>> > > the -> a\n>> > >\n>> > > I also suggest: heap table -> relation, because we might want to\n>> > > extend this to other cases later.\n>> > >\n>> >\n>> > Corrected.\n>> >\n>> > > +-- toast table.\n>> > > +begin;\n>> > > +create table ttab(a text);\n>> > > +insert into ttab select string_agg(chr(floor(random() * 26)::int +\n>> > > 65), '') from generate_series(1,10000);\n>> > > +select * from ttab where xmin = 2;\n>> > > + a\n>> > > +---\n>> > > +(0 rows)\n>> > > +\n>> > > +select heap_force_freeze('ttab'::regclass, ARRAY['(0, 1)']::tid[]);\n>> > > + heap_force_freeze\n>> > > +-------------------\n>> > > +\n>> > > +(1 row)\n>> > > +\n>> > >\n>> > > I don't understand the point of this. You're not testing the function\n>> > > on the TOAST table; you're testing it on the main table when there\n>> > > happens to be a TOAST table that is probably getting used for\n>> > > something. But that's not really relevant to what is being tested\n>> > > here, so as written this seems redundant with the previous cases.\n>> > >\n>> >\n>> > Yeah, it's being tested on the main table, not on a toast table. I've\n>> > removed this test-case and also restricted direct access to the toast\n>> > table using heap_force_kill/freeze functions. I think we shouldn't be\n>> > using these functions to do any changes in the toast table. We will\n>> > only use these functions with the main table and let VACUUM remove the\n>> > corresponding data chunks (pointed by the tuple that got removed from\n>> > the main table).\n>> >\n>> > Another option would be to identify all the data chunks corresponding\n>> > to the tuple (ctid) being killed from the main table and remove them\n>> > one by one. We will only do this if the tuple from the main table that\n>> > has been marked as killed has an external storage. We will have to add\n>> > a bunch of code for this otherwise we can let VACUUM do this for us.\n>> > Let me know your thoughts on this.\n>> >\n>> > > +-- test pg_surgery functions with the unsupported relations. Should fail.\n>> > >\n>> > > Please name the specific functions being tested here in case we add\n>> > > more in the future that are tested separately.\n>> > >\n>> >\n>> > Done.\n>> >\n>> > > +++ b/contrib/pg_surgery/heap_surgery_funcs.c\n>> > >\n>> > > I think we could drop _funcs from the file name.\n>> > >\n>> >\n>> > Done.\n>> >\n>> > > +#ifdef PG_MODULE_MAGIC\n>> > > +PG_MODULE_MAGIC;\n>> > > +#endif\n>> > >\n>> > > The #ifdef here is not required, and if you look at other contrib\n>> > > modules you'll see that they don't have it.\n>> > >\n>> >\n>> > Okay, done.\n>> >\n>> > > I don't like all the macros at the top of the file much. CHECKARRVALID\n>> > > is only used in one place, so it seems to me that you might as well\n>> > > just inline it and lose the macro. Likewise for SORT and ARRISEMPTY.\n>> > >\n>> >\n>> > Done.\n>> >\n>> > > Once you do that, heap_force_common() can just compute the number of\n>> > > array elements once, instead of doing it once inside ARRISEMPTY, then\n>> > > again inside SORT, and then a third time to initialize ntids. 
You can\n>> > > just have a local variable in that function that is set once and then\n>> > > used as needed. Then you are only doing ARRNELEMS once, so you can get\n>> > > rid of that macro too. The same technique can be used to get rid of\n>> > > ARRPTR. So then all the macros go away without introducing any code\n>> > > duplication.\n>> > >\n>> >\n>> > Done.\n>> >\n>> > > +/* Options to forcefully change the state of a heap tuple. */\n>> > > +typedef enum HTupleForceOption\n>> > > +{\n>> > > + FORCE_KILL,\n>> > > + FORCE_FREEZE\n>> > > +} HTupleForceOption;\n>> > >\n>> > > I suggest un-abbreviating HTuple to HeapTuple and un-abbreviating the\n>> > > enum members to HEAP_FORCE_KILL and HEAP_FORCE_FREE.\n>> >\n>> > Done.\n>> >\n>> > Also, how about\n>> > > option -> operation?\n>> > >\n>> >\n>> > I think both look okay to me.\n>> >\n>> > > + return heap_force_common(fcinfo, FORCE_KILL);\n>> > >\n>> > > I think it might be more idiomatic to use PG_RETURN_DATUM here. I\n>> > > know it's the same thing, though, and perhaps I'm even wrong about the\n>> > > prevailing style.\n>> > >\n>> >\n>> > Done.\n>> >\n>> > > + Assert(force_opt == FORCE_KILL || force_opt == FORCE_FREEZE);\n>> > >\n>> > > I think this is unnecessary. It's an enum with 2 values.\n>> > >\n>> >\n>> > Removed.\n>> >\n>> > > + if (ARRISEMPTY(ta))\n>> > > + ereport(ERROR,\n>> > > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>> > > + errmsg(\"empty tid array\")));\n>> > >\n>> > > I don't see why this should be an error. Why can't we just continue\n>> > > normally and if it does nothing, it does nothing? You'd need to change\n>> > > the do..while loop to a while loop so that the end condition is tested\n>> > > at the top, but that seems fine.\n>> > >\n>> >\n>> > I think it's okay to have this check. So, just left it as-is. We do\n>> > have such checks in other contrib modules as well wherever the array\n>> > is being passed as an input to the function.\n>> >\n>> > > + rel = relation_open(relid, AccessShareLock);\n>> > >\n>> > > Maybe we should take RowExclusiveLock, since we are going to modify\n>> > > rows. Not sure how much it matters, though.\n>> > >\n>> >\n>> > I don't know how it would make a difference, but still as you said\n>> > replaced AccessShare with RowExclusive.\n>> >\n>> > > + if (!superuser() && GetUserId() != rel->rd_rel->relowner)\n>> > > + ereport(ERROR,\n>> > > + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n>> > > + errmsg(\"must be superuser or object owner to use %s.\",\n>> > > + force_opt == FORCE_KILL ? \"heap_force_kill()\" :\n>> > > + \"heap_force_freeze()\")));\n>> > >\n>> > > This is the wrong way to do a permissions check, and it's also the\n>> > > wrong way to write an error message about having failed one. To see\n>> > > the correct method, grep for pg_class_aclcheck. However, I think that\n>> > > we shouldn't in general trust the object owner to do this, unless the\n>> > > super-user gave permission. This is a data-corrupting operation, and\n>> > > only the boss is allowed to authorize it. So I think you should also\n>> > > add REVOKE EXECUTE FROM PUBLIC statements to the SQL file, and then\n>> > > have this check as a backup. 
Then, the superuser is always allowed,\n>> > > and if they choose to GRANT EXECUTE on this function to some users,\n>> > > those users can do it for their own relations, but not others.\n>> > >\n>> >\n>> > Done.\n>> >\n>> > > + if (rel->rd_rel->relam != HEAP_TABLE_AM_OID)\n>> > > + ereport(ERROR,\n>> > > + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>> > > + errmsg(\"only heap AM is supported\")));\n>> > > +\n>> > > + check_relation_relkind(rel);\n>> > >\n>> > > Seems like these checks are in the wrong order.\n>> >\n>> > I don't think there is anything wrong with the order. I can see the\n>> > same order in other contrib modules as well.\n>> >\n>> > Also, maybe you could\n>> > > call the function something like check_relation_ok() and put the\n>> > > permissions test, the relkind test, and the relam test all inside of\n>> > > it, just to tighten up the code in this main function a bit.\n>> > >\n>> >\n>> > Yeah, I've added a couple of functions named sanity_check_relation and\n>> > sanity_check_tid_array and shifted all the sanity checks there.\n>> >\n>> > > + if (noffs > maxoffset)\n>> > > + ereport(ERROR,\n>> > > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>> > > + errmsg(\"number of offsets specified for block %u exceeds the max\n>> > > offset number %u\",\n>> > > + blkno, maxoffset)));\n>> > >\n>> > > Hmm, this doesn't seem quite right. The actual problem is if an\n>> > > individual item pointer's offset number is greater than maxoffset,\n>> > > which can be true even if the total number of offsets is less than\n>> > > maxoffset. So I think you need to remove this check and add a check\n>> > > inside the loop which follows that offnos[i] is in range.\n>> > >\n>> >\n>> > Agreed and done.\n>> >\n>> > > The way you've structured that loop is actually problematic -- I don't\n>> > > think we want to be calling elog() or ereport() inside a critical\n>> > > section. You could fix the case that checks for an invalid force_opt\n>> > > by just doing if (op == HEAP_FORCE_KILL) { ... } else { Assert(op ==\n>> > > HEAP_FORCE_FREEZE); ... }, or by using a switch with no default. The\n>> > > NOTICE case you have here is a bigger problem.\n>> >\n>> > Done.\n>> >\n>> > You really cannot\n>> > > modify the buffer like this and then decide, oops, never mind, I think\n>> > > I won't mark it dirty or write WAL for the changes. If you do that,\n>> > > the buffer is still in memory, but it's now been modified. A\n>> > > subsequent operation that modifies it will start with the altered\n>> > > state you created here, quite possibly leading to WAL that cannot be\n>> > > correctly replayed on the standby. In other words, you've got to\n>> > > decide for certain whether you want to proceed with the operation\n>> > > *before* you enter the critical section. You also need to emit any\n>> > > messages before or after the critical section. So you could:\n>> > >\n>> >\n>> > This is still not clear. I think Robert needs to respond to my earlier comment.\n>> >\n>> > > I believe this violates our guidelines on message construction. Have\n>> > > two completely separate messages -- and maybe lose the word \"already\":\n>> > >\n>> > > \"skipping tid (%u, %u) because it is dead\"\n>> > > \"skipping tid (%u, %u) because it is unused\"\n>> > >\n>> > > The point of this is that it makes it easier for translators.\n>> > >\n>> >\n>> > Done.\n>> >\n>> > > I see very little point in what verify_tid() is doing. 
Before using\n>> > > each block number, we should check that it's less than or equal to a\n>> > > cached value of RelationGetNumberOfBlocks(rel). That's necessary in\n>> > > any case to avoid funny errors; and then the check here against\n>> > > specifically InvalidBlockNumber is redundant. For the offset number,\n>> > > same thing: we need to check each offset against the page's\n>> > > PageGetMaxOffsetNumber(page); and if we do that then we don't need\n>> > > these checks.\n>> > >\n>> >\n>> > Done.\n>> >\n>> > Please check the attached patch for the changes.\n>>\n>> I also looked at this version patch and have some small comments:\n>>\n>> + Oid relid = PG_GETARG_OID(0);\n>> + ArrayType *ta = PG_GETARG_ARRAYTYPE_P_COPY(1);\n>> + ItemPointer tids;\n>> + int ntids;\n>> + Relation rel;\n>> + Buffer buf;\n>> + Page page;\n>> + ItemId itemid;\n>> + BlockNumber blkno;\n>> + OffsetNumber *offnos;\n>> + OffsetNumber offno,\n>> + noffs,\n>> + curr_start_ptr,\n>> + next_start_ptr,\n>> + maxoffset;\n>> + int i,\n>> + nskippedItems,\n>> + nblocks;\n>>\n>> You declare all variables at the top of heap_force_common() function\n>> but I think we can declare some variables such as buf, page inside of\n>> the do loop.\n>>\n>> ---\n>> + if (offnos[i] > maxoffset)\n>> + {\n>> + ereport(NOTICE,\n>> + errmsg(\"skipping tid (%u, %u) because it\n>> contains an invalid offset\",\n>> + blkno, offnos[i]));\n>> + continue;\n>> + }\n>>\n>> If all tids on a page take the above path, we will end up logging FPI\n>> in spite of modifying nothing on the page.\n>>\n>> ---\n>> + /* XLOG stuff */\n>> + if (RelationNeedsWAL(rel))\n>> + log_newpage_buffer(buf, true);\n>>\n>> I think we need to set the returned LSN by log_newpage_buffer() to the page lsn.\n>>\n>> Regards,\n>>\n>> --\n>> Masahiko Sawada http://www.2ndQuadrant.com/\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>\n>>\n\n\n", "msg_date": "Fri, 7 Aug 2020 12:45:17 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Thu, Aug 6, 2020 at 9:23 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> Attached v4 patch fixes the latest comments from Robert and Masahiko-san.\n\nCompiler warning:\n\nheap_surgery.c:136:13: error: comparison of unsigned expression < 0 is\nalways false [-Werror,-Wtautological-compare]\n if (blkno < 0 || blkno >= nblocks)\n ~~~~~ ^ ~\n\nThere's a certain inconsistency to these messages:\n\nrhaas=# create table foo (a int);\nCREATE TABLE\nrhaas=# insert into foo values (1);\nINSERT 0 1\nrhaas=# select heap_force_kill('foo'::regclass, array['(0,2)'::tid]);\nNOTICE: skipping tid (0, 2) because it contains an invalid offset\n heap_force_kill\n-----------------\n\n(1 row)\n\nrhaas=# select heap_force_kill('foo'::regclass, array['(1,0)'::tid]);\nERROR: invalid item pointer\nLOCATION: tids_same_page_fetch_offnums, heap_surgery.c:347\nrhaas=# select heap_force_kill('foo'::regclass, array['(1,1)'::tid]);\nERROR: block number 1 is out of range for relation \"foo\"\n\n From a user perspective it seems like I've made three very similar\nmistakes: in the first case, the offset is too high, in the second\ncase it's too low, and in the third case the block number is out of\nrange. But in one case I get a NOTICE and in the other two cases I get\nan ERROR. In one case I get the relation name and in the other two\ncases I don't. 
The two complaints about an invalid offset are phrased\ncompletely differently from each other. For example, suppose you do\nthis:\n\nERROR: tid (%u, %u) is invalid for relation \"%s\" because the block\nnumber is out of range (%u..%u)\nERROR: tid (%u, %u) is invalid for relation \"%s\" because the item\nnumber is out of range for this block (%u..%u)\nERROR: tid (%u, %u) is invalid for relation \"%s\" because the item is unused\nERROR: tid (%u, %u) is invalid for relation \"%s\" because the item is dead\n\nI think I misled you when I said to use pg_class_aclcheck. I think it\nshould actually be pg_class_ownercheck.\n\nI think the relkind sanity check should permit RELKIND_MATVIEW also.\n\nIt's unclear to me why the freeze logic here shouldn't do this part\nwhat heap_prepare_freeze_tuple() does when freezing xmax:\n\n frz->t_infomask2 &= ~HEAP_HOT_UPDATED;\n frz->t_infomask2 &= ~HEAP_KEYS_UPDATED;\n\nLikewise, why should we not freeze or invalidate xvac in the case\nwhere tuple->t_infomask & HEAP_MOVED, as heap_prepare_freeze_tuple()\nwould do?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Aug 2020 11:50:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Fri, Aug 7, 2020 at 9:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Aug 6, 2020 at 9:23 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> There's a certain inconsistency to these messages:\n>\n> rhaas=# create table foo (a int);\n> CREATE TABLE\n> rhaas=# insert into foo values (1);\n> INSERT 0 1\n> rhaas=# select heap_force_kill('foo'::regclass, array['(0,2)'::tid]);\n> NOTICE: skipping tid (0, 2) because it contains an invalid offset\n> heap_force_kill\n> -----------------\n>\n> (1 row)\n>\n> rhaas=# select heap_force_kill('foo'::regclass, array['(1,0)'::tid]);\n> ERROR: invalid item pointer\n> LOCATION: tids_same_page_fetch_offnums, heap_surgery.c:347\n> rhaas=# select heap_force_kill('foo'::regclass, array['(1,1)'::tid]);\n> ERROR: block number 1 is out of range for relation \"foo\"\n>\n> From a user perspective it seems like I've made three very similar\n> mistakes: in the first case, the offset is too high, in the second\n> case it's too low, and in the third case the block number is out of\n> range. But in one case I get a NOTICE and in the other two cases I get\n> an ERROR. In one case I get the relation name and in the other two\n> cases I don't. The two complaints about an invalid offset are phrased\n> completely differently from each other. For example, suppose you do\n> this:\n>\n> ERROR: tid (%u, %u) is invalid for relation \"%s\" because the block\n> number is out of range (%u..%u)\n> ERROR: tid (%u, %u) is invalid for relation \"%s\" because the item\n> number is out of range for this block (%u..%u)\n> ERROR: tid (%u, %u) is invalid for relation \"%s\" because the item is unused\n> ERROR: tid (%u, %u) is invalid for relation \"%s\" because the item is dead\n>\n\nThank you for your suggestions. To make this consistent, I am planning\nto do the following changes:\n\nRemove the error message to report \"invalid item pointer\" from\ntids_same_page_fetch_offnums() and expand the if-check to detect any\ninvalid offset number in the CRITICAL section such that it not just\nchecks if the offset number is > maxoffset, but also checks if the\noffset number is equal to 0. 
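(So, with the quoted \"foo\" example above, a call like\n\nselect heap_force_kill('foo'::regclass,\n       array['(0,0)'::tid, '(0,2)'::tid, '(1,1)'::tid]);\n\nwould just emit one message per bad tid and move on, instead of erroring\nout part way through - that's the intent, anyway; this is only an\nillustration, not actual output from the patch.)\n\n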
That way it would also do the job that\n\"if (!ItemPointerIsValid)\" was doing for us.\n\nFurther, if any invalid block number is detected, then I am planning\nto skip all the tids associated with this block and move to the next\nblock. Hence, instead of reporting the error I would report the NOTICE\nmessage to the user.\n\nThe other two messages for reporting unused items and dead items\nremain the same. Hence, with above change, we would be reporting the\nfollowing 4 messages:\n\nNOTICE: skipping all the tids in block %u for relation \"%s\" because\nthe block number is out of range\n\nNOTICE: skipping tid (%u, %u) for relation \"%s\" because the item\nnumber is out of range for this block\n\nNOTICE: skipping tid (%u, %u) for relation \"%s\" because it is marked dead\n\nNOTICE: skipping tid (%u, %u) for relation \"%s\" because it is marked unused\n\nPlease let me know if you are okay with the above changes or not?\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 Aug 2020 13:08:48 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Aug 11, 2020 at 3:39 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> The other two messages for reporting unused items and dead items\n> remain the same. Hence, with above change, we would be reporting the\n> following 4 messages:\n>\n> NOTICE: skipping all the tids in block %u for relation \"%s\" because\n> the block number is out of range\n>\n> NOTICE: skipping tid (%u, %u) for relation \"%s\" because the item\n> number is out of range for this block\n>\n> NOTICE: skipping tid (%u, %u) for relation \"%s\" because it is marked dead\n>\n> NOTICE: skipping tid (%u, %u) for relation \"%s\" because it is marked unused\n>\n> Please let me know if you are okay with the above changes or not?\n\nThat seems broadly reasonable, but I would suggest phrasing the first\nmessage like this:\n\nskipping block %u for relation \"%s\" because the block number is out of range\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 11 Aug 2020 10:03:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Aug 11, 2020 at 7:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Aug 11, 2020 at 3:39 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > The other two messages for reporting unused items and dead items\n> > remain the same. Hence, with above change, we would be reporting the\n> > following 4 messages:\n> >\n> > NOTICE: skipping all the tids in block %u for relation \"%s\" because\n> > the block number is out of range\n> >\n> > NOTICE: skipping tid (%u, %u) for relation \"%s\" because the item\n> > number is out of range for this block\n> >\n> > NOTICE: skipping tid (%u, %u) for relation \"%s\" because it is marked dead\n> >\n> > NOTICE: skipping tid (%u, %u) for relation \"%s\" because it is marked unused\n> >\n> > Please let me know if you are okay with the above changes or not?\n>\n> That seems broadly reasonable, but I would suggest phrasing the first\n> message like this:\n>\n> skipping block %u for relation \"%s\" because the block number is out of range\n>\n\nOkay, thanks for the confirmation. 
I'll do that.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 Aug 2020 20:17:30 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Thanks Robert for the review. Please find my comments inline below:\n\nOn Fri, Aug 7, 2020 at 9:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Aug 6, 2020 at 9:23 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > Attached v4 patch fixes the latest comments from Robert and Masahiko-san.\n>\n> Compiler warning:\n>\n> heap_surgery.c:136:13: error: comparison of unsigned expression < 0 is\n> always false [-Werror,-Wtautological-compare]\n> if (blkno < 0 || blkno >= nblocks)\n> ~~~~~ ^ ~\n>\n\nFixed.\n\n> There's a certain inconsistency to these messages:\n>\n> rhaas=# create table foo (a int);\n> CREATE TABLE\n> rhaas=# insert into foo values (1);\n> INSERT 0 1\n> rhaas=# select heap_force_kill('foo'::regclass, array['(0,2)'::tid]);\n> NOTICE: skipping tid (0, 2) because it contains an invalid offset\n> heap_force_kill\n> -----------------\n>\n> (1 row)\n>\n> rhaas=# select heap_force_kill('foo'::regclass, array['(1,0)'::tid]);\n> ERROR: invalid item pointer\n> LOCATION: tids_same_page_fetch_offnums, heap_surgery.c:347\n> rhaas=# select heap_force_kill('foo'::regclass, array['(1,1)'::tid]);\n> ERROR: block number 1 is out of range for relation \"foo\"\n>\n> From a user perspective it seems like I've made three very similar\n> mistakes: in the first case, the offset is too high, in the second\n> case it's too low, and in the third case the block number is out of\n> range. But in one case I get a NOTICE and in the other two cases I get\n> an ERROR. In one case I get the relation name and in the other two\n> cases I don't. The two complaints about an invalid offset are phrased\n> completely differently from each other. For example, suppose you do\n> this:\n>\n> ERROR: tid (%u, %u) is invalid for relation \"%s\" because the block\n> number is out of range (%u..%u)\n> ERROR: tid (%u, %u) is invalid for relation \"%s\" because the item\n> number is out of range for this block (%u..%u)\n> ERROR: tid (%u, %u) is invalid for relation \"%s\" because the item is unused\n> ERROR: tid (%u, %u) is invalid for relation \"%s\" because the item is dead\n>\n\nCorrected.\n\n> I think I misled you when I said to use pg_class_aclcheck. 
I think it\n> should actually be pg_class_ownercheck.\n>\n\nokay, I've changed it to pg_class_ownercheck.\n\n> I think the relkind sanity check should permit RELKIND_MATVIEW also.\n>\n\nYeah, actually we should allow MATVIEW, don't know why I thought of\nblocking it earlier.\n\n> It's unclear to me why the freeze logic here shouldn't do this part\n> what heap_prepare_freeze_tuple() does when freezing xmax:\n>\n> frz->t_infomask2 &= ~HEAP_HOT_UPDATED;\n> frz->t_infomask2 &= ~HEAP_KEYS_UPDATED;\n>\n\nYeah, we should have these changes when freezing the xmax.\n\n> Likewise, why should we not freeze or invalidate xvac in the case\n> where tuple->t_infomask & HEAP_MOVED, as heap_prepare_freeze_tuple()\n> would do?\n>\n\nAgain, we should have this as well.\n\nApart from above, this time I've also added the documentation on\npg_surgery module and added a few more test-cases.\n\nAttached patch with above changes.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com", "msg_date": "Wed, 12 Aug 2020 18:56:52 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi Ashutosh\r\n\r\nI stumbled upon this thread today, went through your patch and it looks good. A minor suggestion in sanity_check_relation():\r\n\r\n\tif (rel->rd_rel->relam != HEAP_TABLE_AM_OID)\r\n\t\tereport(ERROR,\r\n\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\r\n\t\t\t\t errmsg(\"only heap AM is supported\")));\r\n\r\nInstead of checking the access method OID, it seems better to check the handler OID like so:\r\n\r\n\tif (rel->rd_amhandler != HEAP_TABLE_AM_HANDLER_OID)\r\n\r\nThe reason is current version of sanity_check_relation() would emit error for the following case even when the table structure is actually heap.\r\n\r\n\tcreate access method myam type table handler heap_tableam_handler;\r\n\tcreate table mytable (…) using myam;\r\n\r\nAsim\r\n", "msg_date": "Thu, 13 Aug 2020 07:06:14 +0000", "msg_from": "Asim Praveen <pasim@vmware.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi Asim,\n\nThanks for having a look into the patch and for sharing your feedback.\nPlease find my comments inline below:\n\nOn Thu, Aug 13, 2020 at 12:36 PM Asim Praveen <pasim@vmware.com> wrote:\n>\n> Hi Ashutosh\n>\n> I stumbled upon this thread today, went through your patch and it looks good. A minor suggestion in sanity_check_relation():\n>\n> if (rel->rd_rel->relam != HEAP_TABLE_AM_OID)\n> ereport(ERROR,\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> errmsg(\"only heap AM is supported\")));\n>\n> Instead of checking the access method OID, it seems better to check the handler OID like so:\n>\n> if (rel->rd_amhandler != HEAP_TABLE_AM_HANDLER_OID)\n>\n> The reason is current version of sanity_check_relation() would emit error for the following case even when the table structure is actually heap.\n>\n> create access method myam type table handler heap_tableam_handler;\n> create table mytable (…) using myam;\n>\n\nThis looks like a very good suggestion to me. I will do this change in\nthe next version. 
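To make the scenario\nconcrete (a rough sketch only, extending your example - the final\nstatement is the one that currently fails even though the storage is\nplain heap):\n\ncreate access method myam type table handler heap_tableam_handler;\ncreate table mytable (a int) using myam;\ninsert into mytable values (1);\nselect heap_force_kill('mytable'::regclass, array['(0,1)'::tid]);\n-- with the relam check this fails with: ERROR:  only heap AM is supported\n\n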
Just wondering if we should be doing similar changes\nin other contrib modules (like pgrowlocks, pageinspect and\npgstattuple) as well?\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Aug 2020 13:22:38 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Thu, Aug 13, 2020 at 3:52 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> This looks like a very good suggestion to me. I will do this change in\n> the next version. Just wondering if we should be doing similar changes\n> in other contrib modules (like pgrowlocks, pageinspect and\n> pgstattuple) as well?\n\nIt seems like it should be consistent, but I'm not sure the proposed\nchange is really an improvement.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 13 Aug 2020 15:03:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, 12 Aug 2020 at 22:27, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Thanks Robert for the review. Please find my comments inline below:\n>\n> On Fri, Aug 7, 2020 at 9:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, Aug 6, 2020 at 9:23 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > Attached v4 patch fixes the latest comments from Robert and Masahiko-san.\n> >\n> > Compiler warning:\n> >\n> > heap_surgery.c:136:13: error: comparison of unsigned expression < 0 is\n> > always false [-Werror,-Wtautological-compare]\n> > if (blkno < 0 || blkno >= nblocks)\n> > ~~~~~ ^ ~\n> >\n>\n> Fixed.\n>\n> > There's a certain inconsistency to these messages:\n> >\n> > rhaas=# create table foo (a int);\n> > CREATE TABLE\n> > rhaas=# insert into foo values (1);\n> > INSERT 0 1\n> > rhaas=# select heap_force_kill('foo'::regclass, array['(0,2)'::tid]);\n> > NOTICE: skipping tid (0, 2) because it contains an invalid offset\n> > heap_force_kill\n> > -----------------\n> >\n> > (1 row)\n> >\n> > rhaas=# select heap_force_kill('foo'::regclass, array['(1,0)'::tid]);\n> > ERROR: invalid item pointer\n> > LOCATION: tids_same_page_fetch_offnums, heap_surgery.c:347\n> > rhaas=# select heap_force_kill('foo'::regclass, array['(1,1)'::tid]);\n> > ERROR: block number 1 is out of range for relation \"foo\"\n> >\n> > From a user perspective it seems like I've made three very similar\n> > mistakes: in the first case, the offset is too high, in the second\n> > case it's too low, and in the third case the block number is out of\n> > range. But in one case I get a NOTICE and in the other two cases I get\n> > an ERROR. In one case I get the relation name and in the other two\n> > cases I don't. The two complaints about an invalid offset are phrased\n> > completely differently from each other. 
For example, suppose you do\n> > this:\n> >\n> > ERROR: tid (%u, %u) is invalid for relation \"%s\" because the block\n> > number is out of range (%u..%u)\n> > ERROR: tid (%u, %u) is invalid for relation \"%s\" because the item\n> > number is out of range for this block (%u..%u)\n> > ERROR: tid (%u, %u) is invalid for relation \"%s\" because the item is unused\n> > ERROR: tid (%u, %u) is invalid for relation \"%s\" because the item is dead\n> >\n>\n> Corrected.\n>\n> > I think I misled you when I said to use pg_class_aclcheck. I think it\n> > should actually be pg_class_ownercheck.\n> >\n>\n> okay, I've changed it to pg_class_ownercheck.\n>\n> > I think the relkind sanity check should permit RELKIND_MATVIEW also.\n> >\n>\n> Yeah, actually we should allow MATVIEW, don't know why I thought of\n> blocking it earlier.\n>\n> > It's unclear to me why the freeze logic here shouldn't do this part\n> > what heap_prepare_freeze_tuple() does when freezing xmax:\n> >\n> > frz->t_infomask2 &= ~HEAP_HOT_UPDATED;\n> > frz->t_infomask2 &= ~HEAP_KEYS_UPDATED;\n> >\n>\n> Yeah, we should have these changes when freezing the xmax.\n>\n> > Likewise, why should we not freeze or invalidate xvac in the case\n> > where tuple->t_infomask & HEAP_MOVED, as heap_prepare_freeze_tuple()\n> > would do?\n> >\n>\n> Again, we should have this as well.\n>\n> Apart from above, this time I've also added the documentation on\n> pg_surgery module and added a few more test-cases.\n>\n> Attached patch with above changes.\n>\n\nThank you for updating the patch! Here are my comments on v5 patch:\n\n--- a/contrib/Makefile\n+++ b/contrib/Makefile\n@@ -35,6 +35,7 @@ SUBDIRS = \\\n pg_standby \\\n pg_stat_statements \\\n pg_trgm \\\n+ pg_surgery \\\n pgcrypto \\\n\nI guess we use alphabetical order here. So pg_surgery should be placed\nbefore pg_trgm.\n\n---\n+ if (heap_force_opt == HEAP_FORCE_KILL)\n+ ItemIdSetDead(itemid);\n\nI think that if the page is an all-visible page, we should clear an\nall-visible bit on the visibility map corresponding to the page and\nPD_ALL_VISIBLE on the page header. Otherwise, index only scan would\nreturn the wrong results.\n\n---\n+ /*\n+ * We do not mark the buffer dirty or do WAL logging for unmodifed\n+ * pages.\n+ */\n+ if (!did_modify_page)\n+ goto skip_wal;\n+\n+ /* Mark buffer dirty before we write WAL. */\n+ MarkBufferDirty(buf);\n+\n+ /* XLOG stuff */\n+ if (RelationNeedsWAL(rel))\n+ log_newpage_buffer(buf, true);\n+\n+skip_wal:\n+ END_CRIT_SECTION();\n+\n\ns/unmodifed/unmodified/\n\nDo we really need to use goto? I think we can modify it like follows:\n\n if (did_modity_page)\n {\n /* Mark buffer dirty before we write WAL. */\n MarkBufferDirty(buf);\n\n /* XLOG stuff */\n if (RelationNeedsWAL(rel))\n log_newpage_buffer(buf, true);\n }\n\n END_CRIT_SECTION();\n\n---\npg_force_freeze() can revival a tuple that is already deleted but not\nvacuumed yet. Therefore, the user might need to reindex indexes after\nusing that function. 
For instance, with the following script, the last\ntwo queries: index scan and seq scan, will return different results.\n\nset enable_seqscan to off;\nset enable_bitmapscan to off;\nset enable_indexonlyscan to off;\ncreate table tbl (a int primary key);\ninsert into tbl values (1);\n\nupdate tbl set a = a + 100 where a = 1;\n\nexplain analyze select * from tbl where a < 200;\n\n-- revive deleted tuple on heap\nselect heap_force_freeze('tbl', array['(0,1)'::tid]);\n\n-- index scan returns 2 tuples\nexplain analyze select * from tbl where a < 200;\n\n-- seq scan returns 1 tuple\nset enable_seqscan to on;\nexplain analyze select * from tbl;\n\nAlso, if a tuple updated and moved to another partition is revived by\nheap_force_freeze(), its ctid still has special values:\nMovedPartitionsOffsetNumber and MovedPartitionsBlockNumber. I don't\nsee a problem yet caused by a visible tuple having the special ctid\nvalue, but it might be worth considering either to reset ctid value as\nwell or to not freezing already-deleted tuple.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 14 Aug 2020 13:36:55 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hello Masahiko-san,\n\nThanks for the review. Please check the comments inline below:\n\nOn Fri, Aug 14, 2020 at 10:07 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n\n> Thank you for updating the patch! Here are my comments on v5 patch:\n>\n> --- a/contrib/Makefile\n> +++ b/contrib/Makefile\n> @@ -35,6 +35,7 @@ SUBDIRS = \\\n> pg_standby \\\n> pg_stat_statements \\\n> pg_trgm \\\n> + pg_surgery \\\n> pgcrypto \\\n>\n> I guess we use alphabetical order here. So pg_surgery should be placed\n> before pg_trgm.\n>\n\nOkay, will take care of this in the next version of patch.\n\n> ---\n> + if (heap_force_opt == HEAP_FORCE_KILL)\n> + ItemIdSetDead(itemid);\n>\n> I think that if the page is an all-visible page, we should clear an\n> all-visible bit on the visibility map corresponding to the page and\n> PD_ALL_VISIBLE on the page header. Otherwise, index only scan would\n> return the wrong results.\n>\n\nI think we should let VACUUM do that. Please note that this module is\nintended to be used only on a damaged relation and should only be\noperated on damaged tuples of such relations. And the execution of any\nof the functions provided by this module on a damaged relation must be\nfollowed by VACUUM with DISABLE_PAGE_SKIPPING option on that relation.\nThis is necessary to bring back a damaged relation to the sane state\nonce a surgery is performed on it. I will try to add this note in the\ndocumentation for this module.\n\n> ---\n> + /*\n> + * We do not mark the buffer dirty or do WAL logging for unmodifed\n> + * pages.\n> + */\n> + if (!did_modify_page)\n> + goto skip_wal;\n> +\n> + /* Mark buffer dirty before we write WAL. */\n> + MarkBufferDirty(buf);\n> +\n> + /* XLOG stuff */\n> + if (RelationNeedsWAL(rel))\n> + log_newpage_buffer(buf, true);\n> +\n> +skip_wal:\n> + END_CRIT_SECTION();\n> +\n>\n> s/unmodifed/unmodified/\n>\n\nokay, will fix this typo.\n\n> Do we really need to use goto? I think we can modify it like follows:\n>\n> if (did_modity_page)\n> {\n> /* Mark buffer dirty before we write WAL. 
*/\n> MarkBufferDirty(buf);\n>\n> /* XLOG stuff */\n> if (RelationNeedsWAL(rel))\n> log_newpage_buffer(buf, true);\n> }\n>\n> END_CRIT_SECTION();\n>\n\nNo, we don't need it. We can achieve the same by checking the status\nof did_modify_page flag as you suggested. I will do this change in the\nnext version.\n\n> ---\n> pg_force_freeze() can revival a tuple that is already deleted but not\n> vacuumed yet. Therefore, the user might need to reindex indexes after\n> using that function. For instance, with the following script, the last\n> two queries: index scan and seq scan, will return different results.\n>\n> set enable_seqscan to off;\n> set enable_bitmapscan to off;\n> set enable_indexonlyscan to off;\n> create table tbl (a int primary key);\n> insert into tbl values (1);\n>\n> update tbl set a = a + 100 where a = 1;\n>\n> explain analyze select * from tbl where a < 200;\n>\n> -- revive deleted tuple on heap\n> select heap_force_freeze('tbl', array['(0,1)'::tid]);\n>\n> -- index scan returns 2 tuples\n> explain analyze select * from tbl where a < 200;\n>\n> -- seq scan returns 1 tuple\n> set enable_seqscan to on;\n> explain analyze select * from tbl;\n>\n\nI am not sure if this is the right use-case of pg_force_freeze\nfunction. I think we should only be running pg_force_freeze function\non a tuple for which VACUUM reports \"found xmin ABC from before\nrelfrozenxid PQR\" sort of error otherwise it might worsen the things\ninstead of making it better. Now, the question is - can VACUUM report\nthis type of error for a deleted tuple or it would only report it for\na live tuple? AFAIU this won't be reported for the deleted tuples\nbecause VACUUM wouldn't consider freezing a tuple that has been\ndeleted.\n\n> Also, if a tuple updated and moved to another partition is revived by\n> heap_force_freeze(), its ctid still has special values:\n> MovedPartitionsOffsetNumber and MovedPartitionsBlockNumber. I don't\n> see a problem yet caused by a visible tuple having the special ctid\n> value, but it might be worth considering either to reset ctid value as\n> well or to not freezing already-deleted tuple.\n>\n\nFor this as well, the answer remains the same as above.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Aug 2020 11:35:06 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hello Masahiko-san,\n\nI've spent some more time trying to understand the code in\nlazy_scan_heap function to know under what all circumstances a VACUUM\ncan fail with \"found xmin ... before relfrozenxid ...\" error for a\ntuple whose xmin is behind relfrozenxid. Here are my observations:\n\n1) It can fail with this error for a live tuple\n\nOR,\n\n2) It can also fail with this error if a tuple (that went through\nupdate) is marked as HEAP_HOT_UPDATED or HEAP_ONLY_TUPLE.\n\nOR,\n\n3) If there are any concurrent transactions, then the tuple might be\nmarked as HEAPTUPLE_INSERT_IN_PROGRESS or HEAPTUPLE_DELETE_IN_PROGRESS\nor HEAPTUPLE_RECENTLY_DEAD in which case also VACUUM can fail with\nthis error.\n\nNow, AFAIU, as we will be dealing with a damaged table, the chances of\npoint #3 being the cause of this error looks impossible in our case\nbecause I don't think we will be doing anything in parallel when\nperforming surgery on a damaged table, in fact we shouldn't be doing\nany such things. 
However, it is quite possible that reason #2 could\ncause VACUUM to fail with this sort of error, but, as we are already\nskipping redirected item pointers in heap_force_common(), I think, we\nwould never be marking HEAP_HOT_UPDATED tuple as frozen and I don't\nsee any problem in marking HEAP_ONLY_TUPLE as frozen. So, probably, we\nmay not need to handle point #2 as well.\n\nFurther, I also don't see VACUUM reporting this error for a tuple that\nhas been moved from one partition to another. So, I think we might not\nneed to do any special handling for a tuple that got updated and its\nnew version was moved to another partition.\n\nIf you feel I am missing something here, please correct me. Thank you.\n\nMoreover, while I was exploring on above, I noticed that in\nlazy_scan_heap(), before we call HeapTupleSatisfiesVacuum() we check\nfor a redirected item pointers and if any redirected item pointer is\ndetected we do not call HeapTupleSatisfiesVacuum(). So, not sure how\nHeapTupleSatisfiesVacuum would ever return a dead tuple that is marked\nwith HEAP_HOT_UPDATED. I am referring to the following code in\nlazy_scan_heap().\n\n for (offnum = FirstOffsetNumber;\n offnum <= maxoff;\n offnum = OffsetNumberNext(offnum))\n {\n ItemId itemid;\n\n itemid = PageGetItemId(page, offnum);\n\n.............\n.............\n\n\n /* Redirect items mustn't be touched */ <-- this check\nwould bypass the redirected item pointers from being checked for\nHeapTupleSatisfiesVacuum.\n if (ItemIdIsRedirected(itemid))\n {\n hastup = true; /* this page won't be truncatable */\n continue;\n }\n\n..............\n..............\n\n switch (HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buf))\n {\n case HEAPTUPLE_DEAD:\n\n if (HeapTupleIsHotUpdated(&tuple) ||\n HeapTupleIsHeapOnly(&tuple) ||\n params->index_cleanup == VACOPT_TERNARY_DISABLED)\n nkeep += 1;\n else\n tupgone = true; /* we can delete the tuple */\n..............\n..............\n }\n\n\nSo, the point is, would HeapTupleIsHotUpdated(&tuple) ever be true?\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Mon, Aug 17, 2020 at 11:35 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hello Masahiko-san,\n>\n> Thanks for the review. Please check the comments inline below:\n>\n> On Fri, Aug 14, 2020 at 10:07 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n>\n> > Thank you for updating the patch! Here are my comments on v5 patch:\n> >\n> > --- a/contrib/Makefile\n> > +++ b/contrib/Makefile\n> > @@ -35,6 +35,7 @@ SUBDIRS = \\\n> > pg_standby \\\n> > pg_stat_statements \\\n> > pg_trgm \\\n> > + pg_surgery \\\n> > pgcrypto \\\n> >\n> > I guess we use alphabetical order here. So pg_surgery should be placed\n> > before pg_trgm.\n> >\n>\n> Okay, will take care of this in the next version of patch.\n>\n> > ---\n> > + if (heap_force_opt == HEAP_FORCE_KILL)\n> > + ItemIdSetDead(itemid);\n> >\n> > I think that if the page is an all-visible page, we should clear an\n> > all-visible bit on the visibility map corresponding to the page and\n> > PD_ALL_VISIBLE on the page header. Otherwise, index only scan would\n> > return the wrong results.\n> >\n>\n> I think we should let VACUUM do that. Please note that this module is\n> intended to be used only on a damaged relation and should only be\n> operated on damaged tuples of such relations. 
And the execution of any\n> of the functions provided by this module on a damaged relation must be\n> followed by VACUUM with DISABLE_PAGE_SKIPPING option on that relation.\n> This is necessary to bring back a damaged relation to the sane state\n> once a surgery is performed on it. I will try to add this note in the\n> documentation for this module.\n>\n> > ---\n> > + /*\n> > + * We do not mark the buffer dirty or do WAL logging for unmodifed\n> > + * pages.\n> > + */\n> > + if (!did_modify_page)\n> > + goto skip_wal;\n> > +\n> > + /* Mark buffer dirty before we write WAL. */\n> > + MarkBufferDirty(buf);\n> > +\n> > + /* XLOG stuff */\n> > + if (RelationNeedsWAL(rel))\n> > + log_newpage_buffer(buf, true);\n> > +\n> > +skip_wal:\n> > + END_CRIT_SECTION();\n> > +\n> >\n> > s/unmodifed/unmodified/\n> >\n>\n> okay, will fix this typo.\n>\n> > Do we really need to use goto? I think we can modify it like follows:\n> >\n> > if (did_modity_page)\n> > {\n> > /* Mark buffer dirty before we write WAL. */\n> > MarkBufferDirty(buf);\n> >\n> > /* XLOG stuff */\n> > if (RelationNeedsWAL(rel))\n> > log_newpage_buffer(buf, true);\n> > }\n> >\n> > END_CRIT_SECTION();\n> >\n>\n> No, we don't need it. We can achieve the same by checking the status\n> of did_modify_page flag as you suggested. I will do this change in the\n> next version.\n>\n> > ---\n> > pg_force_freeze() can revival a tuple that is already deleted but not\n> > vacuumed yet. Therefore, the user might need to reindex indexes after\n> > using that function. For instance, with the following script, the last\n> > two queries: index scan and seq scan, will return different results.\n> >\n> > set enable_seqscan to off;\n> > set enable_bitmapscan to off;\n> > set enable_indexonlyscan to off;\n> > create table tbl (a int primary key);\n> > insert into tbl values (1);\n> >\n> > update tbl set a = a + 100 where a = 1;\n> >\n> > explain analyze select * from tbl where a < 200;\n> >\n> > -- revive deleted tuple on heap\n> > select heap_force_freeze('tbl', array['(0,1)'::tid]);\n> >\n> > -- index scan returns 2 tuples\n> > explain analyze select * from tbl where a < 200;\n> >\n> > -- seq scan returns 1 tuple\n> > set enable_seqscan to on;\n> > explain analyze select * from tbl;\n> >\n>\n> I am not sure if this is the right use-case of pg_force_freeze\n> function. I think we should only be running pg_force_freeze function\n> on a tuple for which VACUUM reports \"found xmin ABC from before\n> relfrozenxid PQR\" sort of error otherwise it might worsen the things\n> instead of making it better. Now, the question is - can VACUUM report\n> this type of error for a deleted tuple or it would only report it for\n> a live tuple? AFAIU this won't be reported for the deleted tuples\n> because VACUUM wouldn't consider freezing a tuple that has been\n> deleted.\n>\n> > Also, if a tuple updated and moved to another partition is revived by\n> > heap_force_freeze(), its ctid still has special values:\n> > MovedPartitionsOffsetNumber and MovedPartitionsBlockNumber. 
I don't\n> > see a problem yet caused by a visible tuple having the special ctid\n> > value, but it might be worth considering either to reset ctid value as\n> > well or to not freezing already-deleted tuple.\n> >\n>\n> For this as well, the answer remains the same as above.\n>\n> --\n> With Regards,\n> Ashutosh Sharma\n> EnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Aug 2020 13:46:48 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Attached is the new version of patch that addresses the comments from\nAsim Praveen and Masahiko-san. It also improves the documentation to\nsome extent.\n\n\nOn Tue, Aug 18, 2020 at 1:46 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hello Masahiko-san,\n>\n> I've spent some more time trying to understand the code in\n> lazy_scan_heap function to know under what all circumstances a VACUUM\n> can fail with \"found xmin ... before relfrozenxid ...\" error for a\n> tuple whose xmin is behind relfrozenxid. Here are my observations:\n>\n> 1) It can fail with this error for a live tuple\n>\n> OR,\n>\n> 2) It can also fail with this error if a tuple (that went through\n> update) is marked as HEAP_HOT_UPDATED or HEAP_ONLY_TUPLE.\n>\n> OR,\n>\n> 3) If there are any concurrent transactions, then the tuple might be\n> marked as HEAPTUPLE_INSERT_IN_PROGRESS or HEAPTUPLE_DELETE_IN_PROGRESS\n> or HEAPTUPLE_RECENTLY_DEAD in which case also VACUUM can fail with\n> this error.\n>\n> Now, AFAIU, as we will be dealing with a damaged table, the chances of\n> point #3 being the cause of this error looks impossible in our case\n> because I don't think we will be doing anything in parallel when\n> performing surgery on a damaged table, in fact we shouldn't be doing\n> any such things. However, it is quite possible that reason #2 could\n> cause VACUUM to fail with this sort of error, but, as we are already\n> skipping redirected item pointers in heap_force_common(), I think, we\n> would never be marking HEAP_HOT_UPDATED tuple as frozen and I don't\n> see any problem in marking HEAP_ONLY_TUPLE as frozen. So, probably, we\n> may not need to handle point #2 as well.\n>\n> Further, I also don't see VACUUM reporting this error for a tuple that\n> has been moved from one partition to another. So, I think we might not\n> need to do any special handling for a tuple that got updated and its\n> new version was moved to another partition.\n>\n> If you feel I am missing something here, please correct me. Thank you.\n>\n> Moreover, while I was exploring on above, I noticed that in\n> lazy_scan_heap(), before we call HeapTupleSatisfiesVacuum() we check\n> for a redirected item pointers and if any redirected item pointer is\n> detected we do not call HeapTupleSatisfiesVacuum(). So, not sure how\n> HeapTupleSatisfiesVacuum would ever return a dead tuple that is marked\n> with HEAP_HOT_UPDATED. 
I am referring to the following code in\n> lazy_scan_heap().\n>\n> for (offnum = FirstOffsetNumber;\n> offnum <= maxoff;\n> offnum = OffsetNumberNext(offnum))\n> {\n> ItemId itemid;\n>\n> itemid = PageGetItemId(page, offnum);\n>\n> .............\n> .............\n>\n>\n> /* Redirect items mustn't be touched */ <-- this check\n> would bypass the redirected item pointers from being checked for\n> HeapTupleSatisfiesVacuum.\n> if (ItemIdIsRedirected(itemid))\n> {\n> hastup = true; /* this page won't be truncatable */\n> continue;\n> }\n>\n> ..............\n> ..............\n>\n> switch (HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buf))\n> {\n> case HEAPTUPLE_DEAD:\n>\n> if (HeapTupleIsHotUpdated(&tuple) ||\n> HeapTupleIsHeapOnly(&tuple) ||\n> params->index_cleanup == VACOPT_TERNARY_DISABLED)\n> nkeep += 1;\n> else\n> tupgone = true; /* we can delete the tuple */\n> ..............\n> ..............\n> }\n>\n>\n> So, the point is, would HeapTupleIsHotUpdated(&tuple) ever be true?\n>\n> --\n> With Regards,\n> Ashutosh Sharma\n> EnterpriseDB:http://www.enterprisedb.com\n>\n> On Mon, Aug 17, 2020 at 11:35 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Hello Masahiko-san,\n> >\n> > Thanks for the review. Please check the comments inline below:\n> >\n> > On Fri, Aug 14, 2020 at 10:07 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > > Thank you for updating the patch! Here are my comments on v5 patch:\n> > >\n> > > --- a/contrib/Makefile\n> > > +++ b/contrib/Makefile\n> > > @@ -35,6 +35,7 @@ SUBDIRS = \\\n> > > pg_standby \\\n> > > pg_stat_statements \\\n> > > pg_trgm \\\n> > > + pg_surgery \\\n> > > pgcrypto \\\n> > >\n> > > I guess we use alphabetical order here. So pg_surgery should be placed\n> > > before pg_trgm.\n> > >\n> >\n> > Okay, will take care of this in the next version of patch.\n> >\n> > > ---\n> > > + if (heap_force_opt == HEAP_FORCE_KILL)\n> > > + ItemIdSetDead(itemid);\n> > >\n> > > I think that if the page is an all-visible page, we should clear an\n> > > all-visible bit on the visibility map corresponding to the page and\n> > > PD_ALL_VISIBLE on the page header. Otherwise, index only scan would\n> > > return the wrong results.\n> > >\n> >\n> > I think we should let VACUUM do that. Please note that this module is\n> > intended to be used only on a damaged relation and should only be\n> > operated on damaged tuples of such relations. And the execution of any\n> > of the functions provided by this module on a damaged relation must be\n> > followed by VACUUM with DISABLE_PAGE_SKIPPING option on that relation.\n> > This is necessary to bring back a damaged relation to the sane state\n> > once a surgery is performed on it. I will try to add this note in the\n> > documentation for this module.\n> >\n> > > ---\n> > > + /*\n> > > + * We do not mark the buffer dirty or do WAL logging for unmodifed\n> > > + * pages.\n> > > + */\n> > > + if (!did_modify_page)\n> > > + goto skip_wal;\n> > > +\n> > > + /* Mark buffer dirty before we write WAL. */\n> > > + MarkBufferDirty(buf);\n> > > +\n> > > + /* XLOG stuff */\n> > > + if (RelationNeedsWAL(rel))\n> > > + log_newpage_buffer(buf, true);\n> > > +\n> > > +skip_wal:\n> > > + END_CRIT_SECTION();\n> > > +\n> > >\n> > > s/unmodifed/unmodified/\n> > >\n> >\n> > okay, will fix this typo.\n> >\n> > > Do we really need to use goto? I think we can modify it like follows:\n> > >\n> > > if (did_modity_page)\n> > > {\n> > > /* Mark buffer dirty before we write WAL. 
*/\n> > > MarkBufferDirty(buf);\n> > >\n> > > /* XLOG stuff */\n> > > if (RelationNeedsWAL(rel))\n> > > log_newpage_buffer(buf, true);\n> > > }\n> > >\n> > > END_CRIT_SECTION();\n> > >\n> >\n> > No, we don't need it. We can achieve the same by checking the status\n> > of did_modify_page flag as you suggested. I will do this change in the\n> > next version.\n> >\n> > > ---\n> > > pg_force_freeze() can revival a tuple that is already deleted but not\n> > > vacuumed yet. Therefore, the user might need to reindex indexes after\n> > > using that function. For instance, with the following script, the last\n> > > two queries: index scan and seq scan, will return different results.\n> > >\n> > > set enable_seqscan to off;\n> > > set enable_bitmapscan to off;\n> > > set enable_indexonlyscan to off;\n> > > create table tbl (a int primary key);\n> > > insert into tbl values (1);\n> > >\n> > > update tbl set a = a + 100 where a = 1;\n> > >\n> > > explain analyze select * from tbl where a < 200;\n> > >\n> > > -- revive deleted tuple on heap\n> > > select heap_force_freeze('tbl', array['(0,1)'::tid]);\n> > >\n> > > -- index scan returns 2 tuples\n> > > explain analyze select * from tbl where a < 200;\n> > >\n> > > -- seq scan returns 1 tuple\n> > > set enable_seqscan to on;\n> > > explain analyze select * from tbl;\n> > >\n> >\n> > I am not sure if this is the right use-case of pg_force_freeze\n> > function. I think we should only be running pg_force_freeze function\n> > on a tuple for which VACUUM reports \"found xmin ABC from before\n> > relfrozenxid PQR\" sort of error otherwise it might worsen the things\n> > instead of making it better. Now, the question is - can VACUUM report\n> > this type of error for a deleted tuple or it would only report it for\n> > a live tuple? AFAIU this won't be reported for the deleted tuples\n> > because VACUUM wouldn't consider freezing a tuple that has been\n> > deleted.\n> >\n> > > Also, if a tuple updated and moved to another partition is revived by\n> > > heap_force_freeze(), its ctid still has special values:\n> > > MovedPartitionsOffsetNumber and MovedPartitionsBlockNumber. I don't\n> > > see a problem yet caused by a visible tuple having the special ctid\n> > > value, but it might be worth considering either to reset ctid value as\n> > > well or to not freezing already-deleted tuple.\n> > >\n> >\n> > For this as well, the answer remains the same as above.\n> >\n> > --\n> > With Regards,\n> > Ashutosh Sharma\n> > EnterpriseDB:http://www.enterprisedb.com", "msg_date": "Tue, 18 Aug 2020 16:51:38 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Jul 14, 2020 at 12:28 AM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Mon, Jul 13, 2020 at 2:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > 1. There's nothing to identify the tuple that has the problem, and no\n> > way to know how many more of them there might be. Back-patching\n> > b61d161c146328ae6ba9ed937862d66e5c8b035a would help with the first\n> > part of this.\n>\n> I am in favor of backpatching such changes in cases where senior\n> community members feel that it could help with hypothetical\n> undiscovered data corruption issues -- if they're willing to take\n> responsibility for the change. It certainly wouldn't be the first\n> time. A \"defense in depth\" mindset seems like the right one when it\n> comes to data corruption bugs. 
Early detection is really important.\n>\n> > Moreover, not everyone is as\n> > interested in an extended debugging exercise as they are in getting\n> > the system working again, and VACUUM failing repeatedly is a pretty\n> > serious problem.\n>\n> That's absolutely consistent with my experience. Most users want to\n> get back to business as usual now, while letting somebody else do the\n> hard work of debugging.\n>\n\nAlso even if you do trace the problem you still have to recover.\n\nAnd sometimes I have found latent corruption from times when dbs were\nrunning on older versions and older servers, making debugging largely a\nfutile exercise.\n\n>\n> > Therefore, one of my colleagues has - at my request - created a couple\n> > of functions called heap_force_kill() and heap_force_freeze() which\n> > take an array of TIDs.\n>\n> > So I have these questions:\n> >\n> > - Do people think it would me smart/good/useful to include something\n> > like this in PostgreSQL?\n>\n> I'm in favor of it.\n>\n\n+1\n\nWould be worth extending it with some functions to grab rows that have\nvarious TOAST oids too.\n\n>\n> > - If so, how? I would propose a new contrib module that we back-patch\n> > all the way, because the VACUUM errors were back-patched all the way,\n> > and there seems to be no advantage in making people wait 5 years for a\n> > new version that has some kind of tooling in this area.\n>\n> I'm in favor of it being *possible* to backpatch tooling that is\n> clearly related to correctness in a fundamental way. Obviously this\n> would mean that we'd be revising our general position on backpatching\n> to allow some limited exceptions around corruption. I'm not sure that\n> this meets that standard, though. It's hardly something that we can\n> expect all that many users to be able to use effectively.\n>\n> I may be biased, but I'd be inclined to permit it in the case of\n> something like amcheck, or pg_visibility, on the grounds that they're\n> more or less the same as the new VACUUM errcontext instrumentation you\n> mentioned. The same cannot be said of something like this new\n> heap_force_kill() stuff.\n>\n> > - Any ideas for additional things we should include, or improvements\n> > on the sketch above?\n>\n> Clearly you should work out a way of making it very hard to\n> accidentally (mis)use. For example, maybe you make the functions check\n> for the presence of a sentinel file in the data directory.\n>\n\nAgreed.\n\n>\n>\n> --\n> Peter Geoghegan\n>\n>\n>\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin\n\nOn Tue, Jul 14, 2020 at 12:28 AM Peter Geoghegan <pg@bowt.ie> wrote:On Mon, Jul 13, 2020 at 2:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> 1. There's nothing to identify the tuple that has the problem, and no\n> way to know how many more of them there might be. Back-patching\n> b61d161c146328ae6ba9ed937862d66e5c8b035a would help with the first\n> part of this.\n\nI am in favor of backpatching such changes in cases where senior\ncommunity members feel that it could help with hypothetical\nundiscovered data corruption issues -- if they're willing to take\nresponsibility for the change. It certainly wouldn't be the first\ntime. A \"defense in depth\" mindset seems like the right one when it\ncomes to data corruption bugs. 
Early detection is really important.\n\n> Moreover, not everyone is as\n> interested in an extended debugging exercise as they are in getting\n> the system working again, and VACUUM failing repeatedly is a pretty\n> serious problem.\n\nThat's absolutely consistent with my experience. Most users want to\nget back to business as usual now, while letting somebody else do the\nhard work of debugging.Also even if you do trace the problem you still have to recover.And sometimes I have found latent corruption from times when dbs were running on older versions and older servers, making debugging largely a futile exercise.\n\n> Therefore, one of my colleagues has - at my request - created a couple\n> of functions called heap_force_kill() and heap_force_freeze() which\n> take an array of TIDs.\n\n> So I have these questions:\n>\n> - Do people think it would me smart/good/useful to include something\n> like this in PostgreSQL?\n\nI'm in favor of it.+1Would be worth extending it with some functions to grab rows that have various TOAST oids too.\n\n> - If so, how? I would propose a new contrib module that we back-patch\n> all the way, because the VACUUM errors were back-patched all the way,\n> and there seems to be no advantage in making people wait 5 years for a\n> new version that has some kind of tooling in this area.\n\nI'm in favor of it being *possible* to backpatch tooling that is\nclearly related to correctness in a fundamental way. Obviously this\nwould mean that we'd be revising our general position on backpatching\nto allow some limited exceptions around corruption. I'm not sure that\nthis meets that standard, though. It's hardly something that we can\nexpect all that many users to be able to use effectively.\n\nI may be biased, but I'd be inclined to permit it in the case of\nsomething like amcheck, or pg_visibility, on the grounds that they're\nmore or less the same as the new VACUUM errcontext instrumentation you\nmentioned. The same cannot be said of something like this new\nheap_force_kill() stuff.\n\n> - Any ideas for additional things we should include, or improvements\n> on the sketch above?\n\nClearly you should work out a way of making it very hard to\naccidentally (mis)use. For example, maybe you make the functions check\nfor the presence of a sentinel file in the data directory.Agreed. \n\n\n--\nPeter Geoghegan\n\n\n-- Best Regards,Chris TraversHead of DatabaseTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com Saarbrücker Straße 37a, 10405 Berlin", "msg_date": "Tue, 18 Aug 2020 13:29:05 +0200", "msg_from": "Chris Travers <chris.travers@adjust.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Thanks for suggestion Ashutosh, I have done testing around these suggestion\nand found no issues. I will continue testing same with updated patch posted\non this thread.\n\nOn Fri, Aug 7, 2020 at 12:45 PM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> Thanks Rajkumar for testing the patch.\n>\n> Here are some of the additional test-cases that I would suggest you to\n> execute, if possible:\n>\n> 1) You may try running the test-cases that you have executed so far\n> with SR setup and see if the changes are getting reflected on the\n> standby.\n>\n> 2) You may also try running some concurrent test-cases for e.g. 
try\n> running these functions with VACUUM or some other sql commands\n> (preferable DML commands) in parallel.\n>\n> 3) See what happens when you pass some invalid tids (containing\n> invalid block or offset number) to these functions. You may also try\n> running these functions on the same tuple repeatedly and see the\n> behaviour.\n>\n> ...\n>\n> --\n> With Regards,\n> Ashutosh Sharma\n> EnterpriseDB:http://www.enterprisedb.com\n\n\nThanks & Regards,\nRajkumar Raghuwanshi\n\nThanks for suggestion Ashutosh, I have done testing around these suggestionand found no issues. I will continue testing same with updated patch postedon this thread.On Fri, Aug 7, 2020 at 12:45 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:Thanks Rajkumar for testing the patch.\n\nHere are some of the additional test-cases that I would suggest you to\nexecute, if possible:\n\n1) You may try running the test-cases that you have executed so far\nwith SR setup and see if the changes are getting reflected on the\nstandby.\n\n2) You may also try running some concurrent test-cases for e.g. try\nrunning these functions with VACUUM or some other sql commands\n(preferable DML commands) in parallel.\n\n3) See what happens when you pass some invalid tids (containing\ninvalid block or offset number) to these functions. You may also try\nrunning these functions on the same tuple repeatedly and see the\nbehaviour.\n\n...\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com Thanks & Regards,Rajkumar Raghuwanshi", "msg_date": "Tue, 18 Aug 2020 17:48:23 +0530", "msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On 2020-Aug-17, Ashutosh Sharma wrote:\n\n> > + if (heap_force_opt == HEAP_FORCE_KILL)\n> > + ItemIdSetDead(itemid);\n> >\n> > I think that if the page is an all-visible page, we should clear an\n> > all-visible bit on the visibility map corresponding to the page and\n> > PD_ALL_VISIBLE on the page header. Otherwise, index only scan would\n> > return the wrong results.\n> \n> I think we should let VACUUM do that. Please note that this module is\n> intended to be used only on a damaged relation and should only be\n> operated on damaged tuples of such relations. And the execution of any\n> of the functions provided by this module on a damaged relation must be\n> followed by VACUUM with DISABLE_PAGE_SKIPPING option on that relation.\n> This is necessary to bring back a damaged relation to the sane state\n> once a surgery is performed on it. I will try to add this note in the\n> documentation for this module.\n\nIt makes sense to recommend VACUUM after fixing the page, but I agree\nwith Sawada-san that it would be sensible to reset the VM bit while\ndoing surgery, since that's the state that the page would be in. We\nshould certainly *strongly recommend* to do VACUUM DISABLE_PAGE_SKIPPING,\nbut if users fail to do so, then leaving the VM bit set just means that\nwe know *for certain* that there will be further corruption as soon as\nthe XID counter advances sufficiently.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 18 Aug 2020 12:14:53 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... 
from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, 17 Aug 2020 at 15:05, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hello Masahiko-san,\n>\n> Thanks for the review. Please check the comments inline below:\n>\n> On Fri, Aug 14, 2020 at 10:07 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n>\n> > Thank you for updating the patch! Here are my comments on v5 patch:\n> >\n> > --- a/contrib/Makefile\n> > +++ b/contrib/Makefile\n> > @@ -35,6 +35,7 @@ SUBDIRS = \\\n> > pg_standby \\\n> > pg_stat_statements \\\n> > pg_trgm \\\n> > + pg_surgery \\\n> > pgcrypto \\\n> >\n> > I guess we use alphabetical order here. So pg_surgery should be placed\n> > before pg_trgm.\n> >\n>\n> Okay, will take care of this in the next version of patch.\n>\n> > ---\n> > + if (heap_force_opt == HEAP_FORCE_KILL)\n> > + ItemIdSetDead(itemid);\n> >\n> > I think that if the page is an all-visible page, we should clear an\n> > all-visible bit on the visibility map corresponding to the page and\n> > PD_ALL_VISIBLE on the page header. Otherwise, index only scan would\n> > return the wrong results.\n> >\n>\n> I think we should let VACUUM do that. Please note that this module is\n> intended to be used only on a damaged relation and should only be\n> operated on damaged tuples of such relations. And the execution of any\n> of the functions provided by this module on a damaged relation must be\n> followed by VACUUM with DISABLE_PAGE_SKIPPING option on that relation.\n> This is necessary to bring back a damaged relation to the sane state\n> once a surgery is performed on it. I will try to add this note in the\n> documentation for this module.\n>\n> > ---\n> > + /*\n> > + * We do not mark the buffer dirty or do WAL logging for unmodifed\n> > + * pages.\n> > + */\n> > + if (!did_modify_page)\n> > + goto skip_wal;\n> > +\n> > + /* Mark buffer dirty before we write WAL. */\n> > + MarkBufferDirty(buf);\n> > +\n> > + /* XLOG stuff */\n> > + if (RelationNeedsWAL(rel))\n> > + log_newpage_buffer(buf, true);\n> > +\n> > +skip_wal:\n> > + END_CRIT_SECTION();\n> > +\n> >\n> > s/unmodifed/unmodified/\n> >\n>\n> okay, will fix this typo.\n>\n> > Do we really need to use goto? I think we can modify it like follows:\n> >\n> > if (did_modity_page)\n> > {\n> > /* Mark buffer dirty before we write WAL. */\n> > MarkBufferDirty(buf);\n> >\n> > /* XLOG stuff */\n> > if (RelationNeedsWAL(rel))\n> > log_newpage_buffer(buf, true);\n> > }\n> >\n> > END_CRIT_SECTION();\n> >\n>\n> No, we don't need it. We can achieve the same by checking the status\n> of did_modify_page flag as you suggested. I will do this change in the\n> next version.\n>\n> > ---\n> > pg_force_freeze() can revival a tuple that is already deleted but not\n> > vacuumed yet. Therefore, the user might need to reindex indexes after\n> > using that function. 
For instance, with the following script, the last\n> > two queries: index scan and seq scan, will return different results.\n> >\n> > set enable_seqscan to off;\n> > set enable_bitmapscan to off;\n> > set enable_indexonlyscan to off;\n> > create table tbl (a int primary key);\n> > insert into tbl values (1);\n> >\n> > update tbl set a = a + 100 where a = 1;\n> >\n> > explain analyze select * from tbl where a < 200;\n> >\n> > -- revive deleted tuple on heap\n> > select heap_force_freeze('tbl', array['(0,1)'::tid]);\n> >\n> > -- index scan returns 2 tuples\n> > explain analyze select * from tbl where a < 200;\n> >\n> > -- seq scan returns 1 tuple\n> > set enable_seqscan to on;\n> > explain analyze select * from tbl;\n> >\n>\n> I am not sure if this is the right use-case of pg_force_freeze\n> function. I think we should only be running pg_force_freeze function\n> on a tuple for which VACUUM reports \"found xmin ABC from before\n> relfrozenxid PQR\" sort of error otherwise it might worsen the things\n> instead of making it better.\n\nShould this also be documented? I think that it's hard to force the\nuser to always use this module in the right situation but we need to\nshow at least when to use.\n\n> > Also, if a tuple updated and moved to another partition is revived by\n> > heap_force_freeze(), its ctid still has special values:\n> > MovedPartitionsOffsetNumber and MovedPartitionsBlockNumber. I don't\n> > see a problem yet caused by a visible tuple having the special ctid\n> > value, but it might be worth considering either to reset ctid value as\n> > well or to not freezing already-deleted tuple.\n> >\n>\n> For this as well, the answer remains the same as above.\n\nPerhaps the same is true when a tuple header is corrupted including\nxmin and ctid for some reason and the user wants to fix it? I'm\nconcerned that a live tuple having the wrong ctid will cause SEGV or\nPANIC error in the future.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 19 Aug 2020 12:56:45 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Aug 18, 2020 at 9:44 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Aug-17, Ashutosh Sharma wrote:\n>\n> > > + if (heap_force_opt == HEAP_FORCE_KILL)\n> > > + ItemIdSetDead(itemid);\n> > >\n> > > I think that if the page is an all-visible page, we should clear an\n> > > all-visible bit on the visibility map corresponding to the page and\n> > > PD_ALL_VISIBLE on the page header. Otherwise, index only scan would\n> > > return the wrong results.\n> >\n> > I think we should let VACUUM do that. Please note that this module is\n> > intended to be used only on a damaged relation and should only be\n> > operated on damaged tuples of such relations. And the execution of any\n> > of the functions provided by this module on a damaged relation must be\n> > followed by VACUUM with DISABLE_PAGE_SKIPPING option on that relation.\n> > This is necessary to bring back a damaged relation to the sane state\n> > once a surgery is performed on it. 
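\n\n(Just to illustrate the kind of sequence being recommended here; this is an\nuntested sketch, the table name and the tid are made up, and in practice the\ntids would come from whatever the failing VACUUM reported:\n\nselect heap_force_kill('t1'::regclass, ARRAY['(0,1)'::tid]);\nvacuum (disable_page_skipping) t1;\n\nthat is, the surgery itself followed immediately by a vacuum of the whole\nrelation.)\n\n> > 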
I will try to add this note in the\n> > documentation for this module.\n>\n> It makes sense to recommend VACUUM after fixing the page, but I agree\n> with Sawada-san that it would be sensible to reset the VM bit while\n> doing surgery, since that's the state that the page would be in.\n\nSure, I will try to do that change but I would still recommend to\nalways run VACUUM with DISABLE_PAGE_SKIPPING option on the relation\nthat underwent surgery.\n\nWe\n> should certainly *strongly recommend* to do VACUUM DISABLE_PAGE_SKIPPING,\n> but if users fail to do so, then leaving the VM bit set just means that\n> we know *for certain* that there will be further corruption as soon as\n> the XID counter advances sufficiently.\n>\n\nYeah, I've already added a note for this in the documentation:\n\nNote: \"After a surgery is performed on a damaged relation using this\nmodule, we must run VACUUM with DISABLE_PAGE_SKIPPING option on that\nrelation to bring it back into a sane state.\"\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Aug 2020 09:57:25 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, Aug 19, 2020 at 9:27 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Mon, 17 Aug 2020 at 15:05, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > > pg_force_freeze() can revival a tuple that is already deleted but not\n> > > vacuumed yet. Therefore, the user might need to reindex indexes after\n> > > using that function. For instance, with the following script, the last\n> > > two queries: index scan and seq scan, will return different results.\n> > >\n> > > set enable_seqscan to off;\n> > > set enable_bitmapscan to off;\n> > > set enable_indexonlyscan to off;\n> > > create table tbl (a int primary key);\n> > > insert into tbl values (1);\n> > >\n> > > update tbl set a = a + 100 where a = 1;\n> > >\n> > > explain analyze select * from tbl where a < 200;\n> > >\n> > > -- revive deleted tuple on heap\n> > > select heap_force_freeze('tbl', array['(0,1)'::tid]);\n> > >\n> > > -- index scan returns 2 tuples\n> > > explain analyze select * from tbl where a < 200;\n> > >\n> > > -- seq scan returns 1 tuple\n> > > set enable_seqscan to on;\n> > > explain analyze select * from tbl;\n> > >\n> >\n> > I am not sure if this is the right use-case of pg_force_freeze\n> > function. I think we should only be running pg_force_freeze function\n> > on a tuple for which VACUUM reports \"found xmin ABC from before\n> > relfrozenxid PQR\" sort of error otherwise it might worsen the things\n> > instead of making it better.\n>\n> Should this also be documented? I think that it's hard to force the\n> user to always use this module in the right situation but we need to\n> show at least when to use.\n>\n\nI've already added some examples in the documentation explaining the\nuse-case of force_freeze function. If required, I will also add a note\nabout it.\n\n> > > Also, if a tuple updated and moved to another partition is revived by\n> > > heap_force_freeze(), its ctid still has special values:\n> > > MovedPartitionsOffsetNumber and MovedPartitionsBlockNumber. 
I don't\n> > > see a problem yet caused by a visible tuple having the special ctid\n> > > value, but it might be worth considering either to reset ctid value as\n> > > well or to not freezing already-deleted tuple.\n> > >\n> >\n> > For this as well, the answer remains the same as above.\n>\n> Perhaps the same is true when a tuple header is corrupted including\n> xmin and ctid for some reason and the user wants to fix it? I'm\n> concerned that a live tuple having the wrong ctid will cause SEGV or\n> PANIC error in the future.\n>\n\nIf a tuple header itself is corrupted, then I think we must kill that\ntuple. If only xmin and t_ctid fields are corrupted, then probably we\ncan think of resetting the ctid value of that tuple. However, it won't\nbe always possible to detect the corrupted ctid value. It's quite\npossible that the corrupted ctid field has valid values for block\nnumber and offset number in it, but it's actually corrupted and it\nwould be difficult to consider such ctid as corrupted. Hence, we can't\ndo anything about such types of corruption. Probably in such cases we\nneed to run VACUUM FULL on such tables so that new ctid gets generated\nfor each tuple in the table.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Aug 2020 11:39:41 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, 19 Aug 2020 at 15:09, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> On Wed, Aug 19, 2020 at 9:27 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Mon, 17 Aug 2020 at 15:05, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > >\n> > > > pg_force_freeze() can revival a tuple that is already deleted but not\n> > > > vacuumed yet. Therefore, the user might need to reindex indexes after\n> > > > using that function. For instance, with the following script, the last\n> > > > two queries: index scan and seq scan, will return different results.\n> > > >\n> > > > set enable_seqscan to off;\n> > > > set enable_bitmapscan to off;\n> > > > set enable_indexonlyscan to off;\n> > > > create table tbl (a int primary key);\n> > > > insert into tbl values (1);\n> > > >\n> > > > update tbl set a = a + 100 where a = 1;\n> > > >\n> > > > explain analyze select * from tbl where a < 200;\n> > > >\n> > > > -- revive deleted tuple on heap\n> > > > select heap_force_freeze('tbl', array['(0,1)'::tid]);\n> > > >\n> > > > -- index scan returns 2 tuples\n> > > > explain analyze select * from tbl where a < 200;\n> > > >\n> > > > -- seq scan returns 1 tuple\n> > > > set enable_seqscan to on;\n> > > > explain analyze select * from tbl;\n> > > >\n> > >\n> > > I am not sure if this is the right use-case of pg_force_freeze\n> > > function. I think we should only be running pg_force_freeze function\n> > > on a tuple for which VACUUM reports \"found xmin ABC from before\n> > > relfrozenxid PQR\" sort of error otherwise it might worsen the things\n> > > instead of making it better.\n> >\n> > Should this also be documented? I think that it's hard to force the\n> > user to always use this module in the right situation but we need to\n> > show at least when to use.\n> >\n>\n> I've already added some examples in the documentation explaining the\n> use-case of force_freeze function. 
If required, I will also add a note\n> about it.\n>\n> > > > Also, if a tuple updated and moved to another partition is revived by\n> > > > heap_force_freeze(), its ctid still has special values:\n> > > > MovedPartitionsOffsetNumber and MovedPartitionsBlockNumber. I don't\n> > > > see a problem yet caused by a visible tuple having the special ctid\n> > > > value, but it might be worth considering either to reset ctid value as\n> > > > well or to not freezing already-deleted tuple.\n> > > >\n> > >\n> > > For this as well, the answer remains the same as above.\n> >\n> > Perhaps the same is true when a tuple header is corrupted including\n> > xmin and ctid for some reason and the user wants to fix it? I'm\n> > concerned that a live tuple having the wrong ctid will cause SEGV or\n> > PANIC error in the future.\n> >\n>\n> If a tuple header itself is corrupted, then I think we must kill that\n> tuple. If only xmin and t_ctid fields are corrupted, then probably we\n> can think of resetting the ctid value of that tuple. However, it won't\n> be always possible to detect the corrupted ctid value. It's quite\n> possible that the corrupted ctid field has valid values for block\n> number and offset number in it, but it's actually corrupted and it\n> would be difficult to consider such ctid as corrupted. Hence, we can't\n> do anything about such types of corruption. Probably in such cases we\n> need to run VACUUM FULL on such tables so that new ctid gets generated\n> for each tuple in the table.\n\nUnderstood.\n\nPerhaps such corruption will be able to be detected by new heapam\ncheck functions discussed on another thread. My point was that it\nmight be better to attempt making the tuple header sane state as much\nas possible when fixing a live tuple in order to prevent further\nproblems such as databases crash by malicious attackers.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 19 Aug 2020 19:25:03 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, Aug 19, 2020 at 3:55 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 19 Aug 2020 at 15:09, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > On Wed, Aug 19, 2020 at 9:27 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Mon, 17 Aug 2020 at 15:05, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > >\n> > > > > pg_force_freeze() can revival a tuple that is already deleted but not\n> > > > > vacuumed yet. Therefore, the user might need to reindex indexes after\n> > > > > using that function. 
For instance, with the following script, the last\n> > > > > two queries: index scan and seq scan, will return different results.\n> > > > >\n> > > > > set enable_seqscan to off;\n> > > > > set enable_bitmapscan to off;\n> > > > > set enable_indexonlyscan to off;\n> > > > > create table tbl (a int primary key);\n> > > > > insert into tbl values (1);\n> > > > >\n> > > > > update tbl set a = a + 100 where a = 1;\n> > > > >\n> > > > > explain analyze select * from tbl where a < 200;\n> > > > >\n> > > > > -- revive deleted tuple on heap\n> > > > > select heap_force_freeze('tbl', array['(0,1)'::tid]);\n> > > > >\n> > > > > -- index scan returns 2 tuples\n> > > > > explain analyze select * from tbl where a < 200;\n> > > > >\n> > > > > -- seq scan returns 1 tuple\n> > > > > set enable_seqscan to on;\n> > > > > explain analyze select * from tbl;\n> > > > >\n> > > >\n> > > > I am not sure if this is the right use-case of pg_force_freeze\n> > > > function. I think we should only be running pg_force_freeze function\n> > > > on a tuple for which VACUUM reports \"found xmin ABC from before\n> > > > relfrozenxid PQR\" sort of error otherwise it might worsen the things\n> > > > instead of making it better.\n> > >\n> > > Should this also be documented? I think that it's hard to force the\n> > > user to always use this module in the right situation but we need to\n> > > show at least when to use.\n> > >\n> >\n> > I've already added some examples in the documentation explaining the\n> > use-case of force_freeze function. If required, I will also add a note\n> > about it.\n> >\n> > > > > Also, if a tuple updated and moved to another partition is revived by\n> > > > > heap_force_freeze(), its ctid still has special values:\n> > > > > MovedPartitionsOffsetNumber and MovedPartitionsBlockNumber. I don't\n> > > > > see a problem yet caused by a visible tuple having the special ctid\n> > > > > value, but it might be worth considering either to reset ctid value as\n> > > > > well or to not freezing already-deleted tuple.\n> > > > >\n> > > >\n> > > > For this as well, the answer remains the same as above.\n> > >\n> > > Perhaps the same is true when a tuple header is corrupted including\n> > > xmin and ctid for some reason and the user wants to fix it? I'm\n> > > concerned that a live tuple having the wrong ctid will cause SEGV or\n> > > PANIC error in the future.\n> > >\n> >\n> > If a tuple header itself is corrupted, then I think we must kill that\n> > tuple. If only xmin and t_ctid fields are corrupted, then probably we\n> > can think of resetting the ctid value of that tuple. However, it won't\n> > be always possible to detect the corrupted ctid value. It's quite\n> > possible that the corrupted ctid field has valid values for block\n> > number and offset number in it, but it's actually corrupted and it\n> > would be difficult to consider such ctid as corrupted. Hence, we can't\n> > do anything about such types of corruption. Probably in such cases we\n> > need to run VACUUM FULL on such tables so that new ctid gets generated\n> > for each tuple in the table.\n>\n> Understood.\n>\n> Perhaps such corruption will be able to be detected by new heapam\n> check functions discussed on another thread. My point was that it\n> might be better to attempt making the tuple header sane state as much\n> as possible when fixing a live tuple in order to prevent further\n> problems such as databases crash by malicious attackers.\n>\n\nAgreed. 
So, to handle the ctid related concern that you raised, I'm\nplanning to do the following changes to ensure that the tuple being\nmarked as frozen contains the correct item pointer value. Please let\nme know if you are okay with these changes.\n\n HeapTupleHeader htup;\n+ ItemPointerData ctid;\n\n Assert(heap_force_opt == HEAP_FORCE_FREEZE);\n\n+ ItemPointerSet(&ctid, blkno, offnos[i]);\n+\n htup = (HeapTupleHeader)\nPageGetItem(page, itemid);\n\n+ /*\n+ * Make sure that this tuple holds the\ncorrect item pointer\n+ * value.\n+ */\n+ if\n(!HeapTupleHeaderIndicatesMovedPartitions(htup) &&\n+ !ItemPointerEquals(&ctid, &htup->t_ctid))\n+ ItemPointerSet(&htup->t_ctid,\nblkno, offnos[i]);\n+\n HeapTupleHeaderSetXmin(htup,\nFrozenTransactionId);\n HeapTupleHeaderSetXmax(htup,\nInvalidTransactionId);\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Aug 2020 17:15:06 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, 19 Aug 2020 at 20:45, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> On Wed, Aug 19, 2020 at 3:55 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Wed, 19 Aug 2020 at 15:09, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > >\n> > > On Wed, Aug 19, 2020 at 9:27 AM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > On Mon, 17 Aug 2020 at 15:05, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > > >\n> > > > > > pg_force_freeze() can revival a tuple that is already deleted but not\n> > > > > > vacuumed yet. Therefore, the user might need to reindex indexes after\n> > > > > > using that function. For instance, with the following script, the last\n> > > > > > two queries: index scan and seq scan, will return different results.\n> > > > > >\n> > > > > > set enable_seqscan to off;\n> > > > > > set enable_bitmapscan to off;\n> > > > > > set enable_indexonlyscan to off;\n> > > > > > create table tbl (a int primary key);\n> > > > > > insert into tbl values (1);\n> > > > > >\n> > > > > > update tbl set a = a + 100 where a = 1;\n> > > > > >\n> > > > > > explain analyze select * from tbl where a < 200;\n> > > > > >\n> > > > > > -- revive deleted tuple on heap\n> > > > > > select heap_force_freeze('tbl', array['(0,1)'::tid]);\n> > > > > >\n> > > > > > -- index scan returns 2 tuples\n> > > > > > explain analyze select * from tbl where a < 200;\n> > > > > >\n> > > > > > -- seq scan returns 1 tuple\n> > > > > > set enable_seqscan to on;\n> > > > > > explain analyze select * from tbl;\n> > > > > >\n> > > > >\n> > > > > I am not sure if this is the right use-case of pg_force_freeze\n> > > > > function. I think we should only be running pg_force_freeze function\n> > > > > on a tuple for which VACUUM reports \"found xmin ABC from before\n> > > > > relfrozenxid PQR\" sort of error otherwise it might worsen the things\n> > > > > instead of making it better.\n> > > >\n> > > > Should this also be documented? I think that it's hard to force the\n> > > > user to always use this module in the right situation but we need to\n> > > > show at least when to use.\n> > > >\n> > >\n> > > I've already added some examples in the documentation explaining the\n> > > use-case of force_freeze function. 
If required, I will also add a note\n> > > about it.\n> > >\n> > > > > > Also, if a tuple updated and moved to another partition is revived by\n> > > > > > heap_force_freeze(), its ctid still has special values:\n> > > > > > MovedPartitionsOffsetNumber and MovedPartitionsBlockNumber. I don't\n> > > > > > see a problem yet caused by a visible tuple having the special ctid\n> > > > > > value, but it might be worth considering either to reset ctid value as\n> > > > > > well or to not freezing already-deleted tuple.\n> > > > > >\n> > > > >\n> > > > > For this as well, the answer remains the same as above.\n> > > >\n> > > > Perhaps the same is true when a tuple header is corrupted including\n> > > > xmin and ctid for some reason and the user wants to fix it? I'm\n> > > > concerned that a live tuple having the wrong ctid will cause SEGV or\n> > > > PANIC error in the future.\n> > > >\n> > >\n> > > If a tuple header itself is corrupted, then I think we must kill that\n> > > tuple. If only xmin and t_ctid fields are corrupted, then probably we\n> > > can think of resetting the ctid value of that tuple. However, it won't\n> > > be always possible to detect the corrupted ctid value. It's quite\n> > > possible that the corrupted ctid field has valid values for block\n> > > number and offset number in it, but it's actually corrupted and it\n> > > would be difficult to consider such ctid as corrupted. Hence, we can't\n> > > do anything about such types of corruption. Probably in such cases we\n> > > need to run VACUUM FULL on such tables so that new ctid gets generated\n> > > for each tuple in the table.\n> >\n> > Understood.\n> >\n> > Perhaps such corruption will be able to be detected by new heapam\n> > check functions discussed on another thread. My point was that it\n> > might be better to attempt making the tuple header sane state as much\n> > as possible when fixing a live tuple in order to prevent further\n> > problems such as databases crash by malicious attackers.\n> >\n>\n> Agreed. So, to handle the ctid related concern that you raised, I'm\n> planning to do the following changes to ensure that the tuple being\n> marked as frozen contains the correct item pointer value. Please let\n> me know if you are okay with these changes.\n\nGiven that a live tuple never indicates to ve moved partitions, I\nguess the first condition in the if statement is not necessary. The\nrest looks good to me, although other hackers might think differently.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 20 Aug 2020 14:33:44 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... 
from before relfrozenxid ...\"" }, { "msg_contents": "On Thu, Aug 20, 2020 at 11:04 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 19 Aug 2020 at 20:45, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > On Wed, Aug 19, 2020 at 3:55 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Wed, 19 Aug 2020 at 15:09, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > >\n> > > > On Wed, Aug 19, 2020 at 9:27 AM Masahiko Sawada\n> > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > >\n> > > > > On Mon, 17 Aug 2020 at 15:05, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > > > >\n> > > > > > > pg_force_freeze() can revival a tuple that is already deleted but not\n> > > > > > > vacuumed yet. Therefore, the user might need to reindex indexes after\n> > > > > > > using that function. For instance, with the following script, the last\n> > > > > > > two queries: index scan and seq scan, will return different results.\n> > > > > > >\n> > > > > > > set enable_seqscan to off;\n> > > > > > > set enable_bitmapscan to off;\n> > > > > > > set enable_indexonlyscan to off;\n> > > > > > > create table tbl (a int primary key);\n> > > > > > > insert into tbl values (1);\n> > > > > > >\n> > > > > > > update tbl set a = a + 100 where a = 1;\n> > > > > > >\n> > > > > > > explain analyze select * from tbl where a < 200;\n> > > > > > >\n> > > > > > > -- revive deleted tuple on heap\n> > > > > > > select heap_force_freeze('tbl', array['(0,1)'::tid]);\n> > > > > > >\n> > > > > > > -- index scan returns 2 tuples\n> > > > > > > explain analyze select * from tbl where a < 200;\n> > > > > > >\n> > > > > > > -- seq scan returns 1 tuple\n> > > > > > > set enable_seqscan to on;\n> > > > > > > explain analyze select * from tbl;\n> > > > > > >\n> > > > > >\n> > > > > > I am not sure if this is the right use-case of pg_force_freeze\n> > > > > > function. I think we should only be running pg_force_freeze function\n> > > > > > on a tuple for which VACUUM reports \"found xmin ABC from before\n> > > > > > relfrozenxid PQR\" sort of error otherwise it might worsen the things\n> > > > > > instead of making it better.\n> > > > >\n> > > > > Should this also be documented? I think that it's hard to force the\n> > > > > user to always use this module in the right situation but we need to\n> > > > > show at least when to use.\n> > > > >\n> > > >\n> > > > I've already added some examples in the documentation explaining the\n> > > > use-case of force_freeze function. If required, I will also add a note\n> > > > about it.\n> > > >\n> > > > > > > Also, if a tuple updated and moved to another partition is revived by\n> > > > > > > heap_force_freeze(), its ctid still has special values:\n> > > > > > > MovedPartitionsOffsetNumber and MovedPartitionsBlockNumber. I don't\n> > > > > > > see a problem yet caused by a visible tuple having the special ctid\n> > > > > > > value, but it might be worth considering either to reset ctid value as\n> > > > > > > well or to not freezing already-deleted tuple.\n> > > > > > >\n> > > > > >\n> > > > > > For this as well, the answer remains the same as above.\n> > > > >\n> > > > > Perhaps the same is true when a tuple header is corrupted including\n> > > > > xmin and ctid for some reason and the user wants to fix it? 
I'm\n> > > > > concerned that a live tuple having the wrong ctid will cause SEGV or\n> > > > > PANIC error in the future.\n> > > > >\n> > > >\n> > > > If a tuple header itself is corrupted, then I think we must kill that\n> > > > tuple. If only xmin and t_ctid fields are corrupted, then probably we\n> > > > can think of resetting the ctid value of that tuple. However, it won't\n> > > > be always possible to detect the corrupted ctid value. It's quite\n> > > > possible that the corrupted ctid field has valid values for block\n> > > > number and offset number in it, but it's actually corrupted and it\n> > > > would be difficult to consider such ctid as corrupted. Hence, we can't\n> > > > do anything about such types of corruption. Probably in such cases we\n> > > > need to run VACUUM FULL on such tables so that new ctid gets generated\n> > > > for each tuple in the table.\n> > >\n> > > Understood.\n> > >\n> > > Perhaps such corruption will be able to be detected by new heapam\n> > > check functions discussed on another thread. My point was that it\n> > > might be better to attempt making the tuple header sane state as much\n> > > as possible when fixing a live tuple in order to prevent further\n> > > problems such as databases crash by malicious attackers.\n> > >\n> >\n> > Agreed. So, to handle the ctid related concern that you raised, I'm\n> > planning to do the following changes to ensure that the tuple being\n> > marked as frozen contains the correct item pointer value. Please let\n> > me know if you are okay with these changes.\n>\n> Given that a live tuple never indicates to ve moved partitions, I\n> guess the first condition in the if statement is not necessary. The\n> rest looks good to me, although other hackers might think differently.\n>\n\nOkay, thanks for confirming. I am planning to go ahead with this\napproach. Will later see what others have to say about it.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Aug 2020 11:43:49 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi Masahiko-san,\n\nPlease find the updated patch with the following new changes:\n\n1) It adds the code changes in heap_force_kill function to clear an\nall-visible bit on the visibility map corresponding to the page that\nis marked all-visible. 
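\n\n(As a quick way to see the effect, assuming the pg_visibility extension is\ninstalled; this is an untested sketch with a made-up table name and block\nnumber:\n\nselect * from pg_visibility_map('t1'::regclass, 0);\n\nall_visible should come back false for the affected block once the target\ntuple has been killed.)\n\n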
Along the way it also clears PD_ALL_VISIBLE\nflag on the page header.\n\n2) It adds the code changes in heap_force_freeze function to reset the\nctid value in a tuple header if it is corrupted.\n\n3) It adds several notes and examples in the documentation stating\nwhen and how we need to use the functions provided by this module.\n\nPlease have a look and let me know for any other concern.\n\nThanks,\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Thu, Aug 20, 2020 at 11:43 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> On Thu, Aug 20, 2020 at 11:04 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Wed, 19 Aug 2020 at 20:45, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > >\n> > > On Wed, Aug 19, 2020 at 3:55 PM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > On Wed, 19 Aug 2020 at 15:09, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Aug 19, 2020 at 9:27 AM Masahiko Sawada\n> > > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > > >\n> > > > > > On Mon, 17 Aug 2020 at 15:05, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > > > > >\n> > > > > > > > pg_force_freeze() can revival a tuple that is already deleted but not\n> > > > > > > > vacuumed yet. Therefore, the user might need to reindex indexes after\n> > > > > > > > using that function. For instance, with the following script, the last\n> > > > > > > > two queries: index scan and seq scan, will return different results.\n> > > > > > > >\n> > > > > > > > set enable_seqscan to off;\n> > > > > > > > set enable_bitmapscan to off;\n> > > > > > > > set enable_indexonlyscan to off;\n> > > > > > > > create table tbl (a int primary key);\n> > > > > > > > insert into tbl values (1);\n> > > > > > > >\n> > > > > > > > update tbl set a = a + 100 where a = 1;\n> > > > > > > >\n> > > > > > > > explain analyze select * from tbl where a < 200;\n> > > > > > > >\n> > > > > > > > -- revive deleted tuple on heap\n> > > > > > > > select heap_force_freeze('tbl', array['(0,1)'::tid]);\n> > > > > > > >\n> > > > > > > > -- index scan returns 2 tuples\n> > > > > > > > explain analyze select * from tbl where a < 200;\n> > > > > > > >\n> > > > > > > > -- seq scan returns 1 tuple\n> > > > > > > > set enable_seqscan to on;\n> > > > > > > > explain analyze select * from tbl;\n> > > > > > > >\n> > > > > > >\n> > > > > > > I am not sure if this is the right use-case of pg_force_freeze\n> > > > > > > function. I think we should only be running pg_force_freeze function\n> > > > > > > on a tuple for which VACUUM reports \"found xmin ABC from before\n> > > > > > > relfrozenxid PQR\" sort of error otherwise it might worsen the things\n> > > > > > > instead of making it better.\n> > > > > >\n> > > > > > Should this also be documented? I think that it's hard to force the\n> > > > > > user to always use this module in the right situation but we need to\n> > > > > > show at least when to use.\n> > > > > >\n> > > > >\n> > > > > I've already added some examples in the documentation explaining the\n> > > > > use-case of force_freeze function. If required, I will also add a note\n> > > > > about it.\n> > > > >\n> > > > > > > > Also, if a tuple updated and moved to another partition is revived by\n> > > > > > > > heap_force_freeze(), its ctid still has special values:\n> > > > > > > > MovedPartitionsOffsetNumber and MovedPartitionsBlockNumber. 
I don't\n> > > > > > > > see a problem yet caused by a visible tuple having the special ctid\n> > > > > > > > value, but it might be worth considering either to reset ctid value as\n> > > > > > > > well or to not freezing already-deleted tuple.\n> > > > > > > >\n> > > > > > >\n> > > > > > > For this as well, the answer remains the same as above.\n> > > > > >\n> > > > > > Perhaps the same is true when a tuple header is corrupted including\n> > > > > > xmin and ctid for some reason and the user wants to fix it? I'm\n> > > > > > concerned that a live tuple having the wrong ctid will cause SEGV or\n> > > > > > PANIC error in the future.\n> > > > > >\n> > > > >\n> > > > > If a tuple header itself is corrupted, then I think we must kill that\n> > > > > tuple. If only xmin and t_ctid fields are corrupted, then probably we\n> > > > > can think of resetting the ctid value of that tuple. However, it won't\n> > > > > be always possible to detect the corrupted ctid value. It's quite\n> > > > > possible that the corrupted ctid field has valid values for block\n> > > > > number and offset number in it, but it's actually corrupted and it\n> > > > > would be difficult to consider such ctid as corrupted. Hence, we can't\n> > > > > do anything about such types of corruption. Probably in such cases we\n> > > > > need to run VACUUM FULL on such tables so that new ctid gets generated\n> > > > > for each tuple in the table.\n> > > >\n> > > > Understood.\n> > > >\n> > > > Perhaps such corruption will be able to be detected by new heapam\n> > > > check functions discussed on another thread. My point was that it\n> > > > might be better to attempt making the tuple header sane state as much\n> > > > as possible when fixing a live tuple in order to prevent further\n> > > > problems such as databases crash by malicious attackers.\n> > > >\n> > >\n> > > Agreed. So, to handle the ctid related concern that you raised, I'm\n> > > planning to do the following changes to ensure that the tuple being\n> > > marked as frozen contains the correct item pointer value. Please let\n> > > me know if you are okay with these changes.\n> >\n> > Given that a live tuple never indicates to ve moved partitions, I\n> > guess the first condition in the if statement is not necessary. The\n> > rest looks good to me, although other hackers might think differently.\n> >\n>\n> Okay, thanks for confirming. I am planning to go ahead with this\n> approach. Will later see what others have to say about it.\n>\n> --\n> With Regards,\n> Ashutosh Sharma\n> EnterpriseDB:http://www.enterprisedb.com", "msg_date": "Fri, 21 Aug 2020 18:54:58 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Aug 18, 2020 at 12:14 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> It makes sense to recommend VACUUM after fixing the page, but I agree\n> with Sawada-san that it would be sensible to reset the VM bit while\n> doing surgery, since that's the state that the page would be in. 
We\n> should certainly *strongly recommend* to do VACUUM DISABLE_PAGE_SKIPPING,\n> but if users fail to do so, then leaving the VM bit set just means that\n> we know *for certain* that there will be further corruption as soon as\n> the XID counter advances sufficiently.\n\n+1.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 24 Aug 2020 09:45:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi!\n\n> 21 авг. 2020 г., в 18:24, Ashutosh Sharma <ashu.coek88@gmail.com> написал(а):\n> \n> Please find the updated patch with the following new changes:\n\nDo you have plans to support pg_surgery as external extension? For example for earlier versions of Postgres and for new features, like amcheck_next is maintained.\nISTM that I'll have to use something like that tomorrow and I'm in doubt - should I resurrect our pg_dirty_hands or try your new pg_surgey...\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 24 Aug 2020 22:30:19 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Aug 24, 2020 at 7:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Aug 18, 2020 at 12:14 PM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > It makes sense to recommend VACUUM after fixing the page, but I agree\n> > with Sawada-san that it would be sensible to reset the VM bit while\n> > doing surgery, since that's the state that the page would be in. We\n> > should certainly *strongly recommend* to do VACUUM DISABLE_PAGE_SKIPPING,\n> > but if users fail to do so, then leaving the VM bit set just means that\n> > we know *for certain* that there will be further corruption as soon as\n> > the XID counter advances sufficiently.\n>\n> +1.\n>\n\nThis has been taken care of in the latest v7 patch.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Aug 2020 10:13:38 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Aug 24, 2020 at 11:00 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> Hi!\n>\n> > 21 авг. 2020 г., в 18:24, Ashutosh Sharma <ashu.coek88@gmail.com> написал(а):\n> >\n> > Please find the updated patch with the following new changes:\n>\n> Do you have plans to support pg_surgery as external extension? For example for earlier versions of Postgres and for new features, like amcheck_next is maintained.\n> ISTM that I'll have to use something like that tomorrow and I'm in doubt - should I resurrect our pg_dirty_hands or try your new pg_surgey...\n>\n\nAFAICS, we don't have any plans to support pg_surgery as an external\nextension as of now. Based on the discussion that has happened earlier\nin this thread, I think we might also back-patch this contrib module.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Aug 2020 10:27:44 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... 
from before relfrozenxid ...\"" }, { "msg_contents": "On Fri, 21 Aug 2020 at 22:25, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi Masahiko-san,\n>\n> Please find the updated patch with the following new changes:\n>\n\nThank you for updating the patch!\n\n> 1) It adds the code changes in heap_force_kill function to clear an\n> all-visible bit on the visibility map corresponding to the page that\n> is marked all-visible. Along the way it also clears PD_ALL_VISIBLE\n> flag on the page header.\n\nI think we need to clear all visibility map bits by using\nVISIBILITYMAP_VALID_BITS. Otherwise, the page has all-frozen bit but\nnot all-visible bit, which is not a valid state.\n\n>\n> 2) It adds the code changes in heap_force_freeze function to reset the\n> ctid value in a tuple header if it is corrupted.\n>\n> 3) It adds several notes and examples in the documentation stating\n> when and how we need to use the functions provided by this module.\n>\n> Please have a look and let me know for any other concern.\n>\n\nAnd here are small comments on the heap_surgery.c:\n\n+ /*\n+ * Get the offset numbers from the tids belonging to one particular page\n+ * and process them one by one.\n+ */\n+ blkno = tids_same_page_fetch_offnums(tids, ntids, &next_start_ptr,\n+ offnos);\n+\n+ /* Calculate the number of offsets stored in offnos array. */\n+ noffs = next_start_ptr - curr_start_ptr;\n+\n+ /*\n+ * Update the current start pointer so that next time when\n+ * tids_same_page_fetch_offnums() is called, we can calculate the number\n+ * of offsets present in the offnos array.\n+ */\n+ curr_start_ptr = next_start_ptr;\n+\n+ /* Check whether the block number is valid. */\n+ if (blkno >= nblocks)\n+ {\n+ ereport(NOTICE,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"skipping block %u for relation \\\"%s\\\"\nbecause the block number is out of range\",\n+ blkno, RelationGetRelationName(rel))));\n+ continue;\n+ }\n+\n+ CHECK_FOR_INTERRUPTS();\n\nI guess it would be better to call CHECK_FOR_INTERRUPTS() at the top\nof the do loop for safety. I think it's unlikely to happen but the\nuser might mistakenly specify a lot of wrong block numbers.\n\n----\n+ offnos = (OffsetNumber *) palloc(ntids * sizeof(OffsetNumber));\n+ noffs = curr_start_ptr = next_start_ptr = 0;\n+ nblocks = RelationGetNumberOfBlocks(rel);\n+\n+ do\n+ {\n\n(snip)\n\n+\n+ /*\n+ * Get the offset numbers from the tids belonging to one particular page\n+ * and process them one by one.\n+ */\n+ blkno = tids_same_page_fetch_offnums(tids, ntids, &next_start_ptr,\n+ offnos);\n+\n+ /* Calculate the number of offsets stored in offnos array. */\n+ noffs = next_start_ptr - curr_start_ptr;\n+\n\n(snip)\n\n+ /* No ereport(ERROR) from here until all the changes are logged. */\n+ START_CRIT_SECTION();\n+\n+ for (i = 0; i < noffs; i++)\n\nYou copy all offset numbers belonging to the same page to palloc'd\narray, offnos, and iterate it while processing the tuples. I might be\nmissing something but I think we can do that without allocating the\nspace for offset numbers. Is there any reason for this? I guess we can\ndo that by just iterating the sorted tids array.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 25 Aug 2020 17:08:53 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... 
from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, 25 Aug 2020 at 17:08, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 21 Aug 2020 at 22:25, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Hi Masahiko-san,\n> >\n> > Please find the updated patch with the following new changes:\n> >\n>\n> Thank you for updating the patch!\n>\n> > 1) It adds the code changes in heap_force_kill function to clear an\n> > all-visible bit on the visibility map corresponding to the page that\n> > is marked all-visible. Along the way it also clears PD_ALL_VISIBLE\n> > flag on the page header.\n>\n> I think we need to clear all visibility map bits by using\n> VISIBILITYMAP_VALID_BITS. Otherwise, the page has all-frozen bit but\n> not all-visible bit, which is not a valid state.\n>\n> >\n> > 2) It adds the code changes in heap_force_freeze function to reset the\n> > ctid value in a tuple header if it is corrupted.\n> >\n> > 3) It adds several notes and examples in the documentation stating\n> > when and how we need to use the functions provided by this module.\n> >\n> > Please have a look and let me know for any other concern.\n> >\n>\n> And here are small comments on the heap_surgery.c:\n>\n> + /*\n> + * Get the offset numbers from the tids belonging to one particular page\n> + * and process them one by one.\n> + */\n> + blkno = tids_same_page_fetch_offnums(tids, ntids, &next_start_ptr,\n> + offnos);\n> +\n> + /* Calculate the number of offsets stored in offnos array. */\n> + noffs = next_start_ptr - curr_start_ptr;\n> +\n> + /*\n> + * Update the current start pointer so that next time when\n> + * tids_same_page_fetch_offnums() is called, we can calculate the number\n> + * of offsets present in the offnos array.\n> + */\n> + curr_start_ptr = next_start_ptr;\n> +\n> + /* Check whether the block number is valid. */\n> + if (blkno >= nblocks)\n> + {\n> + ereport(NOTICE,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"skipping block %u for relation \\\"%s\\\"\n> because the block number is out of range\",\n> + blkno, RelationGetRelationName(rel))));\n> + continue;\n> + }\n> +\n> + CHECK_FOR_INTERRUPTS();\n>\n> I guess it would be better to call CHECK_FOR_INTERRUPTS() at the top\n> of the do loop for safety. I think it's unlikely to happen but the\n> user might mistakenly specify a lot of wrong block numbers.\n>\n> ----\n> + offnos = (OffsetNumber *) palloc(ntids * sizeof(OffsetNumber));\n> + noffs = curr_start_ptr = next_start_ptr = 0;\n> + nblocks = RelationGetNumberOfBlocks(rel);\n> +\n> + do\n> + {\n>\n> (snip)\n>\n> +\n> + /*\n> + * Get the offset numbers from the tids belonging to one particular page\n> + * and process them one by one.\n> + */\n> + blkno = tids_same_page_fetch_offnums(tids, ntids, &next_start_ptr,\n> + offnos);\n> +\n> + /* Calculate the number of offsets stored in offnos array. */\n> + noffs = next_start_ptr - curr_start_ptr;\n> +\n>\n> (snip)\n>\n> + /* No ereport(ERROR) from here until all the changes are logged. */\n> + START_CRIT_SECTION();\n> +\n> + for (i = 0; i < noffs; i++)\n>\n> You copy all offset numbers belonging to the same page to palloc'd\n> array, offnos, and iterate it while processing the tuples. I might be\n> missing something but I think we can do that without allocating the\n> space for offset numbers. Is there any reason for this? I guess we can\n> do that by just iterating the sorted tids array.\n>\n\nLet me share other comments on the latest version patch:\n\nSome words need to be tagged. 
For instance, I found the following words:\n\nVACUUM\nDISABLE_PAGE_SKIPPING\nHEAP_XMIN_FROZEN\nHEAP_XMAX_INVALID\n\n---\n+test=# select ctid from t1 where xmin = 507;\n+ ctid\n+-------\n+ (0,3)\n+(1 row)\n+\n+test=# select heap_force_freeze('t1'::regclass, ARRAY['(0, 3)']::tid[]);\n+-[ RECORD 1 ]-----+-\n+heap_force_freeze |\n\nI think it's better to use a consistent output format. The former uses\nthe normal format whereas the latter uses the expanded format.\n\n---\n+ <note>\n+ <para>\n+ While performing surgery on a damaged relation, we must not be doing anything\n+ else on that relation in parallel. This is to ensure that when we are\n+ operating on a damaged tuple there is no other transaction trying to modify\n+ that tuple.\n+ </para>\n+ </note>\n\nIf we prefer to avoid concurrent operations on the target relation why\ndon't we use AccessExclusiveLock?\n\n---\n+CREATE FUNCTION heap_force_kill(reloid regclass, tids tid[])\n+RETURNS VOID\n+AS 'MODULE_PATHNAME', 'heap_force_kill'\n+LANGUAGE C STRICT;\n\n+CREATE FUNCTION heap_force_freeze(reloid regclass, tids tid[])\n+RETURNS VOID\n+AS 'MODULE_PATHNAME', 'heap_force_freeze'\n+LANGUAGE C STRICT;\n\nI think these functions should be PARALLEL UNSAFE.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 25 Aug 2020 21:17:05 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi Masahiko-san,\n\nThank you for the review. Please check my comments inline below:\n\nOn Tue, Aug 25, 2020 at 1:39 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 21 Aug 2020 at 22:25, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Hi Masahiko-san,\n> >\n> > Please find the updated patch with the following new changes:\n> >\n>\n> Thank you for updating the patch!\n>\n> > 1) It adds the code changes in heap_force_kill function to clear an\n> > all-visible bit on the visibility map corresponding to the page that\n> > is marked all-visible. Along the way it also clears PD_ALL_VISIBLE\n> > flag on the page header.\n>\n> I think we need to clear all visibility map bits by using\n> VISIBILITYMAP_VALID_BITS. Otherwise, the page has all-frozen bit but\n> not all-visible bit, which is not a valid state.\n>\n\nYeah, makes sense, I will do that change in the next version of patch.\n\n> >\n> > 2) It adds the code changes in heap_force_freeze function to reset the\n> > ctid value in a tuple header if it is corrupted.\n> >\n> > 3) It adds several notes and examples in the documentation stating\n> > when and how we need to use the functions provided by this module.\n> >\n> > Please have a look and let me know for any other concern.\n> >\n>\n> And here are small comments on the heap_surgery.c:\n>\n> + /*\n> + * Get the offset numbers from the tids belonging to one particular page\n> + * and process them one by one.\n> + */\n> + blkno = tids_same_page_fetch_offnums(tids, ntids, &next_start_ptr,\n> + offnos);\n> +\n> + /* Calculate the number of offsets stored in offnos array. 
*/\n> + noffs = next_start_ptr - curr_start_ptr;\n> +\n> + /*\n> + * Update the current start pointer so that next time when\n> + * tids_same_page_fetch_offnums() is called, we can calculate the number\n> + * of offsets present in the offnos array.\n> + */\n> + curr_start_ptr = next_start_ptr;\n> +\n> + /* Check whether the block number is valid. */\n> + if (blkno >= nblocks)\n> + {\n> + ereport(NOTICE,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"skipping block %u for relation \\\"%s\\\"\n> because the block number is out of range\",\n> + blkno, RelationGetRelationName(rel))));\n> + continue;\n> + }\n> +\n> + CHECK_FOR_INTERRUPTS();\n>\n> I guess it would be better to call CHECK_FOR_INTERRUPTS() at the top\n> of the do loop for safety. I think it's unlikely to happen but the\n> user might mistakenly specify a lot of wrong block numbers.\n>\n\nOkay, np, will shift it to top of the do loop.\n\n> ----\n> + offnos = (OffsetNumber *) palloc(ntids * sizeof(OffsetNumber));\n> + noffs = curr_start_ptr = next_start_ptr = 0;\n> + nblocks = RelationGetNumberOfBlocks(rel);\n> +\n> + do\n> + {\n>\n> (snip)\n>\n> +\n> + /*\n> + * Get the offset numbers from the tids belonging to one particular page\n> + * and process them one by one.\n> + */\n> + blkno = tids_same_page_fetch_offnums(tids, ntids, &next_start_ptr,\n> + offnos);\n> +\n> + /* Calculate the number of offsets stored in offnos array. */\n> + noffs = next_start_ptr - curr_start_ptr;\n> +\n>\n> (snip)\n>\n> + /* No ereport(ERROR) from here until all the changes are logged. */\n> + START_CRIT_SECTION();\n> +\n> + for (i = 0; i < noffs; i++)\n>\n> You copy all offset numbers belonging to the same page to palloc'd\n> array, offnos, and iterate it while processing the tuples. I might be\n> missing something but I think we can do that without allocating the\n> space for offset numbers. Is there any reason for this? I guess we can\n> do that by just iterating the sorted tids array.\n>\n\nHmmm.. okay, I see your point. I think probably what you are trying to\nsuggest here is to make use of the current and next start pointers to\nget the tids belonging to the same page and process them one by one\ninstead of fetching the offset numbers of all tids belonging to one\npage into the offnos array and then iterate through the offnos array.\nI think that is probably possible and I will try to do that in the\nnext version of patch. If there is something else that you have in\nyour mind, please let me know.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Aug 2020 18:08:38 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Aug 25, 2020 at 8:17 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> + <note>\n> + <para>\n> + While performing surgery on a damaged relation, we must not be doing anything\n> + else on that relation in parallel. This is to ensure that when we are\n> + operating on a damaged tuple there is no other transaction trying to modify\n> + that tuple.\n> + </para>\n> + </note>\n>\n> If we prefer to avoid concurrent operations on the target relation why\n> don't we use AccessExclusiveLock?\n\nI disagree with the content of the note. It's up to the user whether\nto perform any concurrent operations on the target relation, and in\nmany cases it would be fine to do so. 
Users who can afford to take the\ntable off-line to repair the problem don't really need this tool in\nthe first place.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 25 Aug 2020 14:21:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Tue, Aug 25, 2020 at 11:51 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Aug 25, 2020 at 8:17 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > + <note>\n> > + <para>\n> > + While performing surgery on a damaged relation, we must not be doing anything\n> > + else on that relation in parallel. This is to ensure that when we are\n> > + operating on a damaged tuple there is no other transaction trying to modify\n> > + that tuple.\n> > + </para>\n> > + </note>\n> >\n> > If we prefer to avoid concurrent operations on the target relation why\n> > don't we use AccessExclusiveLock?\n>\n> I disagree with the content of the note. It's up to the user whether\n> to perform any concurrent operations on the target relation, and in\n> many cases it would be fine to do so. Users who can afford to take the\n> table off-line to repair the problem don't really need this tool in\n> the first place.\n>\n\nThe only reason I added this note was to ensure that we do not revive\nthe tuple that is deleted but not yet vacuumed. There is one\ncorner-case scenario as reported by you in - [1] where you have\nexplained a scenario under which vacuum can report \"found xmin ...\nfrom before relfrozenxid ...\" sort of error for the deleted tuples.\nAnd as per the explanation provided there, it can happen when there\nare multiple transactions operating on the same tuple. However, I\nthink we can take care of this scenario by doing some code changes in\nheap_force_freeze to identify the deleted tuples and maybe skip such\ntuples. So, yeah, I will do the code changes for handling this and\nremove the note added in the documentation. Thank you Robert and\nMasahiko-san for pointing this out.\n\n[1] - https://www.postgresql.org/message-id/CA%2BTgmobfJ8CkabKJZ-1FGfvbSz%2Bb8bBX807Y6hHEtVfzVe%2Bg6A%40mail.gmail.com\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 Aug 2020 07:18:49 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Thanks for the review. Please find my comments inline below:\n\nOn Tue, Aug 25, 2020 at 5:47 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> Let me share other comments on the latest version patch:\n>\n> Some words need to be tagged. For instance, I found the following words:\n>\n> VACUUM\n> DISABLE_PAGE_SKIPPING\n> HEAP_XMIN_FROZEN\n> HEAP_XMAX_INVALID\n>\n\nOkay, done.\n\n> ---\n> +test=# select ctid from t1 where xmin = 507;\n> + ctid\n> +-------\n> + (0,3)\n> +(1 row)\n> +\n> +test=# select heap_force_freeze('t1'::regclass, ARRAY['(0, 3)']::tid[]);\n> +-[ RECORD 1 ]-----+-\n> +heap_force_freeze |\n>\n> I think it's better to use a consistent output format. 
The former uses\n> the normal format whereas the latter uses the expanded format.\n>\n\nYep, makes sense, done.\n\n> ---\n> + <note>\n> + <para>\n> + While performing surgery on a damaged relation, we must not be doing anything\n> + else on that relation in parallel. This is to ensure that when we are\n> + operating on a damaged tuple there is no other transaction trying to modify\n> + that tuple.\n> + </para>\n> + </note>\n>\n> If we prefer to avoid concurrent operations on the target relation why\n> don't we use AccessExclusiveLock?\n>\n\nRemoved this note from the documentation and added a note saying: \"The\nuser needs to ensure that they do not operate pg_force_freeze function\non a deleted tuple because it may revive the deleted tuple.\"\n\n> ---\n> +CREATE FUNCTION heap_force_kill(reloid regclass, tids tid[])\n> +RETURNS VOID\n> +AS 'MODULE_PATHNAME', 'heap_force_kill'\n> +LANGUAGE C STRICT;\n>\n> +CREATE FUNCTION heap_force_freeze(reloid regclass, tids tid[])\n> +RETURNS VOID\n> +AS 'MODULE_PATHNAME', 'heap_force_freeze'\n> +LANGUAGE C STRICT;\n>\n> I think these functions should be PARALLEL UNSAFE.\n>\n\nBy default the functions are marked PARALLEL UNSAFE, so I think there\nis nothing to do here.\n\nAttached patch with above changes.\n\nThis patch also takes care of all the other review comments from - [1].\n\n[1] - https://www.postgresql.org/message-id/CA%2Bfd4k6%2BJWq2MfQt5b7fSJ2wMvCes9TRfbDhVO_fQP9B8JJRAA%40mail.gmail.com\n\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com", "msg_date": "Wed, 26 Aug 2020 17:06:09 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, Aug 26, 2020 at 7:36 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> Removed this note from the documentation and added a note saying: \"The\n> user needs to ensure that they do not operate pg_force_freeze function\n> on a deleted tuple because it may revive the deleted tuple.\"\n\nI do not agree with that note, either. I believe that trying to tell\npeople what things specifically they should do or avoid doing with the\ntool is the wrong approach. Instead, the thrust of the message should\nbe to tell people that if you use this, it may corrupt your database,\nand that's your problem. The difficulty with telling people what\nspecifically they ought to avoid doing is that experts will be annoyed\nto be told that something is not safe when they know that it is fine,\nand non-experts will think that some uses are safer than they really\nare.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 26 Aug 2020 11:49:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, Aug 26, 2020 at 9:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Aug 26, 2020 at 7:36 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > Removed this note from the documentation and added a note saying: \"The\n> > user needs to ensure that they do not operate pg_force_freeze function\n> > on a deleted tuple because it may revive the deleted tuple.\"\n>\n> I do not agree with that note, either. I believe that trying to tell\n> people what things specifically they should do or avoid doing with the\n> tool is the wrong approach. 
Instead, the thrust of the message should\n> be to tell people that if you use this, it may corrupt your database,\n> and that's your problem. The difficulty with telling people what\n> specifically they ought to avoid doing is that experts will be annoyed\n> to be told that something is not safe when they know that it is fine,\n> and non-experts will think that some uses are safer than they really\n> are.\n>\n\nOkay, point noted.\n\nThanks,\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 27 Aug 2020 07:56:21 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, Aug 26, 2020 at 10:26 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> Okay, point noted.\n\nI spent some time today working on this patch. I'm fairly happy with\nit now and intend to commit it if nobody sees a big problem with that.\nPer discussion, I do not intend to back-patch at this time. The two\nmost significant changes I made to your version are:\n\n1. I changed things around to avoid using any form of ereport() in a\ncritical section. I'm not actually sure whether it is project policy\nto avoid ereport(NOTICE, ...) or similar in a critical section, but it\nseems prudent, because if anything fails in a critical section, we\nwill PANIC, so doing fewer things there seems prudent.\n\n2. I changed the code so that it does not try to follow redirected\nline pointers; instead, it skips them with an appropriate message, as\nwe were already doing for dead and unused line pointers. I think the\nway you had it coded might've been my suggestion originally, but the\nmore I looked into it the less I liked it. One problem is that it\ndidn't match the docs. A second is that following a corrupted line\npointer might index off the end of the line pointer array, and while\nthat probably shouldn't happen, we are talking about corruption\nrecovery here. Then I realized that, as you coded it, if the line\npointer was redirected to a line pointer that is in turn dead (or\nunused, if there's corruption) the user would get a NOTICE complaining\nabout a TID they hadn't specified, which seems like it would be very\nconfusing. I thought about trying to fix all that stuff, but it just\ndidn't seem worth it, because I can't think of a good reason to pass\nthis function the TID of a redirected line pointer in the first place.\nIf you're doing surgery, you should probably specify the exact thing\nupon which you want to operate, not some other thing that points to\nit.\n\nHere is a list of other changes I made:\n\n* Added a .gitignore file.\n* Renamed the regression test file from pg_surgery to heap_surgery to\nmatch the name of the single C source file we currently have.\n* Capitalized TID in a few places.\n* Ran pgindent.\n* Adjusted various comments.\n* Removed the check for an empty TID array. I don't see any reason why\nthis should be an error case and I don't see much precedent for having\nsuch a check.\n* Fixed the code to work properly with that change and added a test case.\n* Added a check that the array is not multi-dimensional.\n* Put the AM type check before the relkind check, following existing precedent.\n* Adjusted the check to use the AM OID rather than the handler OID,\nfollowing existing precedent. 
Fixed the message wording accordingly.\n* Changed the documentation wording to say less about specific\nrecovery procedures and focus more on the general idea that this is\ndangerous.\n* Removed all but one of the test cases that checked what happens if\nyou use this on a non-heap; three tests for basically the same thing\nseemed excessive.\n* Added some additional tests to improve code coverage. There are now\nonly a handful of lines not covered.\n* Reorganized the test cases somewhat.\n\nNew patch attached.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 27 Aug 2020 16:14:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "\n\n> On Aug 26, 2020, at 4:36 AM, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> \n> This patch also takes care of all the other review comments from - [1].\n> \n> [1] - https://www.postgresql.org/message-id/CA%2Bfd4k6%2BJWq2MfQt5b7fSJ2wMvCes9TRfbDhVO_fQP9B8JJRAA%40mail.gmail.com\n> \n> \n> --\n> With Regards,\n> Ashutosh Sharma\n> EnterpriseDB:http://www.enterprisedb.com\n> <v8-0001-Add-contrib-pg_surgery-to-perform-surgery-on-a-damag.patch>\n\n\nHi Ashutosh,\n\nI took a look at the v8 patch, created a commitfest entry [1] because I did not find one already existent, and have the following review comments:\n\n\nHeapTupleForceOption should be added to src/tools/pgindent/typedefs.list.\n\n\nThe tidcmp function can be removed, and ItemPointerCompare used directly by qsort as:\n\n- qsort((void*) tids, ntids, sizeof(ItemPointerData), tidcmp);\n+ qsort((void*) tids, ntids, sizeof(ItemPointerData),\n+ (int (*) (const void *, const void *)) ItemPointerCompare);\n\n\nsanity_check_tid_array() has two error messages:\n\n \"array must not contain nulls\"\n \"empty tid array\"\n\nI would change the first to say \"tid array must not contain nulls\", as \"tid\" is the name of the parameter being checked. It is also more consistent with the second error message, but that doesn't matter to me so much, as I'd argue for removing the second check. I don't see why an empty array should draw an error. It seems more reasonable to just return early since there is no work to do. Consider if somebody uses a function that returns the tids for all corrupt tuples in a table, aggregates that into an array, and hands that to this function. It doesn't seem like an error for that aggregated array to have zero elements in it. I suppose you could emit a NOTICE in this case?\n\n\nUpthread:\n> On Aug 13, 2020, at 12:03 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n>> This looks like a very good suggestion to me. I will do this change in\n>> the next version. Just wondering if we should be doing similar changes\n>> in other contrib modules (like pgrowlocks, pageinspect and\n>> pgstattuple) as well?\n> \n> It seems like it should be consistent, but I'm not sure the proposed\n> change is really an improvement.\n\nYou have used Asim's proposed check:\n\n if (rel->rd_amhandler != HEAP_TABLE_AM_HANDLER_OID)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"only the relation using heap_tableam_handler is supported\")));\n\nwhich Robert seems unenthusiastic about, but if you are going that direction, I think at least the language of the error message should be changed. 
I recommend something like:\n\n if (rel->rd_amhandler != HEAP_TABLE_AM_HANDLER_OID)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n- errmsg(\"only the relation using heap_tableam_handler is supported\")));\n+ errmsg(\"\\\"%s\\\" does not use a heap access method\",\n+ RelationGetRelationName(rel))));\n\nwhere \"a heap access method\" could also be written as \"a heap table type access method\", \"a heap table compatible access method\", and so forth. There doesn't seem to be enough precedent to dictate exactly how to phrase this, or perhaps I'm just not looking in the right place.\n\n\nThe header comment for function find_tids_one_page should state the requirement that the tids array must be sorted.\n\n\nThe heap_force_common function contains multiple ereport(NOTICE,...) within a critical section. I don't think that is normal practice. Can those reports be buffered until after the critical section is exited? You also have a CHECK_FOR_INTERRUPTS within the critical section.\n\n[1] https://commitfest.postgresql.org/29/2700/\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 27 Aug 2020 13:41:31 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Fri, Aug 28, 2020 at 1:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Aug 26, 2020 at 10:26 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > Okay, point noted.\n>\n> I spent some time today working on this patch. I'm fairly happy with\n> it now and intend to commit it if nobody sees a big problem with that.\n> Per discussion, I do not intend to back-patch at this time. The two\n> most significant changes I made to your version are:\n>\n> 1. I changed things around to avoid using any form of ereport() in a\n> critical section. I'm not actually sure whether it is project policy\n> to avoid ereport(NOTICE, ...) or similar in a critical section, but it\n> seems prudent, because if anything fails in a critical section, we\n> will PANIC, so doing fewer things there seems prudent.\n>\n> 2. I changed the code so that it does not try to follow redirected\n> line pointers; instead, it skips them with an appropriate message, as\n> we were already doing for dead and unused line pointers. I think the\n> way you had it coded might've been my suggestion originally, but the\n> more I looked into it the less I liked it. One problem is that it\n> didn't match the docs. A second is that following a corrupted line\n> pointer might index off the end of the line pointer array, and while\n> that probably shouldn't happen, we are talking about corruption\n> recovery here. Then I realized that, as you coded it, if the line\n> pointer was redirected to a line pointer that is in turn dead (or\n> unused, if there's corruption) the user would get a NOTICE complaining\n> about a TID they hadn't specified, which seems like it would be very\n> confusing. 
I thought about trying to fix all that stuff, but it just\n> didn't seem worth it, because I can't think of a good reason to pass\n> this function the TID of a redirected line pointer in the first place.\n> If you're doing surgery, you should probably specify the exact thing\n> upon which you want to operate, not some other thing that points to\n> it.\n>\n> Here is a list of other changes I made:\n>\n> * Added a .gitignore file.\n> * Renamed the regression test file from pg_surgery to heap_surgery to\n> match the name of the single C source file we currently have.\n> * Capitalized TID in a few places.\n> * Ran pgindent.\n> * Adjusted various comments.\n> * Removed the check for an empty TID array. I don't see any reason why\n> this should be an error case and I don't see much precedent for having\n> such a check.\n> * Fixed the code to work properly with that change and added a test case.\n> * Added a check that the array is not multi-dimensional.\n> * Put the AM type check before the relkind check, following existing precedent.\n> * Adjusted the check to use the AM OID rather than the handler OID,\n> following existing precedent. Fixed the message wording accordingly.\n> * Changed the documentation wording to say less about specific\n> recovery procedures and focus more on the general idea that this is\n> dangerous.\n> * Removed all but one of the test cases that checked what happens if\n> you use this on a non-heap; three tests for basically the same thing\n> seemed excessive.\n> * Added some additional tests to improve code coverage. There are now\n> only a handful of lines not covered.\n> * Reorganized the test cases somewhat.\n>\n> New patch attached.\n>\n\nThank you Robert for the patch. I've looked into the changes you've\nmade to the v8 patch and they all look good to me.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 28 Aug 2020 08:22:50 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi Mark,\n\nThanks for the review. Please find my comments inline below:\n\n> HeapTupleForceOption should be added to src/tools/pgindent/typedefs.list.\n>\n\nThis has been fixed in the v9 patch.\n\n>\n> The tidcmp function can be removed, and ItemPointerCompare used directly by qsort as:\n>\n> - qsort((void*) tids, ntids, sizeof(ItemPointerData), tidcmp);\n> + qsort((void*) tids, ntids, sizeof(ItemPointerData),\n> + (int (*) (const void *, const void *)) ItemPointerCompare);\n>\n\nWill have a look into this.\n\n>\n> sanity_check_tid_array() has two error messages:\n>\n> \"array must not contain nulls\"\n> \"empty tid array\"\n>\n> I would change the first to say \"tid array must not contain nulls\", as \"tid\" is the name of the parameter being checked. It is also more consistent with the second error message, but that doesn't matter to me so much, as I'd argue for removing the second check. I don't see why an empty array should draw an error. It seems more reasonable to just return early since there is no work to do. Consider if somebody uses a function that returns the tids for all corrupt tuples in a table, aggregates that into an array, and hands that to this function. It doesn't seem like an error for that aggregated array to have zero elements in it. 
I suppose you could emit a NOTICE in this case?\n>\n\nThis comment is no more valid as per the changes done in the v9 patch.\n\n>\n> Upthread:\n> > On Aug 13, 2020, at 12:03 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> >> This looks like a very good suggestion to me. I will do this change in\n> >> the next version. Just wondering if we should be doing similar changes\n> >> in other contrib modules (like pgrowlocks, pageinspect and\n> >> pgstattuple) as well?\n> >\n> > It seems like it should be consistent, but I'm not sure the proposed\n> > change is really an improvement.\n>\n> You have used Asim's proposed check:\n>\n> if (rel->rd_amhandler != HEAP_TABLE_AM_HANDLER_OID)\n> ereport(ERROR,\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> errmsg(\"only the relation using heap_tableam_handler is supported\")));\n>\n> which Robert seems unenthusiastic about, but if you are going that direction, I think at least the language of the error message should be changed. I recommend something like:\n>\n> if (rel->rd_amhandler != HEAP_TABLE_AM_HANDLER_OID)\n> ereport(ERROR,\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> - errmsg(\"only the relation using heap_tableam_handler is supported\")));\n> + errmsg(\"\\\"%s\\\" does not use a heap access method\",\n> + RelationGetRelationName(rel))));\n>\n> where \"a heap access method\" could also be written as \"a heap table type access method\", \"a heap table compatible access method\", and so forth. There doesn't seem to be enough precedent to dictate exactly how to phrase this, or perhaps I'm just not looking in the right place.\n>\n\nSame here. This also looks invalid as per the changes done in v9 patch.\n\n>\n> The header comment for function find_tids_one_page should state the requirement that the tids array must be sorted.\n>\n\nOkay, will add a comment for this.\n\n>\n> The heap_force_common function contains multiple ereport(NOTICE,...) within a critical section. I don't think that is normal practice. Can those reports be buffered until after the critical section is exited? You also have a CHECK_FOR_INTERRUPTS within the critical section.\n>\n\nThis has been fixed in the v9 patch.\n\n> [1] https://commitfest.postgresql.org/29/2700/\n> —\n\nThanks for adding a commitfest entry for this.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 28 Aug 2020 08:40:46 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Fri, 28 Aug 2020 at 05:14, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Aug 26, 2020 at 10:26 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > Okay, point noted.\n>\n> I spent some time today working on this patch. I'm fairly happy with\n> it now and intend to commit it if nobody sees a big problem with that.\n> Per discussion, I do not intend to back-patch at this time. The two\n> most significant changes I made to your version are:\n\nThank you for updating the patch.\n\n>\n> 1. I changed things around to avoid using any form of ereport() in a\n> critical section. I'm not actually sure whether it is project policy\n> to avoid ereport(NOTICE, ...) or similar in a critical section, but it\n> seems prudent, because if anything fails in a critical section, we\n> will PANIC, so doing fewer things there seems prudent.\n>\n> 2. 
I changed the code so that it does not try to follow redirected\n> line pointers; instead, it skips them with an appropriate message, as\n> we were already doing for dead and unused line pointers. I think the\n> way you had it coded might've been my suggestion originally, but the\n> more I looked into it the less I liked it. One problem is that it\n> didn't match the docs. A second is that following a corrupted line\n> pointer might index off the end of the line pointer array, and while\n> that probably shouldn't happen, we are talking about corruption\n> recovery here. Then I realized that, as you coded it, if the line\n> pointer was redirected to a line pointer that is in turn dead (or\n> unused, if there's corruption) the user would get a NOTICE complaining\n> about a TID they hadn't specified, which seems like it would be very\n> confusing. I thought about trying to fix all that stuff, but it just\n> didn't seem worth it, because I can't think of a good reason to pass\n> this function the TID of a redirected line pointer in the first place.\n> If you're doing surgery, you should probably specify the exact thing\n> upon which you want to operate, not some other thing that points to\n> it.\n>\n> Here is a list of other changes I made:\n>\n> * Added a .gitignore file.\n> * Renamed the regression test file from pg_surgery to heap_surgery to\n> match the name of the single C source file we currently have.\n> * Capitalized TID in a few places.\n> * Ran pgindent.\n> * Adjusted various comments.\n> * Removed the check for an empty TID array. I don't see any reason why\n> this should be an error case and I don't see much precedent for having\n> such a check.\n> * Fixed the code to work properly with that change and added a test case.\n> * Added a check that the array is not multi-dimensional.\n> * Put the AM type check before the relkind check, following existing precedent.\n> * Adjusted the check to use the AM OID rather than the handler OID,\n> following existing precedent. Fixed the message wording accordingly.\n> * Changed the documentation wording to say less about specific\n> recovery procedures and focus more on the general idea that this is\n> dangerous.\n\nYou've removed the description about executing VACUUM with\nDISABLE_PAGE_SKIPPING option on the target relation after using\npg_surgery functions from the doc but I guess it’s better to recommend\nthat in the doc for safety. Could you please tell me the reason for\nremoving that?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 28 Aug 2020 17:06:27 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... 
from before relfrozenxid ...\"" }, { "msg_contents": "> > The tidcmp function can be removed, and ItemPointerCompare used\ndirectly by qsort as:\n> >\n> > - qsort((void*) tids, ntids, sizeof(ItemPointerData),\ntidcmp);\n> > + qsort((void*) tids, ntids, sizeof(ItemPointerData),\n> > + (int (*) (const void *, const void *))\nItemPointerCompare);\n> >\n>\n> Will have a look into this.\n>\n\nWe can certainly do this way, but I would still prefer having a comparator\nfunction (tidcmp) here for the reasons that it makes the code look a bit\ncleaner, it also makes us more consistent with the way the comparator\nfunction argument is being passed to qsort at several other places in\npostgres which kinda of increases the code readability and simplicity. For\ne.g. there is a comparator function for gin that does the same thing as\ntidcmp is doing here. See below:\n\nstatic int\nqsortCompareItemPointers(const void *a, const void *b)\n{\n int res = ginCompareItemPointers((ItemPointer) a, (ItemPointer)\nb);\n\n /* Assert that there are no equal item pointers being sorted */\n Assert(res != 0);\n return res;\n}\n\nIn this case as well, it could have been done the way you are suggesting,\nbut it seems like writing a small comparator function with the prototype\nthat qsort accepts looked like a better option. Considering this, I am just\nleaving this as-it-is. Please let me know if you feel the other way round.\n\n> > The header comment for function find_tids_one_page should state the\nrequirement that the tids array must be sorted.\n> >\n>\n> Okay, will add a comment for this.\n>\n\nAdded a comment for this in the attached patch.\n\nPlease have a look into the attached patch for the changes and let me know\nfor any other concerns. Thank you.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com", "msg_date": "Fri, 28 Aug 2020 15:25:19 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Fri, Aug 28, 2020 at 5:55 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> We can certainly do this way, but I would still prefer having a comparator function (tidcmp) here for the reasons that it makes the code look a bit cleaner, it also makes us more consistent with the way the comparator function argument is being passed to qsort at several other places in postgres which kinda of increases the code readability and simplicity. For e.g. there is a comparator function for gin that does the same thing as tidcmp is doing here.\n\nMe too. Casting one kind of function pointer to another kind of\nfunction pointer assumes that the compiler is using the same\nargument-passing conventions in both cases, which seems slightly\nrisky. It also means that if the signature for the function were to\ndiverge further from the signature that we need in the future, the\ncompiler might not warn us about it. Perhaps there is some case where\nthe performance gains would be sufficiently to justify those risks,\nbut this is certainly not that case.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 28 Aug 2020 10:08:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... 
from before relfrozenxid ...\"" }, { "msg_contents": "On Fri, Aug 28, 2020 at 4:07 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> You've removed the description about executing VACUUM with\n> DISABLE_PAGE_SKIPPING option on the target relation after using\n> pg_surgery functions from the doc but I guess it’s better to recommend\n> that in the doc for safety. Could you please tell me the reason for\n> removing that?\n\nWell, I think that was added because there wasn't code to clear the\nvisibility map bits, either page-level in the map, but we added code\nfor that, so now I don't really see why it's necessary or even\ndesirable.\n\nHere are a few example scenarios:\n\n1. My table is not corrupt. For no particular reason, I force-freeze\nor force-kill a tuple which is neither dead nor all-visible.\nConcurrent queries might return wrong answers, but the table is not\ncorrupt. It does not require VACUUM and would not benefit from it.\nActually, it doesn't need anything at all.\n\n2. My table is not corrupt. For no particular reason, I force-freeze a\ntuple which is dead. I believe it's possible that the index entries\nfor that tuple might be gone already, but VACUUM won't fix that.\nREINDEX or a table rewrite would, though. It's also possible, if the\ndead tuple was added by an aborted transaction which added columns to\nthe table, that the tuple might have been created using a tuple\ndescriptor that differs from the table's current tuple descriptor. If\nso, I think scanning the table could produce a crash. VACUUM won't fix\nthis, either. I would need to delete or force-kill the offending\ntuple.\n\n3. I have one or more tuples in my table that are intact except that\nthey have garbage values for xmin, resulting in VACUUM failure or\npossibly even SELECT failure if the CLOG entries are also missing. I\nforce-kill or force-freeze them. If by chance the affected tuples were\nalso omitted from one or more indexes, a REINDEX or table rewrite is\nneeded to fix them, but a VACUUM will not help. On the other hand, if\nthose tuples are present in the indexes, there's no remaining problem\nand VACUUM is not needed for the purpose of restoring the integrity of\nthe table. If the problem has been ongoing for a while, VACUUM might\nbe needed to advance relfrozenxid, but that doesn't require\nDISABLE_PAGE_SKIPPING.\n\n4. I have some pages in my table that have incorrect visibility map\nbits. In this case, I need VACUUM (DISABLE_PAGE_SKIPPING). However, I\ndon't need the functions we're talking about here at all unless I also\nhave tuples with corrupted visibility information. If I do happen to\nhave both tuples with corrupted visibility information and also pages\nwith incorrect visibility map bits, then I suppose I need both these\ntools and also VACUUM (DISABLE_PAGE_SKIPPING). Probably, I'll want to\ndo the VACUUM second. But, if I happened to do the VACUUM first and\nthen use these functions afterward, the worst thing that could happen\nis that I might end up with a some dead tuples that could've gotten\nremoved faster if I'd switched the order. And that's not a disaster.\n\nBasically, I can see no real reason to recommend VACUUM\n(DISABLE_PAGE_SKIPPING) here. There are problems that can be fixed\nwith that command, and there are problems that can be fixed by this\nmethod, but they are mostly independent of each other. 
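\n\nJust to make that concrete with a purely illustrative sketch (the table name and the TID below are made up, not taken from any real report), the two kinds of repair are separate commands:\n\n    -- surgery on one specific corrupted tuple\n    select heap_force_kill('some_damaged_table'::regclass, ARRAY['(0,1)']::tid[]);\n\n    -- independently, rebuild any incorrect visibility map bits by scanning every page\n    vacuum (disable_page_skipping) some_damaged_table;\n\nNeither command implies that the other one is needed.\n\n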
We should not\nrecommend that people run VACUUM \"just in case.\" That kind of fuzzy\nthinking seems relatively prevalent already, and it leads to people\nspending a lot of time running slow maintenance commands that do\nnothing to help them, and which occasionally make things worse.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 28 Aug 2020 10:38:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Fri, 28 Aug 2020 at 23:39, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Aug 28, 2020 at 4:07 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > You've removed the description about executing VACUUM with\n> > DISABLE_PAGE_SKIPPING option on the target relation after using\n> > pg_surgery functions from the doc but I guess it’s better to recommend\n> > that in the doc for safety. Could you please tell me the reason for\n> > removing that?\n>\n> Well, I think that was added because there wasn't code to clear the\n> visibility map bits, either page-level in the map, but we added code\n> for that, so now I don't really see why it's necessary or even\n> desirable.\n>\n> Here are a few example scenarios:\n>\n> 1. My table is not corrupt. For no particular reason, I force-freeze\n> or force-kill a tuple which is neither dead nor all-visible.\n> Concurrent queries might return wrong answers, but the table is not\n> corrupt. It does not require VACUUM and would not benefit from it.\n> Actually, it doesn't need anything at all.\n>\n> 2. My table is not corrupt. For no particular reason, I force-freeze a\n> tuple which is dead. I believe it's possible that the index entries\n> for that tuple might be gone already, but VACUUM won't fix that.\n> REINDEX or a table rewrite would, though. It's also possible, if the\n> dead tuple was added by an aborted transaction which added columns to\n> the table, that the tuple might have been created using a tuple\n> descriptor that differs from the table's current tuple descriptor. If\n> so, I think scanning the table could produce a crash. VACUUM won't fix\n> this, either. I would need to delete or force-kill the offending\n> tuple.\n>\n> 3. I have one or more tuples in my table that are intact except that\n> they have garbage values for xmin, resulting in VACUUM failure or\n> possibly even SELECT failure if the CLOG entries are also missing. I\n> force-kill or force-freeze them. If by chance the affected tuples were\n> also omitted from one or more indexes, a REINDEX or table rewrite is\n> needed to fix them, but a VACUUM will not help. On the other hand, if\n> those tuples are present in the indexes, there's no remaining problem\n> and VACUUM is not needed for the purpose of restoring the integrity of\n> the table. If the problem has been ongoing for a while, VACUUM might\n> be needed to advance relfrozenxid, but that doesn't require\n> DISABLE_PAGE_SKIPPING.\n>\n> 4. I have some pages in my table that have incorrect visibility map\n> bits. In this case, I need VACUUM (DISABLE_PAGE_SKIPPING). However, I\n> don't need the functions we're talking about here at all unless I also\n> have tuples with corrupted visibility information. 
If I do happen to\n> have both tuples with corrupted visibility information and also pages\n> with incorrect visibility map bits, then I suppose I need both these\n> tools and also VACUUM (DISABLE_PAGE_SKIPPING). Probably, I'll want to\n> do the VACUUM second. But, if I happened to do the VACUUM first and\n> then use these functions afterward, the worst thing that could happen\n> is that I might end up with a some dead tuples that could've gotten\n> removed faster if I'd switched the order. And that's not a disaster.\n>\n> Basically, I can see no real reason to recommend VACUUM\n> (DISABLE_PAGE_SKIPPING) here. There are problems that can be fixed\n> with that command, and there are problems that can be fixed by this\n> method, but they are mostly independent of each other. We should not\n> recommend that people run VACUUM \"just in case.\" That kind of fuzzy\n> thinking seems relatively prevalent already, and it leads to people\n> spending a lot of time running slow maintenance commands that do\n> nothing to help them, and which occasionally make things worse.\n>\n\nThank you for your explanation. That very makes sense to me.\n\nIf vacuum could fix the particular kind of problem by using together\nwith pg_surgery we could recommend using vacuum. But I agree that the\ncorruption of heap table is not the case.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 1 Sep 2020 18:16:06 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Fri, Aug 28, 2020 at 5:55 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> Please have a look into the attached patch for the changes and let me know for any other concerns. Thank you.\n\nI have committed this version.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 10 Sep 2020 11:21:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Thu, Sep 10, 2020 at 11:21:02AM -0400, Robert Haas wrote:\n> On Fri, Aug 28, 2020 at 5:55 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > Please have a look into the attached patch for the changes and let me know for any other concerns. Thank you.\n> \n> I have committed this version.\n\nThanks ; I marked it as such in CF app.\nhttps://commitfest.postgresql.org/29/2700/\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 10 Sep 2020 12:54:24 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... 
from before relfrozenxid ...\"" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I have committed this version.\n\nThis failure says that the test case is not entirely stable:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2020-09-12%2005%3A13%3A12\n\ndiff -U3 /home/nm/farm/gcc64/HEAD/pgsql.build/contrib/pg_surgery/expected/heap_surgery.out /home/nm/farm/gcc64/HEAD/pgsql.build/contrib/pg_surgery/results/heap_surgery.out\n--- /home/nm/farm/gcc64/HEAD/pgsql.build/contrib/pg_surgery/expected/heap_surgery.out\t2020-09-11 06:31:36.000000000 +0000\n+++ /home/nm/farm/gcc64/HEAD/pgsql.build/contrib/pg_surgery/results/heap_surgery.out\t2020-09-12 11:40:26.000000000 +0000\n@@ -116,7 +116,6 @@\n vacuum freeze htab2;\n -- unused TIDs should be skipped\n select heap_force_kill('htab2'::regclass, ARRAY['(0, 2)']::tid[]);\n- NOTICE: skipping tid (0, 2) for relation \"htab2\" because it is marked unused\n heap_force_kill \n -----------------\n \n\nsungazer's first run after pg_surgery went in was successful, so it's\nnot a hard failure. I'm guessing that it's timing dependent.\n\nThe most obvious theory for the cause is that what VACUUM does with\na tuple depends on whether the tuple's xmin is below global xmin,\nand a concurrent autovacuum could very easily be holding back global\nxmin. While I can't easily get autovac to run at just the right\ntime, I did verify that a concurrent regular session holding back\nglobal xmin produces the symptom seen above. (To replicate, insert\n\"select pg_sleep(...)\" in heap_surgery.sql before \"-- now create an unused\nline pointer\"; run make installcheck; and use the delay to connect\nto the database manually, start a serializable transaction, and do\nany query to acquire a snapshot.)\n\nI suggest that the easiest way to make this test reliable is to\nmake the test tables be temp tables (which allows dropping the\nautovacuum_enabled = off property, too). In the wake of commit\na7212be8b, that should guarantee that vacuum has stable tuple-level\nbehavior regardless of what is happening concurrently.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Sep 2020 18:00:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Sun, Sep 13, 2020 at 3:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I have committed this version.\n>\n> This failure says that the test case is not entirely stable:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2020-09-12%2005%3A13%3A12\n>\n> diff -U3 /home/nm/farm/gcc64/HEAD/pgsql.build/contrib/pg_surgery/expected/heap_surgery.out /home/nm/farm/gcc64/HEAD/pgsql.build/contrib/pg_surgery/results/heap_surgery.out\n> --- /home/nm/farm/gcc64/HEAD/pgsql.build/contrib/pg_surgery/expected/heap_surgery.out 2020-09-11 06:31:36.000000000 +0000\n> +++ /home/nm/farm/gcc64/HEAD/pgsql.build/contrib/pg_surgery/results/heap_surgery.out 2020-09-12 11:40:26.000000000 +0000\n> @@ -116,7 +116,6 @@\n> vacuum freeze htab2;\n> -- unused TIDs should be skipped\n> select heap_force_kill('htab2'::regclass, ARRAY['(0, 2)']::tid[]);\n> - NOTICE: skipping tid (0, 2) for relation \"htab2\" because it is marked unused\n> heap_force_kill\n> -----------------\n>\n>\n> sungazer's first run after pg_surgery went in was successful, so it's\n> not a hard failure. 
I'm guessing that it's timing dependent.\n>\n> The most obvious theory for the cause is that what VACUUM does with\n> a tuple depends on whether the tuple's xmin is below global xmin,\n> and a concurrent autovacuum could very easily be holding back global\n> xmin. While I can't easily get autovac to run at just the right\n> time, I did verify that a concurrent regular session holding back\n> global xmin produces the symptom seen above. (To replicate, insert\n> \"select pg_sleep(...)\" in heap_surgery.sql before \"-- now create an unused\n> line pointer\"; run make installcheck; and use the delay to connect\n> to the database manually, start a serializable transaction, and do\n> any query to acquire a snapshot.)\n>\n\nThanks for reporting. I'm able to reproduce the issue by creating some\ndelay just before \"-- now create an unused line pointer\" and use the\ndelay to start a new session either with repeatable read or\nserializable transaction isolation level and run some query on the\ntest table. To fix this, as you suggested I've converted the test\ntable to the temp table. Attached is the patch with the changes.\nPlease have a look and let me know about any concerns.\n\nThanks,\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com", "msg_date": "Mon, 14 Sep 2020 15:56:07 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Sep 14, 2020 at 6:26 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> Thanks for reporting. I'm able to reproduce the issue by creating some\n> delay just before \"-- now create an unused line pointer\" and use the\n> delay to start a new session either with repeatable read or\n> serializable transaction isolation level and run some query on the\n> test table. To fix this, as you suggested I've converted the test\n> table to the temp table. Attached is the patch with the changes.\n> Please have a look and let me know about any concerns.\n\nTom, do you have any concerns about this fix?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 15 Sep 2020 15:36:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Sep 14, 2020 at 6:26 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>> Thanks for reporting. I'm able to reproduce the issue by creating some\n>> delay just before \"-- now create an unused line pointer\" and use the\n>> delay to start a new session either with repeatable read or\n>> serializable transaction isolation level and run some query on the\n>> test table. To fix this, as you suggested I've converted the test\n>> table to the temp table. Attached is the patch with the changes.\n>> Please have a look and let me know about any concerns.\n\n> Tom, do you have any concerns about this fix?\n\nIt seems OK as far as it goes. Two thoughts:\n\n* Do we need a comment in the test pointing out that the table must be\ntemp to ensure that we get stable vacuum results? Or will the commit\nlog message be enough documentation?\n\n* Should any of the other tables in the test be converted to temp?\nI see that the other test cases are kluging around related issues\nby dint of never committing their tables at all. 
It's not clear\nto me how badly those test cases have been distorted by that, or\nwhether it means they're testing less-than-typical situations.\n\nAnyway, if you're satisfied with leaving the other cases as-is,\nI have no objections.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Sep 2020 15:54:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, Sep 16, 2020 at 1:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Mon, Sep 14, 2020 at 6:26 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >> Thanks for reporting. I'm able to reproduce the issue by creating some\n> >> delay just before \"-- now create an unused line pointer\" and use the\n> >> delay to start a new session either with repeatable read or\n> >> serializable transaction isolation level and run some query on the\n> >> test table. To fix this, as you suggested I've converted the test\n> >> table to the temp table. Attached is the patch with the changes.\n> >> Please have a look and let me know about any concerns.\n>\n> > Tom, do you have any concerns about this fix?\n>\n> It seems OK as far as it goes. Two thoughts:\n>\n> * Do we need a comment in the test pointing out that the table must be\n> temp to ensure that we get stable vacuum results? Or will the commit\n> log message be enough documentation?\n>\n\nI'll add a note for this.\n\n> * Should any of the other tables in the test be converted to temp?\n> I see that the other test cases are kluging around related issues\n> by dint of never committing their tables at all. It's not clear\n> to me how badly those test cases have been distorted by that, or\n> whether it means they're testing less-than-typical situations.\n>\n\nAre you trying to say that we can achieve the things being done in\ntest-case 1 and 2 by having a single temp table and we should aim for\nit because it will make the test-case more efficient and easy to\nmaintain? If so, I will try to do the necessary changes and submit a\nnew patch for it. please confirm.\n\nThanks,\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Sep 2020 08:47:00 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Ashutosh Sharma <ashu.coek88@gmail.com> writes:\n> On Wed, Sep 16, 2020 at 1:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * Should any of the other tables in the test be converted to temp?\n\n> Are you trying to say that we can achieve the things being done in\n> test-case 1 and 2 by having a single temp table and we should aim for\n> it because it will make the test-case more efficient and easy to\n> maintain?\n\nWell, I'm just looking at the comment that says the reason for the\nbegin/rollback structure is to keep autovacuum's hands off the table.\nIn most if not all of the other places where we need that, the preferred\nmethod is to make the table temp or mark it with (autovacuum = off).\nWhile this way isn't wrong exactly, nor inefficient, it does seem\na little restrictive. For instance, you can't easily test cases that\ninvolve intentional errors.\n\nAnother point is that we have a few optimizations that apply to tables\ncreated in the current transaction. 
I'm not sure whether any of them\nwould fire in this test case, but if they do (now or in the future)\nthat might mean you aren't testing the usual scenario for use of\npg_surgery, which is surely not going to be a new-in-transaction\ntable. (That might be an argument for preferring autovacuum = off\nover a temp table, too.)\n\nLike I said, I don't have a big problem with leaving the rest of the\ntest as-is. It just seems to be doing things in an unusual way for\nno very good reason.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Sep 2020 23:44:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, Sep 16, 2020 at 9:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Ashutosh Sharma <ashu.coek88@gmail.com> writes:\n> > On Wed, Sep 16, 2020 at 1:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> * Should any of the other tables in the test be converted to temp?\n>\n> > Are you trying to say that we can achieve the things being done in\n> > test-case 1 and 2 by having a single temp table and we should aim for\n> > it because it will make the test-case more efficient and easy to\n> > maintain?\n>\n> Well, I'm just looking at the comment that says the reason for the\n> begin/rollback structure is to keep autovacuum's hands off the table.\n> In most if not all of the other places where we need that, the preferred\n> method is to make the table temp or mark it with (autovacuum = off).\n> While this way isn't wrong exactly, nor inefficient, it does seem\n> a little restrictive. For instance, you can't easily test cases that\n> involve intentional errors.\n>\n> Another point is that we have a few optimizations that apply to tables\n> created in the current transaction. I'm not sure whether any of them\n> would fire in this test case, but if they do (now or in the future)\n> that might mean you aren't testing the usual scenario for use of\n> pg_surgery, which is surely not going to be a new-in-transaction\n> table. (That might be an argument for preferring autovacuum = off\n> over a temp table, too.)\n>\n\nI agree with you on both the above points. I'll try to make the\nnecessary changes to address all your comments. Also, I'd prefer\ncreating a normal heap table with autovacuum = off over the temp table\nfor the reasons you mentioned in the second point.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Sep 2020 10:40:26 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... 
from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, Sep 16, 2020 at 10:40 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> On Wed, Sep 16, 2020 at 9:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Ashutosh Sharma <ashu.coek88@gmail.com> writes:\n> > > On Wed, Sep 16, 2020 at 1:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> * Should any of the other tables in the test be converted to temp?\n> >\n> > > Are you trying to say that we can achieve the things being done in\n> > > test-case 1 and 2 by having a single temp table and we should aim for\n> > > it because it will make the test-case more efficient and easy to\n> > > maintain?\n> >\n> > Well, I'm just looking at the comment that says the reason for the\n> > begin/rollback structure is to keep autovacuum's hands off the table.\n> > In most if not all of the other places where we need that, the preferred\n> > method is to make the table temp or mark it with (autovacuum = off).\n> > While this way isn't wrong exactly, nor inefficient, it does seem\n> > a little restrictive. For instance, you can't easily test cases that\n> > involve intentional errors.\n> >\n> > Another point is that we have a few optimizations that apply to tables\n> > created in the current transaction. I'm not sure whether any of them\n> > would fire in this test case, but if they do (now or in the future)\n> > that might mean you aren't testing the usual scenario for use of\n> > pg_surgery, which is surely not going to be a new-in-transaction\n> > table. (That might be an argument for preferring autovacuum = off\n> > over a temp table, too.)\n> >\n>\n> I agree with you on both the above points. I'll try to make the\n> necessary changes to address all your comments. Also, I'd prefer\n> creating a normal heap table with autovacuum = off over the temp table\n> for the reasons you mentioned in the second point.\n>\n\nAttached is the patch with the changes suggested here. I've tried to\nuse a normal heap table with (autovacuum = off) wherever possible.\nPlease have a look and let me know for any other issues.\n\nThanks,\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com", "msg_date": "Wed, 16 Sep 2020 11:18:46 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, Sep 16, 2020 at 1:48 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> Attached is the patch with the changes suggested here. I've tried to\n> use a normal heap table with (autovacuum = off) wherever possible.\n> Please have a look and let me know for any other issues.\n\nI think the comment needs some wordsmithing -- \"unlike other cases\" is\nnot that informative, and \"we get a stable vacuum results\" isn't\neither very clear or all that grammatical. If we're going to add a\ncomment add here, why not just \"use a temp table, so autovacuum can't\ninterfere\"?\n\nTom, I know that you often have strong feelings about the exact\nwording and details of this kind of stuff, so if you feel moved to\ncommit something that is fine with me. If not, I will take my best\nshot at it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 16 Sep 2020 11:13:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... 
from before relfrozenxid ...\"" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Tom, I know that you often have strong feelings about the exact\n> wording and details of this kind of stuff, so if you feel moved to\n> commit something that is fine with me. If not, I will take my best\n> shot at it.\n\nI'm not feeling terribly picky about it --- so it's all yours.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Sep 2020 11:48:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, Sep 16, 2020 at 11:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Tom, I know that you often have strong feelings about the exact\n> > wording and details of this kind of stuff, so if you feel moved to\n> > commit something that is fine with me. If not, I will take my best\n> > shot at it.\n>\n> I'm not feeling terribly picky about it --- so it's all yours.\n\nOK. After some more study of the thread and some more experimentation,\nI came up with the attached. I'll go ahead and commit this if nobody\nobjects.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 16 Sep 2020 14:34:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> OK. After some more study of the thread and some more experimentation,\n> I came up with the attached. I'll go ahead and commit this if nobody\n> objects.\n\nThis is OK by me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Sep 2020 14:44:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Thu, Sep 17, 2020 at 12:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > OK. After some more study of the thread and some more experimentation,\n> > I came up with the attached. I'll go ahead and commit this if nobody\n> > objects.\n>\n> This is OK by me.\n>\n\nLooks good to me too.\n\nThanks,\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Sep 2020 07:57:27 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Wed, Sep 16, 2020 at 10:27 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > This is OK by me.\n>\n> Looks good to me too.\n\nCool, thanks to both of you for looking. Committed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 18 Sep 2020 13:27:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Cool, thanks to both of you for looking. 
Committed.\n\nHmph, according to whelk this is worse not better:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=whelk&dt=2020-09-18%2017%3A42%3A11\n\nI'm at a loss to understand what's going on there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Sep 2020 18:32:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Sat, Sep 19, 2020 at 4:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Cool, thanks to both of you for looking. Committed.\n>\n> Hmph, according to whelk this is worse not better:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=whelk&dt=2020-09-18%2017%3A42%3A11\n>\n> I'm at a loss to understand what's going on there.\n>\n\nI think our assumption that changing the tests to have temp tables\nwill make them safe w.r.t concurrent activity doesn't seem to be\ncorrect. We do set OldestXmin for temp tables aggressive enough that\nit allows us to remove all dead tuples but the test case behavior lies\non whether we are able to prune the chain. AFAICS, we are using\ndifferent cut-offs in heap_page_prune when it is called via\nlazy_scan_heap. So that seems to be causing both the failures.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 19 Sep 2020 17:58:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> I think our assumption that changing the tests to have temp tables\n> will make them safe w.r.t concurrent activity doesn't seem to be\n> correct. We do set OldestXmin for temp tables aggressive enough that\n> it allows us to remove all dead tuples but the test case behavior lies\n> on whether we are able to prune the chain. AFAICS, we are using\n> different cut-offs in heap_page_prune when it is called via\n> lazy_scan_heap. So that seems to be causing both the failures.\n\nHm, reasonable theory.\n\nI was able to partially reproduce whelk's failure here. I got a\ncouple of cases of \"cannot freeze committed xmax\", which then leads\nto the second NOTICE diff; but I couldn't reproduce the first\nNOTICE diff. That was out of about a thousand tries :-( so it's not\nlooking like a promising thing to reproduce without modifying the test.\n\nI wonder whether \"cannot freeze committed xmax\" doesn't represent an\nactual bug, ie is a7212be8b setting the cutoff *too* aggressively?\nBut if so, why's it so hard to reproduce?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 19 Sep 2020 10:52:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "I wrote:\n> I was able to partially reproduce whelk's failure here. I got a\n> couple of cases of \"cannot freeze committed xmax\", which then leads\n> to the second NOTICE diff; but I couldn't reproduce the first\n> NOTICE diff. That was out of about a thousand tries :-( so it's not\n> looking like a promising thing to reproduce without modifying the test.\n\n... 
however, it's trivial to reproduce via manual interference,\nusing the same strategy discussed recently for another case:\nadd a pg_sleep at the start of the heap_surgery.sql script,\nrun \"make installcheck\", and while that's running start another\nsession in which you begin a serializable transaction, execute\nany old SELECT, and wait. AFAICT this reproduces all of whelk's\nsymptoms with 100% reliability.\n\nWith a little more effort, this could be automated by putting\nsome long-running transaction (likely, it needn't be any more\ncomplicated than \"select pg_sleep(10)\") in a second test script\nlaunched in parallel with heap_surgery.sql.\n\nSo this confirms the suspicion that the cause of the buildfarm\nfailures is a concurrently-open transaction, presumably from\nautovacuum. I don't have time to poke further right now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 19 Sep 2020 16:19:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "I wrote:\n> So this confirms the suspicion that the cause of the buildfarm\n> failures is a concurrently-open transaction, presumably from\n> autovacuum. I don't have time to poke further right now.\n\nI spent some more time analyzing this, and there seem to be two distinct\nissues:\n\n1. My patch a7212be8b does indeed have a problem. It will allow\nvacuum_set_xid_limits to compute freezeLimit = nextXid for a temp\ntable if freeze_min_age is zero (ie VACUUM FREEZE). If there's\nany concurrent transactions, this falls foul of\nheap_prepare_freeze_tuple's expectation that\n\n * NB: cutoff_xid *must* be <= the current global xmin, to ensure that any\n * XID older than it could neither be running nor seen as running by any\n * open transaction. This ensures that the replacement will not change\n * anyone's idea of the tuple state.\n\nThe \"cannot freeze committed xmax\" error message appears to be banking on\nthe assumption that we'd not reach heap_prepare_freeze_tuple for any\ncommitted-dead tuple unless its xmax is past the specified cutoff_xid.\n\n2. As Amit suspected, there's an inconsistency between pruneheap.c's\nrules for which tuples are removable and vacuum.c's rules for that.\nThis seems like a massive bug in its own right: what's the point of\npruneheap.c going to huge effort to decide whether it should keep a\ntuple if vacuum will then kill it anyway? I do not understand why\nwhoever put in the GlobalVisState stuff only applied it in pruneheap.c\nand not VACUUM proper.\n\nThese two points interact, in that we don't get to the \"cannot freeze\"\nfailure except when pruneheap has decided not to remove something that\nis removable according to VACUUM's rules. (VACUUM doesn't actually\nremove it, because lazy_scan_heap won't try to remove HeapOnly tuples\neven when it thinks they're HEAPTUPLE_DEAD; but then it tries to freeze\nthe tuple, and heap_prepare_freeze_tuple spits up.) However, if I revert\na7212be8b then the pg_surgery test still fails in the presence of a\nconcurrent transaction (both of the expected \"skipping TID\" notices\ndisappear). So reverting that patch wouldn't get us out of trouble.\n\nI think to move forward, we need to figure out what the freezing\nbehavior ought to be for temp tables. We could make it the same\nas it was before a7212be8b, which'd just require some more complexity\nin vacuum_set_xid_limits. 
However, that negates the idea that we'd\nlike VACUUM's behavior on a temp table to be fully independent of\nwhether concurrent transactions exist. I'd prefer to allow a7212be8b's\nbehavior to stand, but then it seems we need to lobotomize the error\ncheck in heap_prepare_freeze_tuple to some extent.\n\nIndependently of that, it seems like we need to fix things so that\nwhen pruneheap.c is called by vacuum, it makes EXACTLY the same\ndead-or-not-dead decisions that the main vacuum code makes. This\nbusiness with applying some GlobalVisState rule or other instead\nseems just as unsafe as can be.\n\nAFAICS, there is no chance of the existing pg_surgery regression test\nbeing fully stable if we don't fix both things.\n\nBTW, attached is a quick-hack patch to allow automated testing\nof this scenario, along the lines I sketched yesterday. This\ntest passes if you run the two scripts serially, but not when\nyou run them in parallel. I'm not proposing this for commit;\nit's a hack and its timing behavior is probably not stable enough\nfor the buildfarm. But it's pretty useful for poking at these\nissues.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 20 Sep 2020 13:13:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi Tom,\n\nOn Sun, Sep 20, 2020 at 10:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > So this confirms the suspicion that the cause of the buildfarm\n> > failures is a concurrently-open transaction, presumably from\n> > autovacuum. I don't have time to poke further right now.\n>\n> I spent some more time analyzing this, and there seem to be two distinct\n> issues:\n>\n> 1. My patch a7212be8b does indeed have a problem. It will allow\n> vacuum_set_xid_limits to compute freezeLimit = nextXid for a temp\n> table if freeze_min_age is zero (ie VACUUM FREEZE). If there's\n> any concurrent transactions, this falls foul of\n> heap_prepare_freeze_tuple's expectation that\n>\n> * NB: cutoff_xid *must* be <= the current global xmin, to ensure that any\n> * XID older than it could neither be running nor seen as running by any\n> * open transaction. This ensures that the replacement will not change\n> * anyone's idea of the tuple state.\n>\n> The \"cannot freeze committed xmax\" error message appears to be banking on\n> the assumption that we'd not reach heap_prepare_freeze_tuple for any\n> committed-dead tuple unless its xmax is past the specified cutoff_xid.\n>\n> 2. As Amit suspected, there's an inconsistency between pruneheap.c's\n> rules for which tuples are removable and vacuum.c's rules for that.\n> This seems like a massive bug in its own right: what's the point of\n> pruneheap.c going to huge effort to decide whether it should keep a\n> tuple if vacuum will then kill it anyway? I do not understand why\n> whoever put in the GlobalVisState stuff only applied it in pruneheap.c\n> and not VACUUM proper.\n>\n> These two points interact, in that we don't get to the \"cannot freeze\"\n> failure except when pruneheap has decided not to remove something that\n> is removable according to VACUUM's rules. (VACUUM doesn't actually\n> remove it, because lazy_scan_heap won't try to remove HeapOnly tuples\n> even when it thinks they're HEAPTUPLE_DEAD; but then it tries to freeze\n> the tuple, and heap_prepare_freeze_tuple spits up.) 
However, if I revert\n> a7212be8b then the pg_surgery test still fails in the presence of a\n> concurrent transaction (both of the expected \"skipping TID\" notices\n> disappear). So reverting that patch wouldn't get us out of trouble.\n>\n> I think to move forward, we need to figure out what the freezing\n> behavior ought to be for temp tables. We could make it the same\n> as it was before a7212be8b, which'd just require some more complexity\n> in vacuum_set_xid_limits. However, that negates the idea that we'd\n> like VACUUM's behavior on a temp table to be fully independent of\n> whether concurrent transactions exist. I'd prefer to allow a7212be8b's\n> behavior to stand, but then it seems we need to lobotomize the error\n> check in heap_prepare_freeze_tuple to some extent.\n>\n> Independently of that, it seems like we need to fix things so that\n> when pruneheap.c is called by vacuum, it makes EXACTLY the same\n> dead-or-not-dead decisions that the main vacuum code makes. This\n> business with applying some GlobalVisState rule or other instead\n> seems just as unsafe as can be.\n>\n> AFAICS, there is no chance of the existing pg_surgery regression test\n> being fully stable if we don't fix both things.\n>\n\n From the explanation you provided above it seems like the test-case\nfor pg_surgery module is failing because there is some issue with the\nchanges done in a7212be8b commit (shown below). In other words, I\nbelieve that the test-case for pg_surgery has actually detected an\nissue in this commit.\n\ncommit a7212be8b9e0885ee769e8c55f99ef742cda487b\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Tue Sep 1 18:37:12 2020 -0400\n\n Set cutoff xmin more aggressively when vacuuming a temporary table.\n\n ....\n\nSo, do you mean to say that if the issues related to temp tables\ninduced by the above commit is fixed, it will make the regression test\nfor pg_surgery stable?\n\nPlease let me know if I am missing something here. Thank you.\n\n> BTW, attached is a quick-hack patch to allow automated testing\n> of this scenario, along the lines I sketched yesterday. This\n> test passes if you run the two scripts serially, but not when\n> you run them in parallel. I'm not proposing this for commit;\n> it's a hack and its timing behavior is probably not stable enough\n> for the buildfarm. But it's pretty useful for poking at these\n> issues.\n>\n\nYeah, understood, thanks for sharing this.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Sep 2020 08:09:20 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Sun, Sep 20, 2020 at 10:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > So this confirms the suspicion that the cause of the buildfarm\n> > failures is a concurrently-open transaction, presumably from\n> > autovacuum. I don't have time to poke further right now.\n>\n..\n> 2. As Amit suspected, there's an inconsistency between pruneheap.c's\n> rules for which tuples are removable and vacuum.c's rules for that.\n> This seems like a massive bug in its own right: what's the point of\n> pruneheap.c going to huge effort to decide whether it should keep a\n> tuple if vacuum will then kill it anyway? 
I do not understand why\n> whoever put in the GlobalVisState stuff only applied it in pruneheap.c\n> and not VACUUM proper.\n>\n> These two points interact, in that we don't get to the \"cannot freeze\"\n> failure except when pruneheap has decided not to remove something that\n> is removable according to VACUUM's rules. (VACUUM doesn't actually\n> remove it, because lazy_scan_heap won't try to remove HeapOnly tuples\n> even when it thinks they're HEAPTUPLE_DEAD; but then it tries to freeze\n> the tuple, and heap_prepare_freeze_tuple spits up.) However, if I revert\n> a7212be8b then the pg_surgery test still fails in the presence of a\n> concurrent transaction (both of the expected \"skipping TID\" notices\n> disappear). So reverting that patch wouldn't get us out of trouble.\n>\n> I think to move forward, we need to figure out what the freezing\n> behavior ought to be for temp tables. We could make it the same\n> as it was before a7212be8b, which'd just require some more complexity\n> in vacuum_set_xid_limits. However, that negates the idea that we'd\n> like VACUUM's behavior on a temp table to be fully independent of\n> whether concurrent transactions exist. I'd prefer to allow a7212be8b's\n> behavior to stand, but then it seems we need to lobotomize the error\n> check in heap_prepare_freeze_tuple to some extent.\n>\n> Independently of that, it seems like we need to fix things so that\n> when pruneheap.c is called by vacuum, it makes EXACTLY the same\n> dead-or-not-dead decisions that the main vacuum code makes. This\n> business with applying some GlobalVisState rule or other instead\n> seems just as unsafe as can be.\n>\n\nYeah, on a quick look it seems before commit dc7420c2c9 the\npruneheap.c and the main Vacuum code use to make the same decision and\nthat is commit which has introduced GlobalVisState stuff.\n\n> AFAICS, there is no chance of the existing pg_surgery regression test\n> being fully stable if we don't fix both things.\n>\n\nWhat if ensure that it runs with autovacuum = off and there is no\nparallel test running? I am not sure about the second part but if we\ncan do that then the test will be probably stable.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Sep 2020 09:01:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Sun, Sep 20, 2020 at 10:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> AFAICS, there is no chance of the existing pg_surgery regression test\n>> being fully stable if we don't fix both things.\n\n> What if ensure that it runs with autovacuum = off and there is no\n> parallel test running? I am not sure about the second part but if we\n> can do that then the test will be probably stable.\n\nThen it'll not be usable under \"make installcheck\", which is not\nvery nice. It's also arguable that you aren't testing pg_surgery\nunder real-world conditions if you do it like that.\n\nMoreover, I think that both of these points need to be addressed\nanyway, as they represent bugs that are reachable independently\nof pg_surgery. Admittedly, we do not have a test case that\nproves that the inconsistency between pruneheap and vacuum has\nany bad effects in the absence of a7212be8b. 
But do you really\nwant to bet that there are none?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Sep 2020 10:27:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Sun, Sep 20, 2020 at 1:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> 1. My patch a7212be8b does indeed have a problem. It will allow\n> vacuum_set_xid_limits to compute freezeLimit = nextXid for a temp\n> table if freeze_min_age is zero (ie VACUUM FREEZE). If there's\n> any concurrent transactions, this falls foul of\n> heap_prepare_freeze_tuple's expectation that\n>\n> * NB: cutoff_xid *must* be <= the current global xmin, to ensure that any\n> * XID older than it could neither be running nor seen as running by any\n> * open transaction. This ensures that the replacement will not change\n> * anyone's idea of the tuple state.\n>\n> The \"cannot freeze committed xmax\" error message appears to be banking on\n> the assumption that we'd not reach heap_prepare_freeze_tuple for any\n> committed-dead tuple unless its xmax is past the specified cutoff_xid.\n>\n> 2. As Amit suspected, there's an inconsistency between pruneheap.c's\n> rules for which tuples are removable and vacuum.c's rules for that.\n> This seems like a massive bug in its own right: what's the point of\n> pruneheap.c going to huge effort to decide whether it should keep a\n> tuple if vacuum will then kill it anyway? I do not understand why\n> whoever put in the GlobalVisState stuff only applied it in pruneheap.c\n> and not VACUUM proper.\n\nI am not sure I fully understand why you're contrasting pruneheap.c\nwith vacuum here, because vacuum just does HOT pruning to remove dead\ntuples - maybe calling the relevant functions with different\narguments, but it doesn't have its own independent logic for that.\n\nThe key point is that the freezing code isn't, or at least\nhistorically wasn't, very smart about dead tuples. For example, I\nthink if you told it to freeze something that was dead it would just\ndo it, which is obviously bad. And that's why Andres stuck those\nsanity checks in there. But it's still pretty fragile. I think perhaps\nthe pruning code should be rewritten in such a way that it can be\ncombined with the code that freezes and marks pages all-visible, so\nthat there's not so much action at a distance, but such an endeavor is\nin itself pretty scary, and certainly not back-patchable.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 21 Sep 2020 14:04:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sun, Sep 20, 2020 at 1:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> 2. As Amit suspected, there's an inconsistency between pruneheap.c's\n>> rules for which tuples are removable and vacuum.c's rules for that.\n>> This seems like a massive bug in its own right: what's the point of\n>> pruneheap.c going to huge effort to decide whether it should keep a\n>> tuple if vacuum will then kill it anyway? 
I do not understand why\n>> whoever put in the GlobalVisState stuff only applied it in pruneheap.c\n>> and not VACUUM proper.\n\n> I am not sure I fully understand why you're contrasting pruneheap.c\n> with vacuum here, because vacuum just does HOT pruning to remove dead\n> tuples - maybe calling the relevant functions with different\n> arguments, but it doesn't have its own independent logic for that.\n\nRight, but what we end up with is that the very same tuple xmin and\nxmax might result in pruning/deletion, or not, depending on whether\nit's part of a HOT chain or not. That's at best pretty weird, and\nat worst it means that corner-case bugs in other places are triggered\nin only one of the two scenarios ... which is what we have here.\n\n> The key point is that the freezing code isn't, or at least\n> historically wasn't, very smart about dead tuples. For example, I\n> think if you told it to freeze something that was dead it would just\n> do it, which is obviously bad. And that's why Andres stuck those\n> sanity checks in there. But it's still pretty fragile. I think perhaps\n> the pruning code should be rewritten in such a way that it can be\n> combined with the code that freezes and marks pages all-visible, so\n> that there's not so much action at a distance, but such an endeavor is\n> in itself pretty scary, and certainly not back-patchable.\n\nNot sure. The pruning code is trying to serve two masters, that is\nboth VACUUM and on-the-fly cleanup during ordinary queries. If you\ntry to merge it with other tasks that VACUUM does, you're going to\nhave a mess for the second usage. I fear there's going to be pretty\nstrong conservation of cruft either way.\n\nFWIW, weakening the sanity checks in heap_prepare_freeze_tuple is\n*not* my preferred fix here. But it'll take some work in other\nplaces to preserve them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Sep 2020 14:21:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Sep 21, 2020 at 2:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Right, but what we end up with is that the very same tuple xmin and\n> xmax might result in pruning/deletion, or not, depending on whether\n> it's part of a HOT chain or not. That's at best pretty weird, and\n> at worst it means that corner-case bugs in other places are triggered\n> in only one of the two scenarios ... which is what we have here.\n\nI'm not sure I really understand how that's happening, because surely\nHOT chains and non-HOT chains are pruned by the same code, but it\ndoesn't sound good.\n\n> FWIW, weakening the sanity checks in heap_prepare_freeze_tuple is\n> *not* my preferred fix here. But it'll take some work in other\n> places to preserve them.\n\nMake sense.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 21 Sep 2020 16:02:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi,\n\nOn 2020-09-21 16:02:29 -0400, Robert Haas wrote:\n> On Mon, Sep 21, 2020 at 2:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Right, but what we end up with is that the very same tuple xmin and\n> > xmax might result in pruning/deletion, or not, depending on whether\n> > it's part of a HOT chain or not. 
That's at best pretty weird, and\n> > at worst it means that corner-case bugs in other places are triggered\n> > in only one of the two scenarios ... which is what we have here.\n> \n> I'm not sure I really understand how that's happening, because surely\n> HOT chains and non-HOT chains are pruned by the same code, but it\n> doesn't sound good.\n\nNot necessarily, unfortunately:\n\n case HEAPTUPLE_DEAD:\n\n /*\n * Ordinarily, DEAD tuples would have been removed by\n * heap_page_prune(), but it's possible that the tuple\n * state changed since heap_page_prune() looked. In\n * particular an INSERT_IN_PROGRESS tuple could have\n * changed to DEAD if the inserter aborted. So this\n * cannot be considered an error condition.\n *\n * If the tuple is HOT-updated then it must only be\n * removed by a prune operation; so we keep it just as if\n * it were RECENTLY_DEAD. Also, if it's a heap-only\n * tuple, we choose to keep it, because it'll be a lot\n * cheaper to get rid of it in the next pruning pass than\n * to treat it like an indexed tuple. Finally, if index\n * cleanup is disabled, the second heap pass will not\n * execute, and the tuple will not get removed, so we must\n * treat it like any other dead tuple that we choose to\n * keep.\n *\n * If this were to happen for a tuple that actually needed\n * to be deleted, we'd be in trouble, because it'd\n * possibly leave a tuple below the relation's xmin\n * horizon alive. heap_prepare_freeze_tuple() is prepared\n * to detect that case and abort the transaction,\n * preventing corruption.\n */\n if (HeapTupleIsHotUpdated(&tuple) ||\n HeapTupleIsHeapOnly(&tuple) ||\n params->index_cleanup == VACOPT_TERNARY_DISABLED)\n nkeep += 1;\n else\n tupgone = true; /* we can delete the tuple */\n all_visible = false;\n\n\nSo if e.g. a transaction aborts between the heap_page_prune and this\ncheck the pruning behaviour depends on whether the tuple is part of a\nHOT chain or not.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Sep 2020 13:11:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Sep 21, 2020 at 2:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Right, but what we end up with is that the very same tuple xmin and\n>> xmax might result in pruning/deletion, or not, depending on whether\n>> it's part of a HOT chain or not. That's at best pretty weird, and\n>> at worst it means that corner-case bugs in other places are triggered\n>> in only one of the two scenarios ... which is what we have here.\n\n> I'm not sure I really understand how that's happening, because surely\n> HOT chains and non-HOT chains are pruned by the same code, but it\n> doesn't sound good.\n\nNo, they're not. lazy_scan_heap() will never remove a tuple that\nis HeapTupleIsHotUpdated or HeapTupleIsHeapOnly, even if it thinks\nit's DEAD -- cf. vacuumlazy.c, about line 1350. So tuples in\na HOT chain are deleted exactly when pruneheap.c sees fit to do so.\nOTOH, for tuples not in a HOT chain, the decision is (I believe)\nentirely on lazy_scan_heap(). And the core of my complaint is that\npruneheap.c's decisions about what is DEAD are not reliably identical\nto what HeapTupleSatisfiesVacuum thinks.\n\nI don't mind if a free-standing prune operation has its own rules,\nbut when it's invoked by VACUUM it ought to follow VACUUM's rules about\nwhat is dead or alive. 
What remains unclear at this point is whether\nwe ought to import some of the intelligence added by the GlobalVisState\npatch into VACUUM's behavior, or just dumb down pruneheap.c so that\nit applies exactly the HeapTupleSatisfiesVacuum rules when invoked\nby VACUUM.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Sep 2020 16:22:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi,\n\nOn 2020-09-20 13:13:16 -0400, Tom Lane wrote:\n> 2. As Amit suspected, there's an inconsistency between pruneheap.c's\n> rules for which tuples are removable and vacuum.c's rules for that.\n> This seems like a massive bug in its own right: what's the point of\n> pruneheap.c going to huge effort to decide whether it should keep a\n> tuple if vacuum will then kill it anyway? I do not understand why\n> whoever put in the GlobalVisState stuff only applied it in pruneheap.c\n> and not VACUUM proper.\n\nThe reason for that is that the GlobalVisState stuff is computed\nheuristically (and then re-checked if that's not sufficient to prune a\ntuple, unless already done so). That's done so GetSnapshotData() doesn't\nhave to look at each backends ->xmin, which is quite a massive speedup\nat higher connection counts, as each backends ->xmin changes much more\noften than each backend's xid.\n\nBut for VACUUM we need to do the accurate scan of the procarray anyway,\nbecause we need an accurate value for things other than HOT pruning\ndecisions.\n\nWhat do you exactly mean with the \"going to huge effort to decide\" bit?\n\n\n> I think to move forward, we need to figure out what the freezing\n> behavior ought to be for temp tables. We could make it the same\n> as it was before a7212be8b, which'd just require some more complexity\n> in vacuum_set_xid_limits. However, that negates the idea that we'd\n> like VACUUM's behavior on a temp table to be fully independent of\n> whether concurrent transactions exist. I'd prefer to allow a7212be8b's\n> behavior to stand, but then it seems we need to lobotomize the error\n> check in heap_prepare_freeze_tuple to some extent.\n\nI think that's an argument for what I suggested elsewhere, which is that\nwe should move the logic for a different horizon for temp tables out of\nvacuum_set_xid_limits, and into procarray.\n\n\n> Independently of that, it seems like we need to fix things so that\n> when pruneheap.c is called by vacuum, it makes EXACTLY the same\n> dead-or-not-dead decisions that the main vacuum code makes. This\n> business with applying some GlobalVisState rule or other instead\n> seems just as unsafe as can be.\n\nIt's not great, I agree. Not sure there is a super nice answer\nthough. Note that, even before my changes, vacuumlazy can decide\ndifferently than pruning whether a tuple is live. E.g. when an inserting\ntransaction aborts. 
That's pretty much unavoidable as long as we have\nmultiple HTSV calls for a tuple, since none of our locking can (nor\nshould) prevent concurrent transactions from aborting.\n\nBefore your new code avoiding the GetOldestNonRemovableTransactionId()\ncall for temp tables, the GlobalVis* can never be more pessimistic than\ndecisions based ona prior GetOldestNonRemovableTransactionId call (as\nthat internally updates the heuristic horizons used by GlobalVis).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Sep 2020 13:32:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The reason for that is that the GlobalVisState stuff is computed\n> heuristically (and then re-checked if that's not sufficient to prune a\n> tuple, unless already done so). That's done so GetSnapshotData() doesn't\n> have to look at each backends ->xmin, which is quite a massive speedup\n> at higher connection counts, as each backends ->xmin changes much more\n> often than each backend's xid.\n\nOK.\n\n> What do you exactly mean with the \"going to huge effort to decide\" bit?\n\nI'd looked at all the complexity around GlobalVisState, but failed to\nregister that it should be pretty cheap on a per-tuple basis. So never\nmind that complaint. The point that remains is just that it's different\nfrom HeapTupleSatisfiesVacuum's rules.\n\n>> I think to move forward, we need to figure out what the freezing\n>> behavior ought to be for temp tables. We could make it the same\n>> as it was before a7212be8b, which'd just require some more complexity\n>> in vacuum_set_xid_limits. However, that negates the idea that we'd\n>> like VACUUM's behavior on a temp table to be fully independent of\n>> whether concurrent transactions exist. I'd prefer to allow a7212be8b's\n>> behavior to stand, but then it seems we need to lobotomize the error\n>> check in heap_prepare_freeze_tuple to some extent.\n\n> I think that's an argument for what I suggested elsewhere, which is that\n> we should move the logic for a different horizon for temp tables out of\n> vacuum_set_xid_limits, and into procarray.\n\nBut procarray does not seem like a great place for\ntable-persistence-dependent decisions either?\n\n>> Independently of that, it seems like we need to fix things so that\n>> when pruneheap.c is called by vacuum, it makes EXACTLY the same\n>> dead-or-not-dead decisions that the main vacuum code makes. This\n>> business with applying some GlobalVisState rule or other instead\n>> seems just as unsafe as can be.\n\n> It's not great, I agree. Not sure there is a super nice answer\n> though. Note that, even before my changes, vacuumlazy can decide\n> differently than pruning whether a tuple is live. E.g. when an inserting\n> transaction aborts. That's pretty much unavoidable as long as we have\n> multiple HTSV calls for a tuple, since none of our locking can (nor\n> should) prevent concurrent transactions from aborting.\n\nIt's clear that if the environment changes between test A and test B,\nwe might get different results. What I'm not happy about is that the\nrules are different, so we might get different results even if the\nenvironment did not change.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Sep 2020 16:40:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... 
from before relfrozenxid ...\"" }, { "msg_contents": "Hi,\n\nOn 2020-09-21 16:40:40 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> >> I think to move forward, we need to figure out what the freezing\n> >> behavior ought to be for temp tables. We could make it the same\n> >> as it was before a7212be8b, which'd just require some more complexity\n> >> in vacuum_set_xid_limits. However, that negates the idea that we'd\n> >> like VACUUM's behavior on a temp table to be fully independent of\n> >> whether concurrent transactions exist. I'd prefer to allow a7212be8b's\n> >> behavior to stand, but then it seems we need to lobotomize the error\n> >> check in heap_prepare_freeze_tuple to some extent.\n> \n> > I think that's an argument for what I suggested elsewhere, which is that\n> > we should move the logic for a different horizon for temp tables out of\n> > vacuum_set_xid_limits, and into procarray.\n> \n> But procarray does not seem like a great place for\n> table-persistence-dependent decisions either?\n\nThat ship has sailed a long long time ago though. GetOldestXmin() has\nlooked at the passed in relation for a quite a while, and even before\nthat we had logic about 'allDbs' etc. It doesn't easily seem possible\nto avoid that, given how intimately that's coupled with how snapshots\nare built and used, database & vacuumFlags checks etc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Sep 2020 14:01:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-09-21 16:40:40 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> I think that's an argument for what I suggested elsewhere, which is that\n>>> we should move the logic for a different horizon for temp tables out of\n>>> vacuum_set_xid_limits, and into procarray.\n\n>> But procarray does not seem like a great place for\n>> table-persistence-dependent decisions either?\n\n> That ship has sailed a long long time ago though. GetOldestXmin() has\n> looked at the passed in relation for a quite a while, and even before\n> that we had logic about 'allDbs' etc. It doesn't easily seem possible\n> to avoid that, given how intimately that's coupled with how snapshots\n> are built and used, database & vacuumFlags checks etc.\n\nOK. Given that you've got strong feelings about this, do you want to\npropose a patch? I'm happy to fix it, since it's at least in part my\nbug, but I probably won't do it exactly like you would.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Sep 2020 17:03:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On 2020-09-21 17:03:53 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-09-21 16:40:40 -0400, Tom Lane wrote:\n> >> Andres Freund <andres@anarazel.de> writes:\n> >>> I think that's an argument for what I suggested elsewhere, which is that\n> >>> we should move the logic for a different horizon for temp tables out of\n> >>> vacuum_set_xid_limits, and into procarray.\n> \n> >> But procarray does not seem like a great place for\n> >> table-persistence-dependent decisions either?\n> \n> > That ship has sailed a long long time ago though. 
GetOldestXmin() has\n> > looked at the passed in relation for a quite a while, and even before\n> > that we had logic about 'allDbs' etc. It doesn't easily seem possible\n> > to avoid that, given how intimately that's coupled with how snapshots\n> > are built and used, database & vacuumFlags checks etc.\n> \n> OK. Given that you've got strong feelings about this, do you want to\n> propose a patch? I'm happy to fix it, since it's at least in part my\n> bug, but I probably won't do it exactly like you would.\n\nI can give it a try. I can see several paths of varying invasiveness,\nnot sure yet what the best approach is. Let me think about if for a bit.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Sep 2020 14:20:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi,\n\nOn 2020-09-21 14:20:03 -0700, Andres Freund wrote:\n> I can give it a try. I can see several paths of varying invasiveness,\n> not sure yet what the best approach is. Let me think about if for a bit.\n\nUgh, sorry for taking so long to get around to this.\n\nAttached is a *prototype* implemention of this concept, which clearly is\nlacking some comment work (and is intentionally lacking some\nre-indentation).\n\nI described my thoughts about how to limit the horizons for temp tables in\nhttps://www.postgresql.org/message-id/20201014203103.72oke6hqywcyhx7s%40alap3.anarazel.de\n\nBesides comments this probably mainly needs a bit more tests around temp\ntable vacuuming. Should have at least an isolation test that verifies\nthat temp table rows can be a) vacuumed b) pruned away in the presence\nof other sessions with xids.\n\nGreetings,\n\nAndres Freund", "msg_date": "Thu, 15 Oct 2020 01:37:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi,\n\nOn 2020-10-15 01:37:35 -0700, Andres Freund wrote:\n> Attached is a *prototype* implemention of this concept, which clearly is\n> lacking some comment work (and is intentionally lacking some\n> re-indentation).\n> \n> I described my thoughts about how to limit the horizons for temp tables in\n> https://www.postgresql.org/message-id/20201014203103.72oke6hqywcyhx7s%40alap3.anarazel.de\n\nAttached is an updated version of this patch. Quite a bit of polish,\nadded removal of the isTopLevel arguments added a7212be8b9e that are now\nunnecessary, and changed the initialization of the temp table horizons\nto be latestCompletedXid + 1 instead of just latestCompletedXid when no\nxid is assigned.\n\n\n> Besides comments this probably mainly needs a bit more tests around temp\n> table vacuuming. Should have at least an isolation test that verifies\n> that temp table rows can be a) vacuumed b) pruned away in the presence\n> of other sessions with xids.\n\nI added an isolationtester test for this. It verifies that dead rows in\ntemp tables get vacuumed and pruned despite concurrent sessions having\nolder snapshots. It does so by forcing an IOS and checking the number of\nheap fetches reported by EXPLAIN. I also added a companion test for\npermanent relations, ensuring that such rows do not get removed.\n\n\nAny comments? 
Otherwise I'll push that patch tomorrow.\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 27 Oct 2020 20:51:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi,\n\nOn 2020-10-27 20:51:10 -0700, Andres Freund wrote:\n> On 2020-10-15 01:37:35 -0700, Andres Freund wrote:\n> > Attached is a *prototype* implemention of this concept, which clearly is\n> > lacking some comment work (and is intentionally lacking some\n> > re-indentation).\n> > \n> > I described my thoughts about how to limit the horizons for temp tables in\n> > https://www.postgresql.org/message-id/20201014203103.72oke6hqywcyhx7s%40alap3.anarazel.de\n> \n> Attached is an updated version of this patch. Quite a bit of polish,\n> added removal of the isTopLevel arguments added a7212be8b9e that are now\n> unnecessary, and changed the initialization of the temp table horizons\n> to be latestCompletedXid + 1 instead of just latestCompletedXid when no\n> xid is assigned.\n> \n> \n> > Besides comments this probably mainly needs a bit more tests around temp\n> > table vacuuming. Should have at least an isolation test that verifies\n> > that temp table rows can be a) vacuumed b) pruned away in the presence\n> > of other sessions with xids.\n> \n> I added an isolationtester test for this. It verifies that dead rows in\n> temp tables get vacuumed and pruned despite concurrent sessions having\n> older snapshots. It does so by forcing an IOS and checking the number of\n> heap fetches reported by EXPLAIN. I also added a companion test for\n> permanent relations, ensuring that such rows do not get removed.\n> \n> \n> Any comments? Otherwise I'll push that patch tomorrow.\n\nJust pushed this. Let's see what the BF says...\n\nIt's kinda cool how much more aggressive hot pruning / killtuples now is\nfor temp tables.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 28 Oct 2020 18:13:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On 2020-10-28 18:13:44 -0700, Andres Freund wrote:\n> Just pushed this. Let's see what the BF says...\n\nIt says that apparently something is unstable about my new test. It\nfirst passed on a few animals, but then failed a lot in a row. Looking.\n\n\n", "msg_date": "Wed, 28 Oct 2020 19:09:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi,\n\nOn 2020-10-28 19:09:14 -0700, Andres Freund wrote:\n> On 2020-10-28 18:13:44 -0700, Andres Freund wrote:\n> > Just pushed this. Let's see what the BF says...\n> \n> It says that apparently something is unstable about my new test. It\n> first passed on a few animals, but then failed a lot in a row. Looking.\n\nThe differentiating factor is force_parallel_mode=regress.\n\nUgh, this is nasty: The problem is that we can end up computing the\nhorizons the first time before MyDatabaseId is even set. Which leads us\nto compute a too aggressive horizon for plain tables, because we skip\nover them, as MyDatabaseId still is InvalidOid:\n\n\t\t/*\n\t\t * Normally queries in other databases are ignored for anything but\n\t\t * the shared horizon. 
But in recovery we cannot compute an accurate\n\t\t * per-database horizon as all xids are managed via the\n\t\t * KnownAssignedXids machinery.\n\t\t */\n\t\tif (in_recovery ||\n\t\t\tproc->databaseId == MyDatabaseId ||\n\t\t\tproc->databaseId == 0)\t/* always include WalSender */\n\t\t\th->data_oldest_nonremovable =\n\t\t\t\tTransactionIdOlder(h->data_oldest_nonremovable, xmin);\n\nThat then subsequently leads us consider a row fully dead in\nheap_hot_search_buffers(). Triggering the killtuples logic. Causing the\ntest to fail.\n\nWith force_parallel_mode=regress we constantly start parallel workers,\nwhich makes it much more likely that this case is hit.\n\nIt's trivial to fix luckily...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 28 Oct 2020 21:00:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi,\n\nOn 2020-10-28 21:00:30 -0700, Andres Freund wrote:\n> On 2020-10-28 19:09:14 -0700, Andres Freund wrote:\n> > On 2020-10-28 18:13:44 -0700, Andres Freund wrote:\n> > > Just pushed this. Let's see what the BF says...\n> > \n> > It says that apparently something is unstable about my new test. It\n> > first passed on a few animals, but then failed a lot in a row. Looking.\n> \n> The differentiating factor is force_parallel_mode=regress.\n> \n> Ugh, this is nasty: The problem is that we can end up computing the\n> horizons the first time before MyDatabaseId is even set. Which leads us\n> to compute a too aggressive horizon for plain tables, because we skip\n> over them, as MyDatabaseId still is InvalidOid:\n> \n> \t\t/*\n> \t\t * Normally queries in other databases are ignored for anything but\n> \t\t * the shared horizon. But in recovery we cannot compute an accurate\n> \t\t * per-database horizon as all xids are managed via the\n> \t\t * KnownAssignedXids machinery.\n> \t\t */\n> \t\tif (in_recovery ||\n> \t\t\tproc->databaseId == MyDatabaseId ||\n> \t\t\tproc->databaseId == 0)\t/* always include WalSender */\n> \t\t\th->data_oldest_nonremovable =\n> \t\t\t\tTransactionIdOlder(h->data_oldest_nonremovable, xmin);\n> \n> That then subsequently leads us consider a row fully dead in\n> heap_hot_search_buffers(). Triggering the killtuples logic. Causing the\n> test to fail.\n> \n> With force_parallel_mode=regress we constantly start parallel workers,\n> which makes it much more likely that this case is hit.\n> \n> It's trivial to fix luckily...\n\nPushed that fix, hopefully that calms the BF.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 28 Oct 2020 21:57:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "Hi hackers!\n\nDoes anyone maintain opensource pg_surgery analogs for released versions of PG?\nIt seems to me I'll have to use something like this and I just though that I should consider pg_surgery in favour of our pg_dirty_hands.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sat, 16 Jan 2021 20:41:20 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... 
from before relfrozenxid ...\"" }, { "msg_contents": "On Sat, Jan 16, 2021 at 10:41 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> Does anyone maintain opensource pg_surgery analogs for released versions of PG?\n> It seems to me I'll have to use something like this and I just though that I should consider pg_surgery in favour of our pg_dirty_hands.\n\nI do not. I'm still of the opinion that we ought to back-patch\npg_surgery. This didn't attract a consensus before, and it's hard to\ndispute that it's a new feature in what would be a back branch. But\nit's unclear to me how users are otherwise supposed to recover from\nsome of the bugs that are or have been present in those back branches.\nI'm not sure that I see the logic in telling people we'll try to\nprevent them from getting hosed in the future but if they're already\nhosed they can wait for v14 to fix it. They can't wait that long, and\na dump-and-restore cycle is awfully painful.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Jan 2021 08:54:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Jan 18, 2021 at 08:54:10AM -0500, Robert Haas wrote:\n> On Sat, Jan 16, 2021 at 10:41 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > Does anyone maintain opensource pg_surgery analogs for released\n> > versions of PG? It seems to me I'll have to use something like this\n> > and I just though that I should consider pg_surgery in favour of our\n> > pg_dirty_hands.\n> \n> I do not. I'm still of the opinion that we ought to back-patch\n> pg_surgery. This didn't attract a consensus before, and it's hard to\n> dispute that it's a new feature in what would be a back branch. But\n> it's unclear to me how users are otherwise supposed to recover from\n> some of the bugs that are or have been present in those back branches.\n\nOne other possibility would be to push a version of pg_surgery that is\ncompatible with the back-branches somewhere external (e.g. either\ngit.postgresql.org and/or Github), so that it can be picked up by\ndistributions and/or individual users in need.\n\nThat is Assuming it does not need assorted server changes to go with; I\ndid not read the thread in detail but I was under the assumption it is a\nclient program?\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n", "msg_date": "Mon, 18 Jan 2021 15:25:38 +0100", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Jan 18, 2021 at 9:25 AM Michael Banck <michael.banck@credativ.de> wrote:\n> One other possibility would be to push a version of pg_surgery that is\n> compatible with the back-branches somewhere external (e.g. 
either\n> git.postgresql.org and/or Github), so that it can be picked up by\n> distributions and/or individual users in need.\n\nSure, but I don't see how that's better.\n\n> That is Assuming it does not need assorted server changes to go with; I\n> did not read the thread in detail but I was under the assumption it is a\n> client program?\n\nIt's a server extension. It does not require core changes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Jan 2021 09:54:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "\n\n> 18 янв. 2021 г., в 18:54, Robert Haas <robertmhaas@gmail.com> написал(а):\n> \n> On Sat, Jan 16, 2021 at 10:41 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>> Does anyone maintain opensource pg_surgery analogs for released versions of PG?\n>> It seems to me I'll have to use something like this and I just though that I should consider pg_surgery in favour of our pg_dirty_hands.\n> \n> I do not. I'm still of the opinion that we ought to back-patch\n> pg_surgery.\n+1.\nYesterday I spent a few hours packaging pg_dirty_hands and pg_surgery(BTW it works fine for 12).\nIt's a kind of a 911 tool, one doesn't think they will need it until they actually do. And clocks are ticking.\nOTOH, it opens new ways to shoot in the foot.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 18 Jan 2021 20:58:04 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" }, { "msg_contents": "On Mon, Sep 21, 2020 at 1:11 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'm not sure I really understand how that's happening, because surely\n> > HOT chains and non-HOT chains are pruned by the same code, but it\n> > doesn't sound good.\n>\n> Not necessarily, unfortunately:\n>\n> case HEAPTUPLE_DEAD:\n\n> So if e.g. a transaction aborts between the heap_page_prune and this\n> check the pruning behaviour depends on whether the tuple is part of a\n> HOT chain or not.\n\nI have a proposal that includes removing this \"tupgone = true\" special case:\n\nhttps://postgr.es/m/CAH2-Wzm7Y=_g3FjVHv7a85AfUbuSYdggDnEqN1hodVeOctL+Ow@mail.gmail.com\n\nOf course this won't change the fact that vacuumlazy.c can disagree\nwith pruning about what is dead -- that is a necessary consequence of\nhaving multiple HTSV calls for the same tuple in vacuumlazy.c (it can\nchange in the presence of concurrently aborted transactions). But\nremoving the \"tupgone = true\" special case does seem much more\nconsistent, and simpler overall. We have lots of code that is only\nneeded to make that special case work. For example, the whole idea of\na dedicated XLOG_HEAP2_CLEANUP_INFO record for recovery conflicts can\ngo -- we can get by with only XLOG_HEAP2_CLEAN records from pruning,\nIIRC.\n\nHave I missed something? The special case in question seems pretty\nawful to me, so I have to wonder why somebody else didn't remove it\nlong ago...\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sun, 14 Mar 2021 20:17:28 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: recovering from \"found xmin ... from before relfrozenxid ...\"" } ]
[ { "msg_contents": "On the Debian s390x buildd, the 13beta2 build is crashing:\n\n2020-07-15 01:19:59.149 UTC [859] LOG: server process (PID 1415) was terminated by signal 11: Segmentation fault\n2020-07-15 01:19:59.149 UTC [859] DETAIL: Failed process was running: create table gs_group_1 as\n\tselect g100, g10, sum(g::numeric), count(*), max(g::text)\n\tfrom gs_data_1 group by cube (g1000, g100,g10);\n\nFull build log at https://buildd.debian.org/status/fetch.php?pkg=postgresql-13&arch=s390x&ver=13%7Ebeta2-1&stamp=1594776007&raw=0\n\nThe failure is reproducible there: https://buildd.debian.org/status/logs.php?pkg=postgresql-13&ver=13%7Ebeta2-1&arch=s390x\n\nI tried a manual build on a s390x machine, but that one went through\nfine, so I can't provide a backtrace at the moment.\n\nChristoph\n\n\n", "msg_date": "Wed, 15 Jul 2020 11:15:09 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Re: To PostgreSQL Hackers\n> On the Debian s390x buildd, the 13beta2 build is crashing:\n> \n> 2020-07-15 01:19:59.149 UTC [859] LOG: server process (PID 1415) was terminated by signal 11: Segmentation fault\n> 2020-07-15 01:19:59.149 UTC [859] DETAIL: Failed process was running: create table gs_group_1 as\n> \tselect g100, g10, sum(g::numeric), count(*), max(g::text)\n> \tfrom gs_data_1 group by cube (g1000, g100,g10);\n\nI wired gdb into the build process and got this backtrace:\n\n2020-07-15 16:03:38.310 UTC [21073] LOG: server process (PID 21575) was terminated by signal 11: Segmentation fault\n2020-07-15 16:03:38.310 UTC [21073] DETAIL: Failed process was running: create table gs_group_1 as\n\tselect g100, g10, sum(g::numeric), count(*), max(g::text)\n\tfrom gs_data_1 group by cube (g1000, g100,g10);\n\n******** build/src/bin/pg_upgrade/tmp_check/data.old/core ********\n[New LWP 21575]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/s390x-linux-gnu/libthread_db.so.1\".\nCore was generated by `postgres: buildd regression [local] CREATE TABLE AS '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 datumCopy (typByVal=false, typLen=-1, value=0) at ./build/../src/backend/utils/adt/datum.c:142\n142\t\t\tif (VARATT_IS_EXTERNAL_EXPANDED(vl))\n#0 datumCopy (typByVal=false, typLen=-1, value=0) at ./build/../src/backend/utils/adt/datum.c:142\n vl = 0x0\n res = <optimized out>\n res = <optimized out>\n vl = <optimized out>\n eoh = <optimized out>\n resultsize = <optimized out>\n resultptr = <optimized out>\n realSize = <optimized out>\n resultptr = <optimized out>\n realSize = <optimized out>\n resultptr = <optimized out>\n#1 datumCopy (value=0, typByVal=false, typLen=-1) at ./build/../src/backend/utils/adt/datum.c:131\n res = <optimized out>\n vl = <optimized out>\n eoh = <optimized out>\n resultsize = <optimized out>\n resultptr = <optimized out>\n realSize = <optimized out>\n resultptr = <optimized out>\n#2 0x000002aa04423af8 in finalize_aggregate (aggstate=aggstate@entry=0x2aa05775920, peragg=peragg@entry=0x2aa056e02f0, resultVal=0x2aa056e0208, resultIsNull=0x2aa056e022a, pergroupstate=<optimized out>, pergroupstate=<optimized out>) at ./build/../src/backend/executor/nodeAgg.c:1128\n fcinfodata = {fcinfo = {flinfo = 0x2aa056e0250, context = 0x2aa05775920, resultinfo = 0x0, fncollation = 0, isnull = false, nargs = 1, args = 0x3fff6a7b578}, fcinfo_data = \"\\000\\000\\002\\252\\005n\\002P\\000\\000\\002\\252\\005wY \", '\\000' <repeats 13 times>, 
\"\\247\\000\\001\\000\\000\\002\\252\\005t\\265\\250\\000\\000\\003\\377\\211\\341\\207F\\000\\000\\003\\377\\000\\000\\002\\001\\000\\000\\000\\000\\000\\000\\003\\376\\000\\000\\000\\000\\000\\000\\017\\370\\000\\000\\000\\000\\000\\000\\001\\377\\000\\000\\000\\000\\000\\000\\000\\260\\000\\000\\000k\\000\\000\\000k\\000\\000\\000\\000\\000\\000 \\000\\000\\000\\003\\377\\213\\016J \\000\\000\\000p\\000\\000\\000k\\000\\000\\000\\000\\000\\000\\000\\200\\000\\000\\000\\000\\000\\000\\000\\020\", '\\000' <repeats 11 times>, \"w\\000\\000\\000|\\000\\000\\000\\000\\000\\000\\000\\002\\000\\000\\002\\252\\006&9\\250\\000\\000\\002\\252\\005wZh\\000\\000\\002\\252\\005wZH\\000\\000\\003\\377\\213\\n_\\210\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\"...}\n fcinfo = 0x3fff6a7b558\n anynull = <optimized out>\n oldContext = <optimized out>\n i = <optimized out>\n lc = <optimized out>\n pertrans = <error reading variable pertrans (value has been optimized out)>\n#3 0x000002aa04423ff4 in finalize_aggregates (aggstate=aggstate@entry=0x2aa05775920, peraggs=peraggs@entry=0x2aa056e0240, pergroup=0x2aa056c8ed8) at ./build/../src/backend/executor/nodeAgg.c:1345\n peragg = 0x2aa056e02f0\n transno = <optimized out>\n pergroupstate = 0x2aa056c8ef8\n econtext = <optimized out>\n aggvalues = 0x2aa056e01f8\n aggnulls = 0x2aa056e0228\n aggno = 2\n transno = <optimized out>\n#4 0x000002aa04424f5c in agg_retrieve_direct (aggstate=0x2aa05775920) at ./build/../src/backend/executor/nodeAgg.c:2480\n econtext = 0x2aa05776080\n firstSlot = 0x2aa062639a8\n numGroupingSets = <optimized out>\n node = <optimized out>\n tmpcontext = 0x2aa05775d60\n peragg = 0x2aa056e0240\n outerslot = <optimized out>\n nextSetSize = <optimized out>\n pergroups = 0x2aa056c8ea8\n result = <optimized out>\n hasGroupingSets = <optimized out>\n currentSet = <optimized out>\n numReset = <optimized out>\n i = <optimized out>\n node = <optimized out>\n econtext = <optimized out>\n tmpcontext = <optimized out>\n peragg = <optimized out>\n pergroups = <optimized out>\n outerslot = <optimized out>\n firstSlot = <optimized out>\n result = <optimized out>\n hasGroupingSets = <optimized out>\n numGroupingSets = <optimized out>\n currentSet = <optimized out>\n nextSetSize = <optimized out>\n numReset = <optimized out>\n i = <optimized out>\n#5 ExecAgg (pstate=0x2aa05775920) at ./build/../src/backend/executor/nodeAgg.c:2140\n node = 0x2aa05775920\n result = 0x0\n#6 0x000002aa0441001a in ExecProcNode (node=0x2aa05775920) at ./build/../src/include/executor/executor.h:245\nNo locals.\n#7 ExecutePlan (execute_once=<optimized out>, dest=0x2aa0565fa58, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x2aa05775920, estate=0x2aa057756f8) at ./build/../src/backend/executor/execMain.c:1646\n slot = <optimized out>\n current_tuple_count = 0\n slot = <optimized out>\n current_tuple_count = <optimized out>\n#8 standard_ExecutorRun (queryDesc=0x2aa062df508, direction=<optimized out>, count=0, execute_once=<optimized out>) at ./build/../src/backend/executor/execMain.c:364\n estate = 0x2aa057756f8\n operation = CMD_SELECT\n dest = 0x2aa0565fa58\n sendTuples = <optimized out>\n oldcontext = 0x2aa0565f830\n __func__ = \"standard_ExecutorRun\"\n#9 0x000002aa043933fa in ExecCreateTableAs (pstate=pstate@entry=0x2aa0565f948, stmt=stmt@entry=0x2aa055bea28, params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, qc=0x3fff6a7ca30) at ./build/../src/backend/commands/createas.c:354\n 
query = <optimized out>\n into = <optimized out>\n is_matview = <optimized out>\n dest = 0x2aa0565fa58\n save_userid = 0\n save_sec_context = 0\n save_nestlevel = 0\n address = {classId = 70885274, objectId = 1023, objectSubId = -156778696}\n rewritten = <optimized out>\n plan = 0x2aa062df3f8\n queryDesc = 0x2aa062df508\n __func__ = \"ExecCreateTableAs\"\n#10 0x000002aa0459f378 in ProcessUtilitySlow (pstate=pstate@entry=0x2aa0565f948, pstmt=pstmt@entry=0x2aa055bead8, queryString=queryString@entry=0x2aa055bcfb8 \"create table gs_group_1 as\\nselect g100, g10, sum(g::numeric), count(*), max(g::text)\\nfrom gs_data_1 group by cube (g1000, g100,g10);\", context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=params@entry=0x0, queryEnv=0x0, qc=0x3fff6a7ca30, dest=<error reading variable: value has been optimized out>) at ./build/../src/backend/tcop/utility.c:1600\n _save_exception_stack = 0x3fff6a7c820\n _save_context_stack = 0x0\n _local_sigjmp_buf = {{__jmpbuf = {{__gregs = {0, 2929258264904, 2929257605848, 0, 2929257605672, 2929257598904, 4396084256648, 1, 18739560736704083, 18738779339296963}, __fpregs = {2929259797352, 2929259797352, 2929257598904, 2929258351376, 0, 4397889735684, 4397889735216, 0}}}, __mask_was_saved = 0, __saved_mask = {__val = {4397889733480, 10, 2929257605400, 0, 0, 2929257606120, 4096, 1, 0, 4396084256648, 2929242951088, 2929239505962, 4397889733120, 18736975094525723, 2929257598904, 18446744069489843728}}}}\n _do_rethrow = false\n parsetree = 0x2aa055bea28\n isTopLevel = true\n isCompleteQuery = true\n needCleanup = false\n commandCollected = false\n address = {classId = 1023, objectId = 4138189632, objectSubId = 682}\n secondaryObject = {classId = 0, objectId = 0, objectSubId = 0}\n __func__ = \"ProcessUtilitySlow\"\n#11 0x000002aa0459dd36 in standard_ProcessUtility (pstmt=0x2aa055bead8, queryString=0x2aa055bcfb8 \"create table gs_group_1 as\\nselect g100, g10, sum(g::numeric), count(*), max(g::text)\\nfrom gs_data_1 group by cube (g1000, g100,g10);\", context=<optimized out>, params=0x0, queryEnv=<optimized out>, dest=0x2aa057d5b68, qc=0x3fff6a7ca30) at ./build/../src/backend/tcop/utility.c:1069\n parsetree = 0x2aa055bea28\n isTopLevel = <optimized out>\n isAtomicContext = <optimized out>\n pstate = 0x2aa0565f948\n readonly_flags = <optimized out>\n __func__ = \"standard_ProcessUtility\"\n#12 0x000002aa0459e874 in ProcessUtility (pstmt=pstmt@entry=0x2aa055bead8, queryString=<optimized out>, context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=<optimized out>, queryEnv=queryEnv@entry=0x0, dest=0x2aa057d5b68, qc=0x3fff6a7ca30) at ./build/../src/backend/tcop/utility.c:524\nNo locals.\n#13 0x000002aa0459b210 in PortalRunUtility (portal=0x2aa05620008, pstmt=0x2aa055bead8, isTopLevel=<optimized out>, setHoldSnapshot=<optimized out>, dest=<optimized out>, qc=0x3fff6a7ca30) at ./build/../src/backend/tcop/pquery.c:1157\n utilityStmt = <optimized out>\n snapshot = 0x2aa05674c58\n#14 0x000002aa0459bca0 in PortalRunMulti (portal=portal@entry=0x2aa05620008, isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false, dest=<optimized out>, dest@entry=0x2aa057d5b68, altdest=<optimized out>, altdest@entry=0x2aa057d5b68, qc=0x3fff6a7ca30) at ./build/../src/backend/tcop/pquery.c:1303\n pstmt = <optimized out>\n stmtlist_item__state = {l = 0x2aa057d5b18, i = 0}\n active_snapshot_set = false\n stmtlist_item = 0x2aa057d5b30\n#15 0x000002aa0459c9f4 in PortalRun (portal=portal@entry=0x2aa05620008, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, 
run_once=run_once@entry=true, dest=dest@entry=0x2aa057d5b68, altdest=0x2aa057d5b68, qc=0x3fff6a7ca30) at ./build/../src/backend/tcop/pquery.c:779\n _save_exception_stack = 0x3fff6a7ccc0\n _save_context_stack = 0x0\n _local_sigjmp_buf = {{__jmpbuf = {{__gregs = {2929258004488, 2, 2929258004488, 2929167695872, 2929257605768, 1, 4396084256648, 4397889736544, 18739560736712267, 18738779339319315}, __fpregs = {89, 0, 2929257598904, 2929258351376, 4397889735216, 4397889735684, 4397889735214, 0}}}, __mask_was_saved = 0, __saved_mask = {__val = {0, 8192, 2, 1, 1, 2929243650206, 2929258004488, 2929257605768, 2, 4396084256648, 4397889736544, 2929237235946, 4397889734816, 4397889736544, 2929239411110, 4397889734656}}}}\n _do_rethrow = <optimized out>\n result = <optimized out>\n nprocessed = <optimized out>\n saveTopTransactionResourceOwner = 0x2aa055e9bf0\n saveTopTransactionContext = 0x2aa05674b10\n saveActivePortal = 0x0\n saveResourceOwner = 0x2aa055e9bf0\n savePortalContext = 0x0\n saveMemoryContext = 0x2aa05674b10\n __func__ = \"PortalRun\"\n#16 0x000002aa0459830a in exec_simple_query (query_string=<optimized out>) at ./build/../src/backend/tcop/postgres.c:1239\n snapshot_set = <optimized out>\n per_parsetree_context = <optimized out>\n plantree_list = <optimized out>\n parsetree = 0x2aa055bea58\n commandTag = <optimized out>\n qc = {commandTag = CMDTAG_UNKNOWN, nprocessed = 0}\n querytree_list = <optimized out>\n portal = 0x2aa05620008\n receiver = 0x2aa057d5b68\n format = 0\n parsetree_item__state = {l = 0x2aa055bea88, i = 0}\n dest = DestRemote\n oldcontext = <optimized out>\n parsetree_list = 0x2aa055bea88\n parsetree_item = 0x2aa055beaa0\n save_log_statement_stats = false\n was_logged = false\n use_implicit_block = false\n msec_str = \"\\000\\000\\002\\252\\000\\000\\000Q\\000\\000\\002\\252\\005[ϸ\\000\\000\\002\\252\\000\\000\\000\\000\\000\\000\\003\\377\\213\\n_\\210\"\n __func__ = \"exec_simple_query\"\n#17 0x000002aa0459a05e in PostgresMain (argc=<optimized out>, argv=argv@entry=0x2aa055e8190, dbname=<optimized out>, username=<optimized out>) at ./build/../src/backend/tcop/postgres.c:4315\n query_string = 0x2aa055bcfb8 \"create table gs_group_1 as\\nselect g100, g10, sum(g::numeric), count(*), max(g::text)\\nfrom gs_data_1 group by cube (g1000, g100,g10);\"\n firstchar = 81\n input_message = {data = 0x2aa055bcfb8 \"create table gs_group_1 as\\nselect g100, g10, sum(g::numeric), count(*), max(g::text)\\nfrom gs_data_1 group by cube (g1000, g100,g10);\", len = 133, maxlen = 1024, cursor = 133}\n local_sigjmp_buf = {{__jmpbuf = {{__gregs = {8388608, 64, 2929244860962, 2929257583240, 2929244863032, 2929244863024, 4396084256648, 4397889736544, 18739560736695925, 18738779339318659}, __fpregs = {4397889736544, 4397889736532, 6, 4397419976768, 2929690573168, 2930182562592, 4397889736696, 0}}}, __mask_was_saved = 1, __saved_mask = {__val = {0, 2929239505782, 2929257748144, 4397889736288, 4396084256648, 4397889736544, 2929242274994, 4397889735960, 1024, 4396084256648, 4397889736544, 2929242135450, 4, 0, 4397889736320, 4397889736160}}}}\n send_ready_for_query = false\n disable_idle_in_transaction_timeout = false\n __func__ = \"PostgresMain\"\n#18 0x000002aa04512066 in BackendRun (port=0x2aa055e16b0, port=0x2aa055e16b0) at ./build/../src/backend/postmaster/postmaster.c:4523\n av = 0x2aa055e8190\n maxac = <optimized out>\n ac = 1\n i = 1\n av = <optimized out>\n maxac = <optimized out>\n ac = <optimized out>\n i = <optimized out>\n __func__ = \"BackendRun\"\n __errno_location = <optimized out>\n 
__errno_location = <optimized out>\n __errno_location = <optimized out>\n#19 BackendStartup (port=0x2aa055e16b0) at ./build/../src/backend/postmaster/postmaster.c:4215\n bn = <optimized out>\n pid = <optimized out>\n bn = <optimized out>\n pid = <optimized out>\n __func__ = \"BackendStartup\"\n __errno_location = <optimized out>\n __errno_location = <optimized out>\n save_errno = <optimized out>\n __errno_location = <optimized out>\n __errno_location = <optimized out>\n#20 ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1727\n port = 0x2aa055e16b0\n i = <optimized out>\n rmask = {fds_bits = {32, 0 <repeats 15 times>}}\n selres = <optimized out>\n now = <optimized out>\n readmask = {fds_bits = {32, 0 <repeats 15 times>}}\n nSockets = 0\n last_lockfile_recheck_time = 1594829011\n last_touch_time = 1594829011\n __func__ = \"ServerLoop\"\n#21 0x000002aa04513128 in PostmasterMain (argc=<optimized out>, argv=<optimized out>) at ./build/../src/backend/postmaster/postmaster.c:1400\n opt = <optimized out>\n status = <optimized out>\n userDoption = <optimized out>\n listen_addr_saved = false\n i = <optimized out>\n output_config_variable = <optimized out>\n __func__ = \"PostmasterMain\"\n#22 0x000002aa04243fb4 in main (argc=<optimized out>, argv=0x2aa055b71b0) at ./build/../src/backend/main/main.c:210\n do_check_root = <optimized out>\n\nChristoph\n\n\n", "msg_date": "Wed, 15 Jul 2020 22:29:00 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Christoph Berg <myon@debian.org> writes:\n>> On the Debian s390x buildd, the 13beta2 build is crashing:\n\n> I wired gdb into the build process and got this backtrace:\n\n> #0 datumCopy (typByVal=false, typLen=-1, value=0) at ./build/../src/backend/utils/adt/datum.c:142\n> vl = 0x0\n> res = <optimized out>\n> res = <optimized out>\n> vl = <optimized out>\n> eoh = <optimized out>\n> resultsize = <optimized out>\n> resultptr = <optimized out>\n> realSize = <optimized out>\n> resultptr = <optimized out>\n> realSize = <optimized out>\n> resultptr = <optimized out>\n> #1 datumCopy (value=0, typByVal=false, typLen=-1) at ./build/../src/backend/utils/adt/datum.c:131\n> res = <optimized out>\n> vl = <optimized out>\n> eoh = <optimized out>\n> resultsize = <optimized out>\n> resultptr = <optimized out>\n> realSize = <optimized out>\n> resultptr = <optimized out>\n> #2 0x000002aa04423af8 in finalize_aggregate (aggstate=aggstate@entry=0x2aa05775920, peragg=peragg@entry=0x2aa056e02f0, resultVal=0x2aa056e0208, resultIsNull=0x2aa056e022a, pergroupstate=<optimized out>, pergroupstate=<optimized out>) at ./build/../src/backend/executor/nodeAgg.c:1128\n\nHmm. If gdb isn't lying to us, that has to be coming from here:\n\n /*\n * If result is pass-by-ref, make sure it is in the right context.\n */\n if (!peragg->resulttypeByVal && !*resultIsNull &&\n !MemoryContextContains(CurrentMemoryContext,\n DatumGetPointer(*resultVal)))\n *resultVal = datumCopy(*resultVal,\n peragg->resulttypeByVal,\n peragg->resulttypeLen);\n\nThe line numbers in HEAD are a bit different, but that's the only\ncall of datumCopy() in finalize_aggregate().\n\nIt's hardly surprising that datumCopy would segfault when given\na null \"value\" and told it is pass-by-reference. 
However, to get to\nthe datumCopy call, we must have passed the MemoryContextContains\ncheck on that very same pointer value, and that would surely have\nsegfaulted as well, one would think.\n\nGiven the apparently-can't-happen situation at the call site,\nand the fact that we're not seeing similar failures reported\nelsewhere (and note that every line shown above is at least\nfive years old), I'm kind of forced to the conclusion that this\nis a compiler bug. Does adjusting the -O level make it go away?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Jul 2020 17:45:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> It's hardly surprising that datumCopy would segfault when given a\n Tom> null \"value\" and told it is pass-by-reference. However, to get to\n Tom> the datumCopy call, we must have passed the MemoryContextContains\n Tom> check on that very same pointer value, and that would surely have\n Tom> segfaulted as well, one would think.\n\nNope, because MemoryContextContains just returns \"false\" if passed a\nNULL pointer.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Thu, 16 Jul 2020 07:42:14 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Re: Tom Lane\n> Given the apparently-can't-happen situation at the call site,\n> and the fact that we're not seeing similar failures reported\n> elsewhere (and note that every line shown above is at least\n> five years old), I'm kind of forced to the conclusion that this\n> is a compiler bug. Does adjusting the -O level make it go away?\n\nThe problem is that a manual build doesn't crash, and I'm somewhat\nreluctant to do a full new package upload (which will keep buildds for\nall architectures busy) just for a -O0 test unless we are sure it\nhelps.\n\nI'd rather play more with the manual build artifacts (which should be\nusing the same compiler and everything), if anyone has ideas what I\nshould be trying.\n\nChristoph\n\n\n", "msg_date": "Thu, 16 Jul 2020 11:33:58 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> It's hardly surprising that datumCopy would segfault when given a\n> Tom> null \"value\" and told it is pass-by-reference. However, to get to\n> Tom> the datumCopy call, we must have passed the MemoryContextContains\n> Tom> check on that very same pointer value, and that would surely have\n> Tom> segfaulted as well, one would think.\n\n> Nope, because MemoryContextContains just returns \"false\" if passed a\n> NULL pointer.\n\nAh, right. So you could imagine getting here if the finalfn had returned\nPointerGetDatum(NULL) with isnull = false. We have some aggregate\ntransfns that are capable of doing that for internal-type transvalues,\nI think, but the finalfn never should do it.\n\nIn any case we still have the fact that this isn't being seen in our\nbuildfarm; and that's not for lack of s390 machines. So I still think\nthe most likely explanation is a compiler bug in bleeding-edge gcc.\n\nProbably what Christoph should be trying to figure out is why he can't\nreproduce it manually. 
There must be some discrepancy between his\nenvironment and the build machines; but what?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Jul 2020 11:08:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Re: Tom Lane\n> > Tom> It's hardly surprising that datumCopy would segfault when given a\n> > Tom> null \"value\" and told it is pass-by-reference. However, to get to\n> > Tom> the datumCopy call, we must have passed the MemoryContextContains\n> > Tom> check on that very same pointer value, and that would surely have\n> > Tom> segfaulted as well, one would think.\n> \n> > Nope, because MemoryContextContains just returns \"false\" if passed a\n> > NULL pointer.\n> \n> Ah, right. So you could imagine getting here if the finalfn had returned\n> PointerGetDatum(NULL) with isnull = false. We have some aggregate\n> transfns that are capable of doing that for internal-type transvalues,\n> I think, but the finalfn never should do it.\n\nSo I had another stab at this. As expected, the 13.0 upload to\nDebian/unstable crashed again on the buildd, while a manual\neverything-should-be-the-same build succeeded. I don't know why I\ndidn't try this before, but this time I took this manual build and\nstarted a PG instance from it. Pasting the gs_group_1 queries made it\nsegfault instantly.\n\nSo here we are:\n\n#0 datumCopy (value=0, typLen=-1, typByVal=false) at ./build/../src/backend/utils/adt/datum.c:142\n#1 0x000002aa3bf6322e in datumCopy (value=<optimized out>, typByVal=<optimized out>, typLen=<optimized out>)\n at ./build/../src/backend/utils/adt/datum.c:178\n#2 0x000002aa3bda6dd6 in finalize_aggregate (aggstate=aggstate@entry=0x2aa3defbfd0, peragg=peragg@entry=0x2aa3e0671f0,\n pergroupstate=pergroupstate@entry=0x2aa3e026b78, resultVal=resultVal@entry=0x2aa3e067108, resultIsNull=0x2aa3e06712a)\n at ./build/../src/backend/executor/nodeAgg.c:1153\n\n(gdb) p *resultVal\n$3 = 0\n(gdb) p *resultIsNull\n$6 = false\n\n(gdb) p *peragg\n$7 = {aggref = 0x2aa3deef218, transno = 2, finalfn_oid = 0, finalfn = {fn_addr = 0x0, fn_oid = 0, fn_nargs = 0, fn_strict = false,\n fn_retset = false, fn_stats = 0 '\\000', fn_extra = 0x0, fn_mcxt = 0x0, fn_expr = 0x0}, numFinalArgs = 1, aggdirectargs = 0x0,\n resulttypeLen = -1, resulttypeByVal = false, shareable = true}\n\nSince finalfn_oid is 0, resultVal/resultIsNull were set by the `else`\nbranch of the if (OidIsValid) in finalize_aggregate():\n\n else\n {\n /* Don't need MakeExpandedObjectReadOnly; datumCopy will copy it */\n *resultVal = pergroupstate->transValue;\n *resultIsNull = pergroupstate->transValueIsNull;\n }\n\n(gdb) p *pergroupstate\n$12 = {transValue = 0, transValueIsNull = false, noTransValue = false}\n\nThat comes from finalize_aggregates:\n\n#3 0x000002aa3bda7e10 in finalize_aggregates (aggstate=aggstate@entry=0x2aa3defbfd0, peraggs=peraggs@entry=0x2aa3e067140,\n pergroup=0x2aa3e026b58) at ./build/../src/backend/executor/nodeAgg.c:1369\n\n /*\n * Run the final functions.\n */\n for (aggno = 0; aggno < aggstate->numaggs; aggno++)\n {\n AggStatePerAgg peragg = &peraggs[aggno];\n int transno = peragg->transno;\n AggStatePerGroup pergroupstate;\n\n pergroupstate = &pergroup[transno];\n\n if (DO_AGGSPLIT_SKIPFINAL(aggstate->aggsplit))\n finalize_partialaggregate(aggstate, peragg, pergroupstate,\n &aggvalues[aggno], &aggnulls[aggno]);\n else\n finalize_aggregate(aggstate, peragg, pergroupstate,\n &aggvalues[aggno], &aggnulls[aggno]);\n }\n\n... 
but at that point I'm lost. Maybe \"aggno\" and \"transno\" got mixed\nup here?\n\n(I'll leave the gdb session open for further suggestions.)\n\nChristoph\n\n\n", "msg_date": "Fri, 25 Sep 2020 16:30:32 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "I poked around with the SET in the offending tests, and the crash is\nonly present if `set jit_above_cost = 0;` is present. Removing that\nmakes it pass. Removing work_mem or enable_hashagg does not make a\ndifference. llvm version is 10.0.1.\n\n\nTest file:\n\n--\n-- Compare results between plans using sorting and plans using hash\n-- aggregation. Force spilling in both cases by setting work_mem low\n-- and altering the statistics.\n--\n\ncreate table gs_data_1 as\nselect g%1000 as g1000, g%100 as g100, g%10 as g10, g\n from generate_series(0,1999) g;\n\nanalyze gs_data_1;\nalter table gs_data_1 set (autovacuum_enabled = 'false');\nupdate pg_class set reltuples = 10 where relname='gs_data_1';\n\nSET work_mem='64kB';\n\n-- Produce results with sorting.\n\nset enable_hashagg = false;\nset jit_above_cost = 0; -- remove this to remove crash\n\nexplain (costs off)\nselect g100, g10, sum(g::numeric), count(*), max(g::text)\nfrom gs_data_1 group by cube (g1000, g100,g10);\n\ncreate table gs_group_1 as\nselect g100, g10, sum(g::numeric), count(*), max(g::text)\nfrom gs_data_1 group by cube (g1000, g100,g10);\n\n\n\nChristoph\n\n\n", "msg_date": "Fri, 25 Sep 2020 16:37:35 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Re: To Tom Lane\n> I poked around with the SET in the offending tests, and the crash is\n> only present if `set jit_above_cost = 0;` is present. Removing that\n> makes it pass. Removing work_mem or enable_hashagg does not make a\n> difference. llvm version is 10.0.1.\n\nI put jit_above_cost=0 into postgresql.conf and ran \"make installcheck\"\nagain. 
There are more crashes:\n\n From src/test/regress/sql/interval.sql:\n\n2020-09-25 17:00:25.130 CEST [8135] LOG: Serverprozess (PID 8369) wurde von Signal 11 beendet: Speicherzugriffsfehler\n2020-09-25 17:00:25.130 CEST [8135] DETAIL: Der fehlgeschlagene Prozess f�hrte aus: select avg(f1) from interval_tbl;\n\n From src/test/regress/sql/tid.sql:\n\n2020-09-25 17:01:20.593 CEST [8135] LOG: Serverprozess (PID 9015) wurde von Signal 11 beendet: Speicherzugriffsfehler\n2020-09-25 17:01:20.593 CEST [8135] DETAIL: Der fehlgeschlagene Prozess f�hrte aus: SELECT max(ctid) FROM tid_tab;\n\n From src/test/regress/sql/collate*.sql:\n\n2020-09-25 17:07:17.852 CEST [8135] LOG: Serverprozess (PID 12232) wurde von Signal 11 beendet: Speicherzugriffsfehler\n2020-09-25 17:07:17.852 CEST [8135] DETAIL: Der fehlgeschlagene Prozess f�hrte aus: SELECT min(b), max(b) FROM collate_test1;\n\n From src/test/regress/sql/plpgsql.sql:\n\n2020-09-25 17:11:56.495 CEST [8135] LOG: Serverprozess (PID 15562) wurde von Signal 11 beendet: Speicherzugriffsfehler\n2020-09-25 17:11:56.495 CEST [8135] DETAIL: Der fehlgeschlagene Prozess f�hrte aus: select * from returnqueryf();\n\nContrary to the others this one is not related to aggregates:\n\n -- test RETURN QUERY with dropped columns\n \n create table tabwithcols(a int, b int, c int, d int);\n insert into tabwithcols values(10,20,30,40),(50,60,70,80);\n \n create or replace function returnqueryf()\n returns setof tabwithcols as $$\n begin\n return query select * from tabwithcols;\n return query execute 'select * from tabwithcols';\n end;\n $$ language plpgsql;\n \n select * from returnqueryf();\n \n alter table tabwithcols drop column b;\n \n select * from returnqueryf();\n \n alter table tabwithcols drop column d;\n \n select * from returnqueryf();\n \n alter table tabwithcols add column d int;\n \n select * from returnqueryf();\n \n drop function returnqueryf();\n drop table tabwithcols;\n\nsrc/test/regress/sql/rangefuncs.sql:\n\n2020-09-25 17:16:04.209 CEST [8135] LOG: Serverprozess (PID 17372) wurde von Signal 11 beendet: Speicherzugriffsfehler\n2020-09-25 17:16:04.209 CEST [8135] DETAIL: Der fehlgeschlagene Prozess f�hrte aus: select * from usersview;\n\nsrc/test/regress/sql/alter_table.sql:\n\n2020-09-25 17:21:36.187 CEST [8135] LOG: Serverprozess (PID 19217) wurde von Signal 11 beendet: Speicherzugriffsfehler\n2020-09-25 17:21:36.187 CEST [8135] DETAIL: Der fehlgeschlagene Prozess f�hrte aus: update atacc3 set test2 = 4 where test2 is null;\n\nsrc/test/regress/sql/polymorphism.sql:\n\n2020-09-25 17:23:55.509 CEST [8135] LOG: Serverprozess (PID 21010) wurde von Signal 11 beendet: Speicherzugriffsfehler\n2020-09-25 17:23:55.509 CEST [8135] DETAIL: Der fehlgeschlagene Prozess f�hrte aus: select myleast(1.1, 0.22, 0.55);\n\n2020-09-25 17:28:26.222 CEST [8135] LOG: Serverprozess (PID 22325) wurde von Signal 11 beendet: Speicherzugriffsfehler\n2020-09-25 17:28:26.222 CEST [8135] DETAIL: Der fehlgeschlagene Prozess f�hrte aus: select f.longname from fullname f;\n\n(stopping here)\n\n\nThere are also a lot of these log lines (without prefix):\n\nORC error: No callback manager available for s390x-ibm-linux-gnu\n\nIs that worrying? 
I'm not sure but I think I've seen these on other\narchitectures as well.\n\n\nI guess that suggests two things:\n* jit is not ready for prime time on s390x and I should disable it\n* jit is not exercised enough by \"make installcheck\"\n\nChristoph\n\n\n", "msg_date": "Fri, 25 Sep 2020 17:29:07 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Hi,\n\nOn 2020-09-25 17:29:07 +0200, Christoph Berg wrote:\n> I guess that suggests two things:\n> * jit is not ready for prime time on s390x and I should disable it\n\nI don't know how good LLVMs support for s390x JITing is, and given that\nit's unrealistic for people to get access to s390x...\n\n\n> * jit is not exercised enough by \"make installcheck\"\n\nSo far we've exercised more widely it by setting up machines that use it\nfor all queries (by setting the config option). I'm doubtful it's worth\ndoing differently.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 25 Sep 2020 09:42:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Am 25. September 2020 18:42:04 MESZ schrieb Andres Freund <andres@anarazel.de>\n>> * jit is not exercised enough by \"make installcheck\"\n>\n>So far we've exercised more widely it by setting up machines that use\n>it\n>for all queries (by setting the config option). I'm doubtful it's worth\n>doing differently.\n\nOk, but given that Debian is currently targeting 22 architectures, I doubt the PostgreSQL buildfarm covers all of them with the extra JIT option, so I should probably make sure to do that here when running tests.\n\n\n", "msg_date": "Fri, 25 Sep 2020 19:05:52 +0200", "msg_from": "Christoph Berg <cb@df7cb.de>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Em sex., 25 de set. de 2020 às 11:30, Christoph Berg <myon@debian.org>\nescreveu:\n\n> Re: Tom Lane\n> > > Tom> It's hardly surprising that datumCopy would segfault when given a\n> > > Tom> null \"value\" and told it is pass-by-reference. However, to get to\n> > > Tom> the datumCopy call, we must have passed the MemoryContextContains\n> > > Tom> check on that very same pointer value, and that would surely have\n> > > Tom> segfaulted as well, one would think.\n> >\n> > > Nope, because MemoryContextContains just returns \"false\" if passed a\n> > > NULL pointer.\n> >\n> > Ah, right. So you could imagine getting here if the finalfn had returned\n> > PointerGetDatum(NULL) with isnull = false. We have some aggregate\n> > transfns that are capable of doing that for internal-type transvalues,\n> > I think, but the finalfn never should do it.\n>\n> So I had another stab at this. As expected, the 13.0 upload to\n> Debian/unstable crashed again on the buildd, while a manual\n> everything-should-be-the-same build succeeded. I don't know why I\n> didn't try this before, but this time I took this manual build and\n> started a PG instance from it. 
Pasting the gs_group_1 queries made it\n> segfault instantly.\n>\n> So here we are:\n>\n> #0 datumCopy (value=0, typLen=-1, typByVal=false) at\n> ./build/../src/backend/utils/adt/datum.c:142\n> #1 0x000002aa3bf6322e in datumCopy (value=<optimized out>,\n> typByVal=<optimized out>, typLen=<optimized out>)\n> at ./build/../src/backend/utils/adt/datum.c:178\n> #2 0x000002aa3bda6dd6 in finalize_aggregate (aggstate=aggstate@entry=0x2aa3defbfd0,\n> peragg=peragg@entry=0x2aa3e0671f0,\n> pergroupstate=pergroupstate@entry=0x2aa3e026b78,\n> resultVal=resultVal@entry=0x2aa3e067108, resultIsNull=0x2aa3e06712a)\n> at ./build/../src/backend/executor/nodeAgg.c:1153\n>\n> (gdb) p *resultVal\n> $3 = 0\n> (gdb) p *resultIsNull\n> $6 = false\n>\n> (gdb) p *peragg\n> $7 = {aggref = 0x2aa3deef218, transno = 2, finalfn_oid = 0, finalfn =\n> {fn_addr = 0x0, fn_oid = 0, fn_nargs = 0, fn_strict = false,\n> fn_retset = false, fn_stats = 0 '\\000', fn_extra = 0x0, fn_mcxt = 0x0,\n> fn_expr = 0x0}, numFinalArgs = 1, aggdirectargs = 0x0,\n> resulttypeLen = -1, resulttypeByVal = false, shareable = true}\n>\n> Since finalfn_oid is 0, resultVal/resultIsNull were set by the `else`\n> branch of the if (OidIsValid) in finalize_aggregate():\n>\n> else\n> {\n> /* Don't need MakeExpandedObjectReadOnly; datumCopy will copy it */\n> *resultVal = pergroupstate->transValue;\n> *resultIsNull = pergroupstate->transValueIsNull;\n> }\n>\n> (gdb) p *pergroupstate\n> $12 = {transValue = 0, transValueIsNull = false, noTransValue = false}\n>\nHere transValueIsNull shouldn't be \"true\"?\nthus, DatumCopy would be protected, for this test: \"!*resultIsNull\"\n\nregards,\nRanier Vilela\n\nEm sex., 25 de set. de 2020 às 11:30, Christoph Berg <myon@debian.org> escreveu:Re: Tom Lane\n> >  Tom> It's hardly surprising that datumCopy would segfault when given a\n> >  Tom> null \"value\" and told it is pass-by-reference. However, to get to\n> >  Tom> the datumCopy call, we must have passed the MemoryContextContains\n> >  Tom> check on that very same pointer value, and that would surely have\n> >  Tom> segfaulted as well, one would think.\n> \n> > Nope, because MemoryContextContains just returns \"false\" if passed a\n> > NULL pointer.\n> \n> Ah, right.  So you could imagine getting here if the finalfn had returned\n> PointerGetDatum(NULL) with isnull = false.  We have some aggregate\n> transfns that are capable of doing that for internal-type transvalues,\n> I think, but the finalfn never should do it.\n\nSo I had another stab at this. As expected, the 13.0 upload to\nDebian/unstable crashed again on the buildd, while a manual\neverything-should-be-the-same build succeeded. I don't know why I\ndidn't try this before, but this time I took this manual build and\nstarted a PG instance from it. 
Pasting the gs_group_1 queries made it\nsegfault instantly.\n\nSo here we are:\n\n#0  datumCopy (value=0, typLen=-1, typByVal=false) at ./build/../src/backend/utils/adt/datum.c:142\n#1  0x000002aa3bf6322e in datumCopy (value=<optimized out>, typByVal=<optimized out>, typLen=<optimized out>)\n    at ./build/../src/backend/utils/adt/datum.c:178\n#2  0x000002aa3bda6dd6 in finalize_aggregate (aggstate=aggstate@entry=0x2aa3defbfd0, peragg=peragg@entry=0x2aa3e0671f0,\n    pergroupstate=pergroupstate@entry=0x2aa3e026b78, resultVal=resultVal@entry=0x2aa3e067108, resultIsNull=0x2aa3e06712a)\n    at ./build/../src/backend/executor/nodeAgg.c:1153\n\n(gdb) p *resultVal\n$3 = 0\n(gdb) p *resultIsNull\n$6 = false\n\n(gdb) p *peragg\n$7 = {aggref = 0x2aa3deef218, transno = 2, finalfn_oid = 0, finalfn = {fn_addr = 0x0, fn_oid = 0, fn_nargs = 0, fn_strict = false,\n    fn_retset = false, fn_stats = 0 '\\000', fn_extra = 0x0, fn_mcxt = 0x0, fn_expr = 0x0}, numFinalArgs = 1, aggdirectargs = 0x0,\n  resulttypeLen = -1, resulttypeByVal = false, shareable = true}\n\nSince finalfn_oid is 0, resultVal/resultIsNull were set by the `else`\nbranch of the if (OidIsValid) in finalize_aggregate():\n\n    else\n    {\n        /* Don't need MakeExpandedObjectReadOnly; datumCopy will copy it */\n        *resultVal = pergroupstate->transValue;\n        *resultIsNull = pergroupstate->transValueIsNull;\n    }\n\n(gdb) p *pergroupstate\n$12 = {transValue = 0, transValueIsNull = false, noTransValue = false}Here transValueIsNull shouldn't be \"true\"?thus, DatumCopy would be protected, for this test: \"!*resultIsNull\" regards,Ranier Vilela", "msg_date": "Fri, 25 Sep 2020 14:36:48 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Em sex., 25 de set. de 2020 às 14:36, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em sex., 25 de set. de 2020 às 11:30, Christoph Berg <myon@debian.org>\n> escreveu:\n>\n>> Re: Tom Lane\n>> > > Tom> It's hardly surprising that datumCopy would segfault when given\n>> a\n>> > > Tom> null \"value\" and told it is pass-by-reference. However, to get\n>> to\n>> > > Tom> the datumCopy call, we must have passed the\n>> MemoryContextContains\n>> > > Tom> check on that very same pointer value, and that would surely\n>> have\n>> > > Tom> segfaulted as well, one would think.\n>> >\n>> > > Nope, because MemoryContextContains just returns \"false\" if passed a\n>> > > NULL pointer.\n>> >\n>> > Ah, right. So you could imagine getting here if the finalfn had\n>> returned\n>> > PointerGetDatum(NULL) with isnull = false. We have some aggregate\n>> > transfns that are capable of doing that for internal-type transvalues,\n>> > I think, but the finalfn never should do it.\n>>\n>> So I had another stab at this. As expected, the 13.0 upload to\n>> Debian/unstable crashed again on the buildd, while a manual\n>> everything-should-be-the-same build succeeded. I don't know why I\n>> didn't try this before, but this time I took this manual build and\n>> started a PG instance from it. 
Pasting the gs_group_1 queries made it\n>> segfault instantly.\n>>\n>> So here we are:\n>>\n>> #0 datumCopy (value=0, typLen=-1, typByVal=false) at\n>> ./build/../src/backend/utils/adt/datum.c:142\n>> #1 0x000002aa3bf6322e in datumCopy (value=<optimized out>,\n>> typByVal=<optimized out>, typLen=<optimized out>)\n>> at ./build/../src/backend/utils/adt/datum.c:178\n>> #2 0x000002aa3bda6dd6 in finalize_aggregate (aggstate=aggstate@entry=0x2aa3defbfd0,\n>> peragg=peragg@entry=0x2aa3e0671f0,\n>> pergroupstate=pergroupstate@entry=0x2aa3e026b78,\n>> resultVal=resultVal@entry=0x2aa3e067108, resultIsNull=0x2aa3e06712a)\n>> at ./build/../src/backend/executor/nodeAgg.c:1153\n>>\n>> (gdb) p *resultVal\n>> $3 = 0\n>> (gdb) p *resultIsNull\n>> $6 = false\n>>\n>> (gdb) p *peragg\n>> $7 = {aggref = 0x2aa3deef218, transno = 2, finalfn_oid = 0, finalfn =\n>> {fn_addr = 0x0, fn_oid = 0, fn_nargs = 0, fn_strict = false,\n>> fn_retset = false, fn_stats = 0 '\\000', fn_extra = 0x0, fn_mcxt =\n>> 0x0, fn_expr = 0x0}, numFinalArgs = 1, aggdirectargs = 0x0,\n>> resulttypeLen = -1, resulttypeByVal = false, shareable = true}\n>>\n>> Since finalfn_oid is 0, resultVal/resultIsNull were set by the `else`\n>> branch of the if (OidIsValid) in finalize_aggregate():\n>>\n>> else\n>> {\n>> /* Don't need MakeExpandedObjectReadOnly; datumCopy will copy it\n>> */\n>> *resultVal = pergroupstate->transValue;\n>> *resultIsNull = pergroupstate->transValueIsNull;\n>> }\n>>\n>> (gdb) p *pergroupstate\n>> $12 = {transValue = 0, transValueIsNull = false, noTransValue = false}\n>>\n> Here transValueIsNull shouldn't be \"true\"?\n> thus, DatumCopy would be protected, for this test: \"!*resultIsNull\"\n>\nObserve this excerpt (line 1129):\n/* don't call a strict function with NULL inputs */\n*resultVal = (Datum) 0;\n*resultIsNull = true;\n\nNow, it does not contradict this principle.\nIf all the values are null, they should be filled with True (1),\nand not 0 (false)?\n\nLine (4711), function ExecReScanAgg:\nMemSet(econtext->ecxt_aggvalues, 0, sizeof(Datum) * node->numaggs);\nMemSet(econtext->ecxt_aggnulls, true, sizeof(bool) * node->numaggs);\n\nzero, here, mean False, aggvalues is Null? Not.\n\nregards,\nRanier Vilela\n\nEm sex., 25 de set. de 2020 às 14:36, Ranier Vilela <ranier.vf@gmail.com> escreveu:Em sex., 25 de set. de 2020 às 11:30, Christoph Berg <myon@debian.org> escreveu:Re: Tom Lane\n> >  Tom> It's hardly surprising that datumCopy would segfault when given a\n> >  Tom> null \"value\" and told it is pass-by-reference. However, to get to\n> >  Tom> the datumCopy call, we must have passed the MemoryContextContains\n> >  Tom> check on that very same pointer value, and that would surely have\n> >  Tom> segfaulted as well, one would think.\n> \n> > Nope, because MemoryContextContains just returns \"false\" if passed a\n> > NULL pointer.\n> \n> Ah, right.  So you could imagine getting here if the finalfn had returned\n> PointerGetDatum(NULL) with isnull = false.  We have some aggregate\n> transfns that are capable of doing that for internal-type transvalues,\n> I think, but the finalfn never should do it.\n\nSo I had another stab at this. As expected, the 13.0 upload to\nDebian/unstable crashed again on the buildd, while a manual\neverything-should-be-the-same build succeeded. I don't know why I\ndidn't try this before, but this time I took this manual build and\nstarted a PG instance from it. 
Pasting the gs_group_1 queries made it\nsegfault instantly.\n\nSo here we are:\n\n#0  datumCopy (value=0, typLen=-1, typByVal=false) at ./build/../src/backend/utils/adt/datum.c:142\n#1  0x000002aa3bf6322e in datumCopy (value=<optimized out>, typByVal=<optimized out>, typLen=<optimized out>)\n    at ./build/../src/backend/utils/adt/datum.c:178\n#2  0x000002aa3bda6dd6 in finalize_aggregate (aggstate=aggstate@entry=0x2aa3defbfd0, peragg=peragg@entry=0x2aa3e0671f0,\n    pergroupstate=pergroupstate@entry=0x2aa3e026b78, resultVal=resultVal@entry=0x2aa3e067108, resultIsNull=0x2aa3e06712a)\n    at ./build/../src/backend/executor/nodeAgg.c:1153\n\n(gdb) p *resultVal\n$3 = 0\n(gdb) p *resultIsNull\n$6 = false\n\n(gdb) p *peragg\n$7 = {aggref = 0x2aa3deef218, transno = 2, finalfn_oid = 0, finalfn = {fn_addr = 0x0, fn_oid = 0, fn_nargs = 0, fn_strict = false,\n    fn_retset = false, fn_stats = 0 '\\000', fn_extra = 0x0, fn_mcxt = 0x0, fn_expr = 0x0}, numFinalArgs = 1, aggdirectargs = 0x0,\n  resulttypeLen = -1, resulttypeByVal = false, shareable = true}\n\nSince finalfn_oid is 0, resultVal/resultIsNull were set by the `else`\nbranch of the if (OidIsValid) in finalize_aggregate():\n\n    else\n    {\n        /* Don't need MakeExpandedObjectReadOnly; datumCopy will copy it */\n        *resultVal = pergroupstate->transValue;\n        *resultIsNull = pergroupstate->transValueIsNull;\n    }\n\n(gdb) p *pergroupstate\n$12 = {transValue = 0, transValueIsNull = false, noTransValue = false}Here transValueIsNull shouldn't be \"true\"?thus, DatumCopy would be protected, for this test: \"!*resultIsNull\"Observe this excerpt (line 1129):\t\t\t/* don't call a strict function with NULL inputs */\t\t\t*resultVal = (Datum) 0;\t\t\t*resultIsNull = true;Now, it does not contradict this principle.If all the values are null, they should be filled with True (1),and not 0 (false)?Line (4711), function ExecReScanAgg:\tMemSet(econtext->ecxt_aggvalues, 0, sizeof(Datum) * node->numaggs);\tMemSet(econtext->ecxt_aggnulls, true, sizeof(bool) * node->numaggs);zero, here, mean False, aggvalues is Null? Not.regards,Ranier Vilela", "msg_date": "Fri, 25 Sep 2020 15:05:16 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Christoph Berg <cb@df7cb.de> writes:\n> Ok, but given that Debian is currently targeting 22 architectures, I doubt the PostgreSQL buildfarm covers all of them with the extra JIT option, so I should probably make sure to do that here when running tests.\n\n+1. I rather doubt our farm is running this type of test on anything\nbut x86_64.\n\nOf course, we can't actually *fix* any LLVM bugs, but it'd be nice\nto know whether they're there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Sep 2020 14:11:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Hi,\n\nOn 2020-09-25 19:05:52 +0200, Christoph Berg wrote:\n> Am 25. September 2020 18:42:04 MESZ schrieb Andres Freund <andres@anarazel.de>\n> >> * jit is not exercised enough by \"make installcheck\"\n> >\n> >So far we've exercised more widely it by setting up machines that use\n> >it\n> >for all queries (by setting the config option). 
I'm doubtful it's worth\n> >doing differently.\n> \n> Ok, but given that Debian is currently targeting 22 architectures, I\n> doubt the PostgreSQL buildfarm covers all of them with the extra JIT\n> option, so I should probably make sure to do that here when running\n> tests.\n\nForcing to JIT a lot of queries that are otherwise really fast\nunfortunately has a significant time cost. Doing that on slow\narchitectures might be prohibitively slow. Kinda wonder if we shouldn't\njust restrict JIT to a few architectures that we have a bit more regular\naccess to (x86, arm, maybe also ppc?). It's not like anybody would run\nlarge analytics queries on mips.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 25 Sep 2020 12:14:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Hi,\n\nOn 2020-09-25 14:11:46 -0400, Tom Lane wrote:\n> Christoph Berg <cb@df7cb.de> writes:\n> > Ok, but given that Debian is currently targeting 22 architectures, I doubt the PostgreSQL buildfarm covers all of them with the extra JIT option, so I should probably make sure to do that here when running tests.\n> \n> +1. I rather doubt our farm is running this type of test on anything\n> but x86_64.\n\nThere's quite a few arm animals and at least one mips animal that do\nsome minimal coverage of JITing (i.e. the queries that are actually\nsomewhat expensive). I pinged two owners asking whether one of the arm\nanimals could be changed to force JITing.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 25 Sep 2020 12:23:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Re: Andres Freund\n> > > Ok, but given that Debian is currently targeting 22 architectures, I doubt the PostgreSQL buildfarm covers all of them with the extra JIT option, so I should probably make sure to do that here when running tests.\n> > \n> > +1. I rather doubt our farm is running this type of test on anything\n> > but x86_64.\n> \n> There's quite a few arm animals and at least one mips animal that do\n> some minimal coverage of JITing (i.e. the queries that are actually\n> somewhat expensive). 
I pinged two owners asking whether one of the arm\n> animals could be changed to force JITing.\n\nI pushed a change that should enable LLVM-10-JIT-testing everywhere [*]\nand (admittedly to my surprise) all other architectures passed just\nfine:\n\nhttps://buildd.debian.org/status/logs.php?pkg=postgresql-13&ver=13.0-2\n\nFor the record, the architectures with llvm disabled are these:\n\nclang-10 [!alpha !hppa !hurd-i386 !ia64 !kfreebsd-amd64 !kfreebsd-i386 !m68k !powerpc !riscv64 !s390x !sh4 !sparc64 !x32],\n\nAfter the tests I realized that LLVM 11 is also already packaged, but\ns390x still segfaults with that version.\n\nChristoph\n\n[*] apparently pgbench --temp-config=/no/such/file is not an error,\nwhich makes verifying this change a bit harder\n\n\n", "msg_date": "Mon, 28 Sep 2020 14:22:01 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Hi,\n\nOn 2020-09-28 14:22:01 +0200, Christoph Berg wrote:\n> Re: Andres Freund\n> > > > Ok, but given that Debian is currently targeting 22 architectures, I doubt the PostgreSQL buildfarm covers all of them with the extra JIT option, so I should probably make sure to do that here when running tests.\n> > > \n> > > +1. I rather doubt our farm is running this type of test on anything\n> > > but x86_64.\n> > \n> > There's quite a few arm animals and at least one mips animal that do\n> > some minimal coverage of JITing (i.e. the queries that are actually\n> > somewhat expensive). I pinged two owners asking whether one of the arm\n> > animals could be changed to force JITing.\n> \n> I pushed a change that should enable LLVM-10-JIT-testing everywhere [*]\n> and (admittedly to my surprise) all other architectures passed just\n> fine:\n> \n> https://buildd.debian.org/status/logs.php?pkg=postgresql-13&ver=13.0-2\n\nThanks!\n\n\n> For the record, the architectures with llvm disabled are these:\n> \n> clang-10 [!alpha !hppa !hurd-i386 !ia64 !kfreebsd-amd64 !kfreebsd-i386 !m68k !powerpc !riscv64 !s390x !sh4 !sparc64 !x32],\n\n!powerpc doesn't exclude ppc64, I assume?\n\n\n> After the tests I realized that LLVM 11 is also already packaged, but\n> s390x still segfaults with that version.\n> \n> Christoph\n> \n> [*] apparently pgbench --temp-config=/no/such/file is not an error,\n> which makes verifying this change a bit harder\n\npgbench? I assume you mean pg_regress?\n\nFWIW, an easy way to enable JIT for just about all tests, including tap\ntests, is to set\nPGOPTIONS='-c jit=1 -c jit_above_cost=0 ...'\nin the environment before starting the tests.\n\n\nCan a non-debian dev get access to a s390x? It'd be nice to isolate this\nenough to report a bug to LLVM - and that's probably a lot easier for me\nthan you... My guess would be that some relocation processing or such is\noff.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 13 Oct 2020 12:21:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Re: Andres Freund\n> > clang-10 [!alpha !hppa !hurd-i386 !ia64 !kfreebsd-amd64 !kfreebsd-i386 !m68k !powerpc !riscv64 !s390x !sh4 !sparc64 !x32],\n> \n> !powerpc doesn't exclude ppc64, I assume?\n\nThat's direct matches only, there's no architecture-family logic in\nthere.\n\n> > [*] apparently pgbench --temp-config=/no/such/file is not an error,\n> > which makes verifying this change a bit harder\n> \n> pgbench? 
I assume you mean pg_regress?\n\nErr yes of course.\n\n> FWIW, an easy way to enable JIT for just about all tests, including tap\n> tests, is to set\n> PGOPTIONS='-c jit=1 -c jit_above_cost=0 ...'\n> in the environment before starting the tests.\n\nOk, that might simplify the setup a bit.\n\n> Can a non-debian dev get access to a s390x? It'd be nice to isolate this\n> enough to report a bug to LLVM - and that's probably a lot easier for me\n> than you... My guess would be that some relocation processing or such is\n> off.\n\nYou already had an account there in the past I think. I'll see if I\ncan get that reactivated. Thanks for the offer!\n\nChristoph\n\n\n", "msg_date": "Tue, 13 Oct 2020 23:42:32 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Hi,\n\nChristoph helped me to get access to a s390x machine - I wasn't able to\nreproduce exactly the error he hit. Initially all tests passed, but\nafter recompiling with build flags more similar to Christop's I was able\nto hit another instance of what I assume to be the same bug.\n\nI am fairly sure that I see the problem. Before a post LLVM 10 change\nthe \"runtime linker\" for JITed code only asserted that relocations that\nneed to be performed are of a known type. Since the debian build -\ncorrectly - uses a release version of LLVM, this results in unhandled\nrelocations basically being resolved to 0.\n\nI suspect that building with LDFLAGS=\"-Wl,-z,relro -Wl,-z,now\" - which\nis what I think the debian package does - creates the types of\nrelocations that LLVM doesn't handle for elf + s390.\n\n10 release branch:\n\nvoid RuntimeDyldELF::resolveSystemZRelocation(const SectionEntry &Section,\n uint64_t Offset, uint64_t Value,\n uint32_t Type, int64_t Addend) {\n uint8_t *LocalAddress = Section.getAddressWithOffset(Offset);\n switch (Type) {\n default:\n llvm_unreachable(\"Relocation type not implemented yet!\");\n break;\n\n11/master:\n\nvoid RuntimeDyldELF::resolveSystemZRelocation(const SectionEntry &Section,\n uint64_t Offset, uint64_t Value,\n uint32_t Type, int64_t Addend) {\n uint8_t *LocalAddress = Section.getAddressWithOffset(Offset);\n switch (Type) {\n default:\n report_fatal_error(\"Relocation type not implemented yet!\");\n break;\n\nVerifying that that's the case by rebuilding against 11 (and then an\nLLVM debug build, which will take a day or two).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 14 Oct 2020 14:58:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Hi,\n\nOn 2020-10-14 14:58:35 -0700, Andres Freund wrote:\n> I suspect that building with LDFLAGS=\"-Wl,-z,relro -Wl,-z,now\" - which\n> is what I think the debian package does - creates the types of\n> relocations that LLVM doesn't handle for elf + s390.\n> \n> 10 release branch:\n> \n> void RuntimeDyldELF::resolveSystemZRelocation(const SectionEntry &Section,\n> uint64_t Offset, uint64_t Value,\n> uint32_t Type, int64_t Addend) {\n> uint8_t *LocalAddress = Section.getAddressWithOffset(Offset);\n> switch (Type) {\n> default:\n> llvm_unreachable(\"Relocation type not implemented yet!\");\n> break;\n> \n> 11/master:\n> \n> void RuntimeDyldELF::resolveSystemZRelocation(const SectionEntry &Section,\n> uint64_t Offset, uint64_t Value,\n> uint32_t Type, int64_t Addend) {\n> uint8_t *LocalAddress = Section.getAddressWithOffset(Offset);\n> switch (Type) 
{\n> default:\n> report_fatal_error(\"Relocation type not implemented yet!\");\n> break;\n> \n> Verifying that that's the case by rebuilding against 11 (and then an\n> LLVM debug build, which will take a day or two).\n\nOh dear. It's not as simple as that. The issue indeed are relocations,\nbut we don't hit those errors. The issue rather is that the systemz\nspecific relative redirection code thought that the only relative\nsymbols are functions. So it creates a stub function to redirect\nthem. Which turns out to not work well with variables like\nCurrentMemoryContext...\n\nExample debug output:\n\t\tThis is a SystemZ indirect relocation. Create a new stub function\n\t\tRelType: 20 Addend: 2 TargetName: ExecAggInitGroup\n\t\tSectionID: 0 Offset: 624\n\t\tThis is a SystemZ indirect relocation. Create a new stub function\n\t\tRelType: 26 Addend: 2 TargetName: CurrentMemoryContext\n\t\tSectionID: 0 Offset: 712\n\nOpening a bug report...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 14 Oct 2020 17:56:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Hi,\n\nOn 2020-10-14 17:56:16 -0700, Andres Freund wrote:\n> Oh dear. It's not as simple as that. The issue indeed are relocations,\n> but we don't hit those errors. The issue rather is that the systemz\n> specific relative redirection code thought that the only relative\n> symbols are functions. So it creates a stub function to redirect\n> them. Which turns out to not work well with variables like\n> CurrentMemoryContext...\n\nThat might be a problem - but the main problem causing the crash at hand\nis likely something else. The prototypes we create for\nExecAggTransReparent() were missing the 'zeroext' parameter for a the\n'isnull' attribute, because the code for copying the attributes from\nllvmjit_types.bc didn't go deep enough (i.e. I didn't quite grok the\npretty weird API). On s390x that lead to the newValue argument in\nExecAggTransReparent() having a 0 lower byte, but set higher bytes -\nwhich then *sometimes* fooled the if (!newValueIsNull) check, which\nassumed that the higher bits were unset.\n\nI have a fix for this, but I've just stared at s390 assembly code for\n~10h, never having done so before. So that'll have to wait for tomorrow.\n\nIt's quite possible that that fix would also help on other\narchitectures...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 15 Oct 2020 01:32:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Hi,\n\nOn 2020-10-15 01:32:46 -0700, Andres Freund wrote:\n> I have a fix for this, but I've just stared at s390 assembly code for\n> ~10h, never having done so before. So that'll have to wait for tomorrow.\n>\n> It's quite possible that that fix would also help on other\n> architectures...\n\nPushed now to 11-master.\n\nAuthor: Andres Freund <andres@anarazel.de>\nBranch: master [72559438f] 2020-10-15 14:29:53 -0700\nBranch: REL_13_STABLE [ae3e75aba] 2020-10-15 14:30:40 -0700\nBranch: REL_12_STABLE [c8a2bb0f1] 2020-10-15 14:31:32 -0700\nBranch: REL_11_STABLE [f3dee5b9a] 2020-10-15 15:06:16 -0700\n\n llvmjit: Also copy parameter / return value attributes from template functions.\n\n Previously we only copied the function attributes. 
That caused problems at\n least on s390x: Because we didn't copy the 'zeroext' attribute for\n ExecAggTransReparent()'s *IsNull parameters, expressions invoking it didn't\n ensure that the upper bytes of the registers were zeroed. In the - relatively\n rare - cases where not, ExecAggTransReparent() wrongly ended up in the\n newValueIsNull branch due to the register not being zero. Subsequently causing\n a crash.\n\n It's quite possible that this would cause problems on other platforms, and in\n other places than just ExecAggTransReparent() on s390x.\n\n Thanks to Christoph (and the Debian project) for providing me with access to a\n s390x machine, allowing me to debug this.\n\n Reported-By: Christoph Berg\n Author: Andres Freund\n Discussion: https://postgr.es/m/20201015083246.kie5726xerdt3ael@alap3.anarazel.de\n Backpatch: 11-, where JIT was added\n\n\nI had a successful check-world run with maximum jittery on s390x. But I\ndid hit the issue in different places than you did, so it'd be cool if\nyou could re-enable JIT for s390x - I think you have a package tracking\nHEAD?\n\nThanks again Christoph!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 15 Oct 2020 15:29:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Hi,\n\nOn 2020-10-15 15:29:24 -0700, Andres Freund wrote:\n> Pushed now to 11-master.\n\nUgh - there's a failure with an old LLVM version (3.9):\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dragonet&dt=2020-10-15%2022%3A24%3A04\n\nNeed to rebuild that locally to reproduce.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 15 Oct 2020 15:37:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Hi,\n\nOn 2020-10-15 15:37:01 -0700, Andres Freund wrote:\n> On 2020-10-15 15:29:24 -0700, Andres Freund wrote:\n> > Pushed now to 11-master.\n> \n> Ugh - there's a failure with an old LLVM version (3.9):\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dragonet&dt=2020-10-15%2022%3A24%3A04\n> \n> Need to rebuild that locally to reproduce.\n\nIt's a bug that was fixed in LLVM 4, but too late to be backported to\n3.9.\n\nThe easiest seems to be to just use a wrapper function that does the\nnecessary pre-checks. 
Something like the below (in llvmjit_wrap.cpp).\n\nSince the wrapper still needs to call into\nLLVMGetAttributeCountAtIndexPG, it seems easier to just use the seperate\nfunction name, rather than #define'ing LLVMGetAttributeCountAtIndex() to\nthe PG version?\n\n/*\n * Like LLVM's LLVMGetAttributeCountAtIndex(), works around a bug in LLVM 3.9.\n *\n * In LLVM <= 3.9, LLVMGetAttributeCountAtIndex() segfaults if there are no\n * attributes at an index (fixed in LLVM commit ce9bb1097dc2).\n */\nunsigned\nLLVMGetAttributeCountAtIndexPG(LLVMValueRef F, uint32 Idx)\n{\n\t/*\n\t * This is more expensive, so only do when using a problematic LLVM\n\t * version.\n\t */\n#if LLVM_VERSION_MAJOR < 4\n\tif (!llvm::unwrap<llvm::Function>(F)->getAttributes().hasAttributes(Idx))\n\t\treturn 0;\n#endif\n\n\t/*\n\t * There is no nice public API to determine the count nicely, so just\n\t * always fall back to LLVM's C API.\n\t */\n\treturn LLVMGetAttributeCountAtIndex(F, Idx);\n}\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 15 Oct 2020 17:12:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Hi,\n\nOn 2020-10-15 17:12:54 -0700, Andres Freund wrote:\n> On 2020-10-15 15:37:01 -0700, Andres Freund wrote:\n> It's a bug that was fixed in LLVM 4, but too late to be backported to\n> 3.9.\n> \n> The easiest seems to be to just use a wrapper function that does the\n> necessary pre-checks. Something like the below (in llvmjit_wrap.cpp).\n> \n> Since the wrapper still needs to call into\n> LLVMGetAttributeCountAtIndexPG, it seems easier to just use the seperate\n> function name, rather than #define'ing LLVMGetAttributeCountAtIndex() to\n> the PG version?\n> \n> /*\n> * Like LLVM's LLVMGetAttributeCountAtIndex(), works around a bug in LLVM 3.9.\n> *\n> * In LLVM <= 3.9, LLVMGetAttributeCountAtIndex() segfaults if there are no\n> * attributes at an index (fixed in LLVM commit ce9bb1097dc2).\n> */\n> unsigned\n> LLVMGetAttributeCountAtIndexPG(LLVMValueRef F, uint32 Idx)\n> {\n> \t/*\n> \t * This is more expensive, so only do when using a problematic LLVM\n> \t * version.\n> \t */\n> #if LLVM_VERSION_MAJOR < 4\n> \tif (!llvm::unwrap<llvm::Function>(F)->getAttributes().hasAttributes(Idx))\n> \t\treturn 0;\n> #endif\n> \n> \t/*\n> \t * There is no nice public API to determine the count nicely, so just\n> \t * always fall back to LLVM's C API.\n> \t */\n> \treturn LLVMGetAttributeCountAtIndex(F, Idx);\n> }\n\nSeems to have calmed the buildfarm, without negative consequences so far.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 15 Oct 2020 20:27:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" }, { "msg_contents": "Re: Andres Freund\n> I had a successful check-world run with maximum jittery on s390x. But I\n> did hit the issue in different places than you did, so it'd be cool if\n> you could re-enable JIT for s390x - I think you have a package tracking\n> HEAD?\n\nCool, thanks!\n\nI'm tracking PG14 head with apt.postgresql.org, but that doesn't have\ns390x.\n\nI'll pull the patches for PG13, re-enable JIT on some more\narchitectures, and use the opportunity to bump the LLVM version used\nto 11.\n\nChristoph\n\n\n", "msg_date": "Fri, 16 Oct 2020 11:19:19 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: gs_group_1 crashing on 13beta2/s390x" } ]
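As a concrete way to exercise the code paths touched by this fix, the PGOPTIONS trick quoted earlier in the thread has an in-session equivalent. The snippet below is only an illustrative sketch: the particular cost settings and the toy query are assumptions chosen for demonstration, not part of the committed fix, and it only has an effect on servers built with LLVM support.

SET jit = on;                     -- enable JIT (no effect on builds without --with-llvm)
SET jit_above_cost = 0;           -- JIT-compile expressions even for cheap plans
SET jit_optimize_above_cost = 0;  -- also run the LLVM optimizer
SET jit_inline_above_cost = 0;    -- and inline support functions

-- Any aggregate now runs through JITed transition code, the area where the
-- missing zeroext attribute crashed s390x; EXPLAIN ANALYZE reports a "JIT:"
-- section when compilation actually happened.
EXPLAIN (ANALYZE, COSTS OFF)
SELECT sum(g), count(*) FROM generate_series(1, 10000) g;

Running check-world under settings like these amounts to the "maximum jittery" run described above, which is how the failures surfaced in different places for different people.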
[ { "msg_contents": "According to the documentation, the filename given in file_fdw must be an\nabsolute path. Hwever, it works perfectly fine with a relative path.\n\nSo either the documentation is wrong, or the code is wrong. It behaves the\nsame at least back to 9.5, I did not try it further back than that.\n\nI can't find a reference to the code that limits this. AFAICT the\ndocumentation has been there since day 1.\n\nQuestion is, which one is right. Is there a reason we'd want to restrict it\nto absolute pathnames?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nAccording to the documentation, the filename given in file_fdw must be an absolute path. Hwever, it works perfectly fine with a relative path.So either the documentation is wrong, or the code is wrong. It behaves the same at least back to 9.5, I did not try it further back than that.I can't find a reference to the code that limits this. AFAICT the documentation has been there since day 1.Question is, which one is right. Is there a reason we'd want to restrict it to absolute pathnames?--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Wed, 15 Jul 2020 13:22:21 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "file_fdw vs relative paths" }, { "msg_contents": "On Wed, Jul 15, 2020 at 01:22:21PM +0200, Magnus Hagander wrote:\n> According to the documentation, the filename given in file_fdw must be an\n> absolute path. Hwever, it works perfectly fine with a relative path.\n> \n> So either the documentation is wrong, or the code is wrong.�It behaves the same\n> at least back to 9.5, I did not try it further back than that.\n\nYes, I tested back to 9.5 too:\n\n\tCREATE EXTENSION file_fdw;\n\tCREATE SERVER pgconf FOREIGN DATA WRAPPER file_fdw;\n\tCREATE FOREIGN TABLE pgconf (line TEXT) SERVER pgconf OPTIONS ( filename\n\t\t'postgresql.conf', format 'text', delimiter E'\\x7f' );\n\tSELECT * FROM pgconf;\n\t # -----------------------------\n\t # PostgreSQL configuration file\n\t # -----------------------------\n\t #\n\t # This file consists of lines of the form:\n\t...\n\n> I can't find a reference to the code that limits this. AFAICT the documentation\n> has been there since day 1.\n> \n> Question is, which one is right. 
Is there a reason we'd want to restrict it to\n> absolute pathnames?\n\nI think it should work just like COPY, which allows relative paths; doc\npatch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Mon, 24 Aug 2020 20:26:12 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: file_fdw vs relative paths" }, { "msg_contents": "On Aug 25, 2020, at 8:26 AM, Bruce Momjian <bruce@momjian.us<mailto:bruce@momjian.us>> wrote:\r\n\r\nYes, I tested back to 9.5 too:\r\n\r\nCREATE EXTENSION file_fdw;\r\nCREATE SERVER pgconf FOREIGN DATA WRAPPER file_fdw;\r\nCREATE FOREIGN TABLE pgconf (line TEXT) SERVER pgconf OPTIONS ( filename\r\n'postgresql.conf', format 'text', delimiter E'\\x7f' );\r\nSELECT * FROM pgconf;\r\n # -----------------------------\r\n # PostgreSQL configuration file\r\n # -----------------------------\r\n #\r\n # This file consists of lines of the form:\r\n…\r\n\r\nThe file_fdw extension was introduced by commit 7c5d0ae7078456bfeedb2103c45b9a32285c2631,\r\nand I tested it supports relative paths. This is a doc bug.\r\n\r\n--\r\nJapin Li\r\n\r\n\n\n\n\n\n\n\n\n\nOn Aug 25, 2020, at 8:26 AM, Bruce Momjian <bruce@momjian.us> wrote:\n\nYes,\r\n I tested back to 9.5 too:\n\nCREATE\r\n EXTENSION file_fdw;\nCREATE\r\n SERVER pgconf FOREIGN DATA WRAPPER file_fdw;\nCREATE\r\n FOREIGN TABLE pgconf (line TEXT) SERVER pgconf OPTIONS ( filename\n'postgresql.conf',\r\n format 'text', delimiter E'\\x7f' );\nSELECT\r\n * FROM pgconf;\n #\r\n -----------------------------\n #\r\n PostgreSQL configuration file\n #\r\n -----------------------------\n #\n #\r\n This file consists of lines of the form:\n…\n\n\n\n\n\nThe file_fdw extension was introduced by commit 7c5d0ae7078456bfeedb2103c45b9a32285c2631,\nand I tested it supports relative paths.  This is a doc bug.\n\n\n\n--\nJapin Li", "msg_date": "Tue, 25 Aug 2020 07:28:41 +0000", "msg_from": "Li Japin <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: file_fdw vs relative paths" }, { "msg_contents": "On Tue, Aug 25, 2020 at 9:28 AM Li Japin <japinli@hotmail.com> wrote:\n\n>\n> On Aug 25, 2020, at 8:26 AM, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> Yes, I tested back to 9.5 too:\n>\n> CREATE EXTENSION file_fdw;\n> CREATE SERVER pgconf FOREIGN DATA WRAPPER file_fdw;\n> CREATE FOREIGN TABLE pgconf (line TEXT) SERVER pgconf OPTIONS ( filename\n> 'postgresql.conf', format 'text', delimiter E'\\x7f' );\n> SELECT * FROM pgconf;\n> # -----------------------------\n> # PostgreSQL configuration file\n> # -----------------------------\n> #\n> # This file consists of lines of the form:\n> …\n>\n>\n> The file_fdw extension was introduced by\n> commit 7c5d0ae7078456bfeedb2103c45b9a32285c2631,\n> and I tested it supports relative paths. This is a doc bug.\n>\n>\nWell technically it can also have been a code bug but yes if so it is one\nthat has lived since day 1. 
But given that nobody has chimed in to say they\nthink that's what it is for a month, I think we'll conclude it's a docs\nbug.\n\nBruce, I've applied and backpatched your docs patch for this.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Aug 25, 2020 at 9:28 AM Li Japin <japinli@hotmail.com> wrote:\n\n\n\n\nOn Aug 25, 2020, at 8:26 AM, Bruce Momjian <bruce@momjian.us> wrote:\n\nYes,\n I tested back to 9.5 too:\n\nCREATE\n EXTENSION file_fdw;\nCREATE\n SERVER pgconf FOREIGN DATA WRAPPER file_fdw;\nCREATE\n FOREIGN TABLE pgconf (line TEXT) SERVER pgconf OPTIONS ( filename\n'postgresql.conf',\n format 'text', delimiter E'\\x7f' );\nSELECT\n * FROM pgconf;\n #\n -----------------------------\n #\n PostgreSQL configuration file\n #\n -----------------------------\n #\n #\n This file consists of lines of the form:\n…\n\n\n\n\n\nThe file_fdw extension was introduced by commit 7c5d0ae7078456bfeedb2103c45b9a32285c2631,\nand I tested it supports relative paths.  This is a doc bug.\n\nWell technically it can also have been a code bug but yes if so it is one that has lived since day 1. But given that nobody has chimed in to say they think that's what it is for a month, I think we'll conclude it's a docs bug. Bruce, I've applied and backpatched your docs patch for this.--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Mon, 31 Aug 2020 13:10:58 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: file_fdw vs relative paths" }, { "msg_contents": "On Mon, Aug 31, 2020 at 1:10 PM Magnus Hagander <magnus@hagander.net> wrote:\n\n>\n>\n> On Tue, Aug 25, 2020 at 9:28 AM Li Japin <japinli@hotmail.com> wrote:\n>\n>>\n>> On Aug 25, 2020, at 8:26 AM, Bruce Momjian <bruce@momjian.us> wrote:\n>>\n>> Yes, I tested back to 9.5 too:\n>>\n>> CREATE EXTENSION file_fdw;\n>> CREATE SERVER pgconf FOREIGN DATA WRAPPER file_fdw;\n>> CREATE FOREIGN TABLE pgconf (line TEXT) SERVER pgconf OPTIONS ( filename\n>> 'postgresql.conf', format 'text', delimiter E'\\x7f' );\n>> SELECT * FROM pgconf;\n>> # -----------------------------\n>> # PostgreSQL configuration file\n>> # -----------------------------\n>> #\n>> # This file consists of lines of the form:\n>> …\n>>\n>>\n>> The file_fdw extension was introduced by\n>> commit 7c5d0ae7078456bfeedb2103c45b9a32285c2631,\n>> and I tested it supports relative paths. This is a doc bug.\n>>\n>>\n> Well technically it can also have been a code bug but yes if so it is one\n> that has lived since day 1. But given that nobody has chimed in to say they\n> think that's what it is for a month, I think we'll conclude it's a docs\n> bug.\n>\n> Bruce, I've applied and backpatched your docs patch for this.\n>\n>\nGah, and of course right after doing that, I remembered I wanted to get a\nsecond change in :) To solve the \"who's this Josh\" question, I suggest we\nalso change the example to point to the data/log directory which is likely\nto exist in a lot more of the cases. I keep getting people who ask \"who is\njosh\" based on the /home/josh path. 
Not that it's that important, but...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Mon, 31 Aug 2020 13:16:05 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: file_fdw vs relative paths" }, { "msg_contents": "On Mon, Aug 31, 2020 at 01:16:05PM +0200, Magnus Hagander wrote:\n> Bruce, I've applied and backpatched your docs patch for this.\n> \n> Gah, and of course right after doing that, I remembered I wanted to get a\n> second change in :) To solve the \"who's this Josh\" question, I suggest we also\n> change the example to point to the data/log directory which is likely to exist\n> in a lot more of the cases. I keep getting people who ask \"who is josh\" based\n> on the /home/josh path. Not that it's that important, but...�\n\nThanks, and agreed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 31 Aug 2020 11:03:45 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: file_fdw vs relative paths" }, { "msg_contents": "On Mon, Aug 31, 2020 at 5:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Mon, Aug 31, 2020 at 01:16:05PM +0200, Magnus Hagander wrote:\n> > Bruce, I've applied and backpatched your docs patch for this.\n> >\n> > Gah, and of course right after doing that, I remembered I wanted to get a\n> > second change in :) To solve the \"who's this Josh\" question, I suggest\n> we also\n> > change the example to point to the data/log directory which is likely to\n> exist\n> > in a lot more of the cases. I keep getting people who ask \"who is josh\"\n> based\n> > on the /home/josh path. Not that it's that important, but...\n>\n> Thanks, and agreed.\n>\n>\nThanks, applied. I backpacked to 13 but didn't bother with the rest as it's\nnot technically *wrong* before..\n\n//Magnus\n\nOn Mon, Aug 31, 2020 at 5:03 PM Bruce Momjian <bruce@momjian.us> wrote:On Mon, Aug 31, 2020 at 01:16:05PM +0200, Magnus Hagander wrote:\n>     Bruce, I've applied and backpatched your docs patch for this.\n> \n> Gah, and of course right after doing that, I remembered I wanted to get a\n> second change in :) To solve the \"who's this Josh\" question, I suggest we also\n> change the example to point to the data/log directory which is likely to exist\n> in a lot more of the cases. I keep getting people who ask \"who is josh\" based\n> on the /home/josh path. Not that it's that important, but... \n\nThanks, and agreed.Thanks, applied. 
I backpacked to 13 but didn't bother with the rest as it's not technically *wrong* before..//Magnus", "msg_date": "Sun, 6 Sep 2020 19:31:08 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: file_fdw vs relative paths" }, { "msg_contents": "Hi\n\nOn 2020/09/07 2:31, Magnus Hagander wrote:\n> On Mon, Aug 31, 2020 at 5:03 PM Bruce Momjian <bruce@momjian.us <mailto:bruce@momjian.us>> wrote:\n> \n> On Mon, Aug 31, 2020 at 01:16:05PM +0200, Magnus Hagander wrote:\n> >     Bruce, I've applied and backpatched your docs patch for this.\n> >\n> > Gah, and of course right after doing that, I remembered I wanted to get a\n> > second change in :) To solve the \"who's this Josh\" question, I suggest we also\n> > change the example to point to the data/log directory which is likely to exist\n> > in a lot more of the cases. I keep getting people who ask \"who is josh\" based\n> > on the /home/josh path. Not that it's that important, but...\n> \n> Thanks, and agreed.\n> \n> \n> Thanks, applied. I backpacked to 13 but didn't bother with the rest as it's not technically *wrong* before..\n\nIt's missing the leading single quote from the filename parameter:\n\n diff --git a/doc/src/sgml/file-fdw.sgml b/doc/src/sgml/file-fdw.sgml\n (...)\n -OPTIONS ( filename '/home/josh/data/log/pglog.csv', format 'csv' );\n +OPTIONS ( filename log/pglog.csv', format 'csv' );\n (...)\n\n\nRegards\n\n\nIan Barwick\n\n\n-- \nIan Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Wed, 9 Sep 2020 10:39:06 +0900", "msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: file_fdw vs relative paths" }, { "msg_contents": "On Wed, Sep 9, 2020 at 3:39 AM Ian Barwick <ian.barwick@2ndquadrant.com>\nwrote:\n\n> Hi\n>\n> On 2020/09/07 2:31, Magnus Hagander wrote:\n> > On Mon, Aug 31, 2020 at 5:03 PM Bruce Momjian <bruce@momjian.us <mailto:\n> bruce@momjian.us>> wrote:\n> >\n> > On Mon, Aug 31, 2020 at 01:16:05PM +0200, Magnus Hagander wrote:\n> > > Bruce, I've applied and backpatched your docs patch for this.\n> > >\n> > > Gah, and of course right after doing that, I remembered I wanted\n> to get a\n> > > second change in :) To solve the \"who's this Josh\" question, I\n> suggest we also\n> > > change the example to point to the data/log directory which is\n> likely to exist\n> > > in a lot more of the cases. I keep getting people who ask \"who is\n> josh\" based\n> > > on the /home/josh path. Not that it's that important, but...\n> >\n> > Thanks, and agreed.\n> >\n> >\n> > Thanks, applied. 
I backpacked to 13 but didn't bother with the rest as\n> it's not technically *wrong* before..\n>\n> It's missing the leading single quote from the filename parameter:\n>\n> diff --git a/doc/src/sgml/file-fdw.sgml b/doc/src/sgml/file-fdw.sgml\n> (...)\n> -OPTIONS ( filename '/home/josh/data/log/pglog.csv', format 'csv' );\n> +OPTIONS ( filename log/pglog.csv', format 'csv' );\n> (...)\n>\n\nGAH.\n\nThanks!\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Wed, Sep 9, 2020 at 3:39 AM Ian Barwick <ian.barwick@2ndquadrant.com> wrote:Hi\n\nOn 2020/09/07 2:31, Magnus Hagander wrote:\n> On Mon, Aug 31, 2020 at 5:03 PM Bruce Momjian <bruce@momjian.us <mailto:bruce@momjian.us>> wrote:\n> \n>     On Mon, Aug 31, 2020 at 01:16:05PM +0200, Magnus Hagander wrote:\n>      >     Bruce, I've applied and backpatched your docs patch for this.\n>      >\n>      > Gah, and of course right after doing that, I remembered I wanted to get a\n>      > second change in :) To solve the \"who's this Josh\" question, I suggest we also\n>      > change the example to point to the data/log directory which is likely to exist\n>      > in a lot more of the cases. I keep getting people who ask \"who is josh\" based\n>      > on the /home/josh path. Not that it's that important, but...\n> \n>     Thanks, and agreed.\n> \n> \n> Thanks, applied. I backpacked to 13 but didn't bother with the rest as it's not technically *wrong* before..\n\nIt's missing the leading single quote from the filename parameter:\n\n     diff --git a/doc/src/sgml/file-fdw.sgml b/doc/src/sgml/file-fdw.sgml\n     (...)\n     -OPTIONS ( filename '/home/josh/data/log/pglog.csv', format 'csv' );\n     +OPTIONS ( filename log/pglog.csv', format 'csv' );\n     (...)GAH.Thanks! --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Wed, 9 Sep 2020 12:42:51 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: file_fdw vs relative paths" } ]
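A minimal sketch tying the doc change back to observed behaviour: on an installation where postgresql.conf lives inside the data directory (the initdb default; packaged builds may keep it elsewhere), the relative and absolute spellings below read the same file, because a relative filename is interpreted against the server process's working directory (normally the data directory), just as COPY treats relative paths. The server name, table names, and the absolute path are made-up examples.

CREATE EXTENSION file_fdw;
CREATE SERVER fileserver FOREIGN DATA WRAPPER file_fdw;

-- relative path, resolved against the server's working directory
CREATE FOREIGN TABLE pgconf_rel (line text) SERVER fileserver
  OPTIONS (filename 'postgresql.conf', format 'text', delimiter E'\x7f');

-- the equivalent absolute path on this hypothetical installation
CREATE FOREIGN TABLE pgconf_abs (line text) SERVER fileserver
  OPTIONS (filename '/var/lib/postgresql/data/postgresql.conf', format 'text', delimiter E'\x7f');

-- both definitions return the same rows
SELECT count(*) FROM pgconf_rel;
SELECT count(*) FROM pgconf_abs;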
[ { "msg_contents": "Hi,\n I test some SQL in the latest Postgres master branch code (we find these issues when\ndeveloping Greenplum database in the PR https://github.com/greenplum-db/gpdb/pull/10418,\nand my colleague come up with the following cases in Postgres):\n\ncreate table t3 (c1 text, c2 text);\nCREATE TABLE\ninsert into t3\nselect\n 'fhufehwiohewiuewhuhwiufhwifhweuhfwu', --random data\n 'fiowehufwhfyegygfewpfwwfeuhwhufwh' --random data\nfrom generate_series(1, 10000000) i;\nINSERT 0 10000000\nanalyze t3;\nANALYZE\ncreate table t4 (like t3);\nCREATE TABLE\ninsert into t4 select * from t4;\nINSERT 0 0\ninsert into t4 select * from t3;\nINSERT 0 10000000\nanalyze t4;\nANALYZE\nset enable_hashjoin to off;\nSET\nexplain (costs off)\nselect count(*) from t3, t4\nwhere t3.c1 like '%sss'\n and timeofday() = t4.c1 and t3.c1 = t4.c1;\n QUERY PLAN\n--------------------------------------------------------\n Finalize Aggregate\n -> Gather\n Workers Planned: 2\n -> Partial Aggregate\n -> Nested Loop\n Join Filter: (t3.c1 = t4.c1)\n -> Parallel Seq Scan on t3\n Filter: (c1 ~~ '%sss'::text)\n -> Seq Scan on t4\n Filter: (timeofday() = c1)\n(10 rows)\n\nexplain (verbose, costs off)\nselect count(*)\nfrom\n t3,\n (select *, timeofday() as x from t4 ) t4\nwhere t3.c1 like '%sss' and\n timeofday() = t4.c1 and t3.c1 = t4.c1;\n QUERY PLAN\n------------------------------------------------------------------\n Finalize Aggregate\n Output: count(*)\n -> Gather\n Output: (PARTIAL count(*))\n Workers Planned: 2\n -> Partial Aggregate\n Output: PARTIAL count(*)\n -> Nested Loop\n Join Filter: (t3.c1 = t4.c1)\n -> Parallel Seq Scan on public.t3\n Output: t3.c1, t3.c2\n Filter: (t3.c1 ~~ '%sss'::text)\n -> Seq Scan on public.t4\n Output: t4.c1, NULL::text, timeofday()\n Filter: (timeofday() = t4.c1)\n(15 rows)\n\n\nFocus on the last two plans, the function timeofday is\nvolatile but paralle-safe. And Postgres outputs two parallel\nplan.\n\n\nThe first plan:\n Finalize Aggregate\n -> Gather\n Workers Planned: 2\n -> Partial Aggregate\n -> Nested Loop\n Join Filter: (t3.c1 = t4.c1)\n -> Parallel Seq Scan on t3\n Filter: (c1 ~~ '%sss'::text)\n -> Seq Scan on t4\n Filter: (timeofday() = c1)\n\nThe join's left tree is parallel scan and the right tree is seq scan.\nThis algorithm is correct using the distribute distributive law of\ndistributed join:\n A = [A1 A2 A3...An], B then A join B = gather( (A1 join B) (A2 join B) ... (An join B) )\n\nThe correctness of the above law should have a pre-assumption:\n The data set of B is the same in each join: (A1 join B) (A2 join B) ... (An join B)\n\nBut things get complicated when volatile functions come in. Timeofday is just\nan example to show the idea. The core is volatile functions can return different\nresults on successive calls with the same arguments. 
Thus the following piece,\nthe right tree of the join\n -> Seq Scan on t4\n Filter: (timeofday() = c1)\ncan not be considered consistent everywhere in the scan workers.\n\nThe second plan\n\n Finalize Aggregate\n Output: count(*)\n -> Gather\n Output: (PARTIAL count(*))\n Workers Planned: 2\n -> Partial Aggregate\n Output: PARTIAL count(*)\n -> Nested Loop\n Join Filter: (t3.c1 = t4.c1)\n -> Parallel Seq Scan on public.t3\n Output: t3.c1, t3.c2\n Filter: (t3.c1 ~~ '%sss'::text)\n -> Seq Scan on public.t4\n Output: t4.c1, NULL::text, timeofday()\n Filter: (timeofday() = t4.c1)\n\nhave voltile projections in the right tree of the nestloop:\n\n -> Seq Scan on public.t4\n Output: t4.c1, NULL::text, timeofday()\n Filter: (timeofday() = t4.c1)\n\nIt should not be taken as consistent in different workers.\n\n------------------------------------------------------------------------------------------\n\nThe above are just two cases we find today. And it should be enough to\nshow the core issue to have a discussion here.\n\nThe question is, should we consider volatile functions when generating\nparallel plans?\n\n------------------------------------------------------------------------------------------\nFYI, some plan diffs of Greenplum can be found here: https://www.diffnow.com/report/etulf\n\n\n\n\n\n\n\n\n\n\n\nHi,\n\n    I test some SQL in the latest Postgres master branch code (we find these issues when\n\ndeveloping Greenplum database in the PR https://github.com/greenplum-db/gpdb/pull/10418, \n\nand my colleague come up with the following cases in Postgres):\n\n  \n\n\ncreate table t3 (c1 text, c2 text);\n\nCREATE TABLE\n\n\ninsert into t3\n\n\nselect\n\n\n  'fhufehwiohewiuewhuhwiufhwifhweuhfwu', --random data\n\n\n  'fiowehufwhfyegygfewpfwwfeuhwhufwh' --random data\n\n\nfrom generate_series(1, 10000000) i;\n\n\nINSERT 0 10000000\n\n\nanalyze t3;\n\n\nANALYZE\n\n\ncreate table t4 (like t3);\n\n\nCREATE TABLE\n\n\ninsert into t4 select * from t4;\n\n\nINSERT 0 0\n\n\ninsert into t4 select * from t3;\n\n\nINSERT 0 10000000\n\n\nanalyze t4;\n\n\nANALYZE\n\n\nset enable_hashjoin to off;\n\n\nSET\n\n\nexplain (costs off)\n\n\nselect count(*) from t3, t4\n\n\nwhere t3.c1 like '%sss'\n\n\n      and timeofday() = t4.c1 and t3.c1 = t4.c1;\n\n\n                       QUERY PLAN\n\n\n--------------------------------------------------------\n\n\n Finalize Aggregate\n\n\n   ->  Gather\n\n\n         Workers Planned: 2\n\n\n         ->  Partial Aggregate\n\n\n               ->  Nested Loop\n\n\n                     Join Filter: (t3.c1 = t4.c1)\n\n\n                     ->  Parallel Seq Scan on t3\n\n\n                           Filter: (c1 ~~ '%sss'::text)\n\n\n                     ->  Seq Scan on t4\n\n\n                           Filter: (timeofday() = c1)\n\n\n(10 rows)\n\n\n\n\n\n\nexplain (verbose, costs off)\n\n\nselect count(*)\n\n\nfrom\n\n\n  t3,\n\n\n  (select *, timeofday() as x from t4 ) t4\n\n\nwhere t3.c1 like '%sss' and\n\n\n      timeofday() = t4.c1 and t3.c1 = t4.c1;\n\n\n                            QUERY PLAN\n\n\n------------------------------------------------------------------\n\n\n Finalize Aggregate\n\n\n   Output: count(*)\n\n\n   ->  Gather\n\n\n         Output: (PARTIAL count(*))\n\n\n         Workers Planned: 2\n\n\n         ->  Partial Aggregate\n\n\n               Output: PARTIAL count(*)\n\n\n               ->  Nested Loop\n\n\n                     Join Filter: (t3.c1 = t4.c1)\n\n\n                     ->  Parallel Seq Scan on public.t3\n\n\n                           Output: 
t3.c1, t3.c2\n\n\n                           Filter: (t3.c1 ~~ '%sss'::text)\n\n\n                     ->  Seq Scan on public.t4\n\n\n                           Output: t4.c1, NULL::text, timeofday()\n\n\n                           Filter: (timeofday() = t4.c1)\n\n\n(15 rows)\n\n\n\n\n\n     \n\nFocus on the last two plans, the function timeofday is\n\nvolatile but paralle-safe. And Postgres outputs two parallel\n\nplan. \n\n    \n\n\n\n\nThe first plan:\n\n\n\n Finalize Aggregate\n\n\n\n\n   ->  Gather\n\n\n\n\n         Workers Planned: 2\n\n\n\n\n         ->  Partial Aggregate\n\n\n\n\n               ->  Nested Loop\n\n\n\n\n                     Join Filter: (t3.c1 = t4.c1)\n\n\n\n\n                     ->  Parallel Seq Scan on t3\n\n\n\n\n                           Filter: (c1 ~~ '%sss'::text)\n\n\n\n\n                     ->  Seq Scan on t4\n\n\n\n\n                           Filter: (timeofday() = c1)\n\n\n\n\n\nThe join's left tree is parallel scan and the right tree is seq scan.\nThis algorithm is correct using the distribute distributive law of\ndistributed join: \n       A = [A1 A2 A3...An], B then A join B = gather( (A1 join B) (A2\n join B) ... (An join B) )\n\n\n\n\n\nThe correctness of the above law should have a pre-assumption:\n\n      The data set of B is the same in each join: (A1\n join B) (A2 join B) ... (An\n join B)\n\n\nBut\n things get complicated when volatile functions come in. Timeofday is just\nan\n example to show the idea. The core is volatile functions  can return different\nresults on successive calls with the same arguments. Thus the following piece,\nthe right tree of the join\n\n\n\n                     ->  Seq Scan on t4\n\n\n\n\n                           Filter: (timeofday() = c1)\ncan not be considered consistent everywhere in the scan workers.\n\n\nThe second plan \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Finalize Aggregate\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n   Output: count(*)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n   ->  Gather\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n         Output: (PARTIAL count(*))\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n         Workers Planned: 2\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n         ->  Partial Aggregate\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n               Output: PARTIAL count(*)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n               ->  Nested Loop\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n                     Join Filter: (t3.c1 = t4.c1)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n                     ->  Parallel Seq Scan on public.t3\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n                           Output: t3.c1, t3.c2\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n                           Filter: (t3.c1 ~~ '%sss'::text)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n                     ->  Seq Scan on public.t4\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n                           Output: t4.c1, NULL::text, timeofday()\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n                           Filter: (timeofday() = t4.c1)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nhave voltile projections in the right tree of the nestloop:\n \n\n\n\n\n\n\n\n\n                     ->  Seq Scan on public.t4\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n                           Output: t4.c1, NULL::text, timeofday()\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n                           Filter: (timeofday() = t4.c1)\n\n\nIt should not be taken as consistent in different workers.\n\n\n------------------------------------------------------------------------------------------\n\n\n\nThe above are just two cases we find today. 
And it should be enough to \nshow the core issue to have a discussion here.\n\n\nThe question is, should we consider volatile functions when generating\nparallel plans?\n\n\n\n------------------------------------------------------------------------------------------\nFYI, some plan diffs of Greenplum can be found here: https://www.diffnow.com/report/etulf", "msg_date": "Wed, 15 Jul 2020 12:44:38 +0000", "msg_from": "Zhenghua Lyu <zlyu@vmware.com>", "msg_from_op": true, "msg_subject": "Volatile Functions in Parallel Plans" }, { "msg_contents": "On Wed, Jul 15, 2020 at 6:14 PM Zhenghua Lyu <zlyu@vmware.com> wrote:\n>\n>\n> The first plan:\n>\n> Finalize Aggregate\n> -> Gather\n> Workers Planned: 2\n> -> Partial Aggregate\n> -> Nested Loop\n> Join Filter: (t3.c1 = t4.c1)\n> -> Parallel Seq Scan on t3\n> Filter: (c1 ~~ '%sss'::text)\n> -> Seq Scan on t4\n> Filter: (timeofday() = c1)\n>\n> The join's left tree is parallel scan and the right tree is seq scan.\n> This algorithm is correct using the distribute distributive law of\n> distributed join:\n> A = [A1 A2 A3...An], B then A join B = gather( (A1 join B) (A2 join B) ... (An join B) )\n>\n> The correctness of the above law should have a pre-assumption:\n> The data set of B is the same in each join: (A1 join B) (A2 join B) ... (An join B)\n>\n> But things get complicated when volatile functions come in. Timeofday is just\n> an example to show the idea. The core is volatile functions can return different\n> results on successive calls with the same arguments. Thus the following piece,\n> the right tree of the join\n> -> Seq Scan on t4\n> Filter: (timeofday() = c1)\n> can not be considered consistent everywhere in the scan workers.\n>\n\nBut this won't be consistent even for non-parallel plans. I mean to\nsay for each loop of join the \"Seq Scan on t4\" would give different\nresults. 
Currently, we don't consider volatile functions as\nparallel-safe by default.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 Jul 2020 09:37:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Volatile Functions in Parallel Plans" }, { "msg_contents": "Hi, thanks for your reply.\nBut this won't be consistent even for non-parallel plans.\nIf we do not use the distributed law of parallel join, it seems\nOK.\n\nIf we generate a parallel plan using the distributed law of the join,\nthen this transformation's pre-assumption might be broken.\n\nCurrently, we don't consider volatile functions as\nparallel-safe by default.\n\nI run the SQL in pg12:\n\nzlv=# select count(proname) from pg_proc where provolatile = 'v' and proparallel ='s';\n count\n-------\n 100\n(1 row)\n\nzlv=# select proname from pg_proc where provolatile = 'v' and proparallel ='s';\n proname\n----------------------------------------\n timeofday\n bthandler\n hashhandler\n gisthandler\n ginhandler\n spghandler\n brinhandler\n\nIt seems there are many functions which is both volatile and parallel safe.\n________________________________\nFrom: Amit Kapila <amit.kapila16@gmail.com>\nSent: Thursday, July 16, 2020 12:07 PM\nTo: Zhenghua Lyu <zlyu@vmware.com>\nCc: PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Volatile Functions in Parallel Plans\n\nOn Wed, Jul 15, 2020 at 6:14 PM Zhenghua Lyu <zlyu@vmware.com> wrote:\n>\n>\n> The first plan:\n>\n> Finalize Aggregate\n> -> Gather\n> Workers Planned: 2\n> -> Partial Aggregate\n> -> Nested Loop\n> Join Filter: (t3.c1 = t4.c1)\n> -> Parallel Seq Scan on t3\n> Filter: (c1 ~~ '%sss'::text)\n> -> Seq Scan on t4\n> Filter: (timeofday() = c1)\n>\n> The join's left tree is parallel scan and the right tree is seq scan.\n> This algorithm is correct using the distribute distributive law of\n> distributed join:\n> A = [A1 A2 A3...An], B then A join B = gather( (A1 join B) (A2 join B) ... (An join B) )\n>\n> The correctness of the above law should have a pre-assumption:\n> The data set of B is the same in each join: (A1 join B) (A2 join B) ... (An join B)\n>\n> But things get complicated when volatile functions come in. Timeofday is just\n> an example to show the idea. The core is volatile functions can return different\n> results on successive calls with the same arguments. Thus the following piece,\n> the right tree of the join\n> -> Seq Scan on t4\n> Filter: (timeofday() = c1)\n> can not be considered consistent everywhere in the scan workers.\n>\n\nBut this won't be consistent even for non-parallel plans. I mean to\nsay for each loop of join the \"Seq Scan on t4\" would give different\nresults. Currently, we don't consider volatile functions as\nparallel-safe by default.\n\n--\nWith Regards,\nAmit Kapila.\nEnterpriseDB: https://nam04.safelinks.protection.outlook.com/?url=http%3A%2F%2Fwww.enterprisedb.com%2F&amp;data=02%7C01%7Czlyu%40vmware.com%7C825aa0c2259c4da0112008d8293dcd1c%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637304692698598521&amp;sdata=LWZnJ43KQML3EBwB2DoPGE0KHA2t6A3%2FIS9KSLx%2Bcn4%3D&amp;reserved=0\n\n\n\n\n\n\n\n\nHi, thanks for your reply.\n\n\nBut this won't be consistent\n even for non-parallel plans. 
\n\n\n\nIf we do not use the distributed\n law of parallel join, it seems\n\nOK.\n\n\n\n\nIf we generate a parallel plan using the distributed law of the join,\n\nthen this transformation's pre-assumption might be broken.\n\n\n\n\n\nCurrently, we don't consider\n volatile functions as\nparallel-safe by default.\n\n\n\n\n\n\nI run the SQL in pg12:\n\n\n\n\n\nzlv=# select count(proname) from pg_proc where provolatile = 'v' and proparallel ='s';\n\n count\n\n\n-------\n\n\n   100\n\n\n(1 row)\n\n\n\n\n\n\nzlv=# select proname from pg_proc where provolatile = 'v' and proparallel ='s';\n\n\n                proname\n\n\n----------------------------------------\n\n\n timeofday\n\n\n bthandler\n\n\n hashhandler\n\n\n gisthandler\n\n\n ginhandler\n\n\n spghandler\n\n\n brinhandler\n\n\n\n\n\nIt seems there are many functions which is both volatile and parallel safe.\n\n\nFrom: Amit Kapila <amit.kapila16@gmail.com>\nSent: Thursday, July 16, 2020 12:07 PM\nTo: Zhenghua Lyu <zlyu@vmware.com>\nCc: PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Volatile Functions in Parallel Plans\n \n\n\nOn Wed, Jul 15, 2020 at 6:14 PM Zhenghua Lyu <zlyu@vmware.com> wrote:\n>\n>\n> The first plan:\n>\n>  Finalize Aggregate\n>    ->  Gather\n>          Workers Planned: 2\n>          ->  Partial Aggregate\n>                ->  Nested Loop\n>                      Join Filter: (t3.c1 = t4.c1)\n>                      ->  Parallel Seq Scan on t3\n>                            Filter: (c1 ~~ '%sss'::text)\n>                      ->  Seq Scan on t4\n>                            Filter: (timeofday() = c1)\n>\n> The join's left tree is parallel scan and the right tree is seq scan.\n> This algorithm is correct using the distribute distributive law of\n> distributed join:\n>        A = [A1 A2 A3...An], B then A join B = gather( (A1 join B) (A2 join B) ... (An join B) )\n>\n> The correctness of the above law should have a pre-assumption:\n>       The data set of B is the same in each join: (A1 join B) (A2 join B) ... (An join B)\n>\n> But things get complicated when volatile functions come in. Timeofday is just\n> an example to show the idea. The core is volatile functions  can return different\n> results on successive calls with the same arguments. Thus the following piece,\n> the right tree of the join\n>                      ->  Seq Scan on t4\n>                            Filter: (timeofday() = c1)\n> can not be considered consistent everywhere in the scan workers.\n>\n\nBut this won't be consistent even for non-parallel plans.  I mean to\nsay for each loop of join the \"Seq Scan on t4\" would give different\nresults.  
Currently, we don't consider volatile functions as\nparallel-safe by default.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: \nhttps://nam04.safelinks.protection.outlook.com/?url=http%3A%2F%2Fwww.enterprisedb.com%2F&amp;data=02%7C01%7Czlyu%40vmware.com%7C825aa0c2259c4da0112008d8293dcd1c%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637304692698598521&amp;sdata=LWZnJ43KQML3EBwB2DoPGE0KHA2t6A3%2FIS9KSLx%2Bcn4%3D&amp;reserved=0", "msg_date": "Thu, 16 Jul 2020 04:22:36 +0000", "msg_from": "Zhenghua Lyu <zlyu@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Volatile Functions in Parallel Plans" }, { "msg_contents": "Hi Zhenghua,\n\nOn Wed, Jul 15, 2020 at 5:44 AM Zhenghua Lyu wrote:\n>\n> Hi,\n> I test some SQL in the latest Postgres master branch code (we find these issues when\n> developing Greenplum database in the PR https://github.com/greenplum-db/gpdb/pull/10418,\n> and my colleague come up with the following cases in Postgres):\n>\n>\n> create table t3 (c1 text, c2 text);\n> CREATE TABLE\n> insert into t3\n> select\n> 'fhufehwiohewiuewhuhwiufhwifhweuhfwu', --random data\n> 'fiowehufwhfyegygfewpfwwfeuhwhufwh' --random data\n> from generate_series(1, 10000000) i;\n> INSERT 0 10000000\n> analyze t3;\n> ANALYZE\n> create table t4 (like t3);\n> CREATE TABLE\n> insert into t4 select * from t4;\n> INSERT 0 0\n> insert into t4 select * from t3;\n> INSERT 0 10000000\n> analyze t4;\n> ANALYZE\n> set enable_hashjoin to off;\n> SET\n> explain (costs off)\n> select count(*) from t3, t4\n> where t3.c1 like '%sss'\n> and timeofday() = t4.c1 and t3.c1 = t4.c1;\n> QUERY PLAN\n> --------------------------------------------------------\n> Finalize Aggregate\n> -> Gather\n> Workers Planned: 2\n> -> Partial Aggregate\n> -> Nested Loop\n> Join Filter: (t3.c1 = t4.c1)\n> -> Parallel Seq Scan on t3\n> Filter: (c1 ~~ '%sss'::text)\n> -> Seq Scan on t4\n> Filter: (timeofday() = c1)\n> (10 rows)\n>\n> explain (verbose, costs off)\n> select count(*)\n> from\n> t3,\n> (select *, timeofday() as x from t4 ) t4\n> where t3.c1 like '%sss' and\n> timeofday() = t4.c1 and t3.c1 = t4.c1;\n> QUERY PLAN\n> ------------------------------------------------------------------\n> Finalize Aggregate\n> Output: count(*)\n> -> Gather\n> Output: (PARTIAL count(*))\n> Workers Planned: 2\n> -> Partial Aggregate\n> Output: PARTIAL count(*)\n> -> Nested Loop\n> Join Filter: (t3.c1 = t4.c1)\n> -> Parallel Seq Scan on public.t3\n> Output: t3.c1, t3.c2\n> Filter: (t3.c1 ~~ '%sss'::text)\n> -> Seq Scan on public.t4\n> Output: t4.c1, NULL::text, timeofday()\n> Filter: (timeofday() = t4.c1)\n> (15 rows)\n>\n>\n>\n> Focus on the last two plans, the function timeofday is\n> volatile but paralle-safe. And Postgres outputs two parallel\n> plan.\n>\n>\n> The first plan:\n>\n> Finalize Aggregate\n> -> Gather\n> Workers Planned: 2\n> -> Partial Aggregate\n> -> Nested Loop\n> Join Filter: (t3.c1 = t4.c1)\n> -> Parallel Seq Scan on t3\n> Filter: (c1 ~~ '%sss'::text)\n> -> Seq Scan on t4\n> Filter: (timeofday() = c1)\n>\n> The join's left tree is parallel scan and the right tree is seq scan.\n> This algorithm is correct using the distribute distributive law of\n> distributed join:\n> A = [A1 A2 A3...An], B then A join B = gather( (A1 join B) (A2 join B) ... (An join B) )\n>\n> The correctness of the above law should have a pre-assumption:\n> The data set of B is the same in each join: (A1 join B) (A2 join B) ... (An join B)\n>\n> But things get complicated when volatile functions come in. Timeofday is just\n> an example to show the idea. 
The core is volatile functions can return different\n> results on successive calls with the same arguments. Thus the following piece,\n> the right tree of the join\n> -> Seq Scan on t4\n> Filter: (timeofday() = c1)\n> can not be considered consistent everywhere in the scan workers.\n>\n> The second plan\n>\n> Finalize Aggregate\n> Output: count(*)\n> -> Gather\n> Output: (PARTIAL count(*))\n> Workers Planned: 2\n> -> Partial Aggregate\n> Output: PARTIAL count(*)\n> -> Nested Loop\n> Join Filter: (t3.c1 = t4.c1)\n> -> Parallel Seq Scan on public.t3\n> Output: t3.c1, t3.c2\n> Filter: (t3.c1 ~~ '%sss'::text)\n> -> Seq Scan on public.t4\n> Output: t4.c1, NULL::text, timeofday()\n> Filter: (timeofday() = t4.c1)\n>\n>\n> have voltile projections in the right tree of the nestloop:\n>\n> -> Seq Scan on public.t4\n> Output: t4.c1, NULL::text, timeofday()\n> Filter: (timeofday() = t4.c1)\n>\n> It should not be taken as consistent in different workers.\n\nYou are right, no they are not consistent. But Neither plans is\nincorrect:\n\n1. In the first query, it's semantically permissible to evaluate\ntimeofday() for each pair of (c1, c2), and the plan reflects that.\n(Notice that the parallel nature of the plan is just noise here, the\nplanner could have gone with a Nested Loop of which the inner side is\n_not_ materialized).\n\n2. In the second query -- again -- in a canonical \"outside-in\"\nevaluation, it's perfectly permissible to evaluate the subquery for each\nvalue of t3. Again, the parallelism here is hardly relevant, a serial\nplan without a material node on the inner side of a nested loop would\njust as well (or as badly as you would feel) project different\ntimeofday() values for the same tuple from t4.\n\nIn short, the above plans seem fine.\n\nP.S. the two plans you posted look identical to me, maybe I'm blind late\nat night?\n\nCheers,\nJesse\n\n\n", "msg_date": "Wed, 15 Jul 2020 23:16:47 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Volatile Functions in Parallel Plans" }, { "msg_contents": "Hi Jesse,\n\nyou are right.\n\nFor the nestloop case, they are identical.\n\nI do not come up with hash join or mergejoin case in pg now.\n________________________________\nFrom: Jesse Zhang <sbjesse@gmail.com>\nSent: Thursday, July 16, 2020 2:16 PM\nTo: Zhenghua Lyu <zlyu@vmware.com>\nCc: Amit Kapila <amit.kapila16@gmail.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Volatile Functions in Parallel Plans\n\nHi Zhenghua,\n\nOn Wed, Jul 15, 2020 at 5:44 AM Zhenghua Lyu wrote:\n>\n> Hi,\n> I test some SQL in the latest Postgres master branch code (we find these issues when\n> developing Greenplum database in the PR https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fgreenplum-db%2Fgpdb%2Fpull%2F10418&amp;data=02%7C01%7Czlyu%40vmware.com%7C41eeef401fb746757bc108d8294fe8d5%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637304770468321177&amp;sdata=XTxtrbJMO2d15WwQH9q4tXMzLPGFyk5hF8Tzs%2Bj3KlA%3D&amp;reserved=0,\n> and my colleague come up with the following cases in Postgres):\n>\n>\n> create table t3 (c1 text, c2 text);\n> CREATE TABLE\n> insert into t3\n> select\n> 'fhufehwiohewiuewhuhwiufhwifhweuhfwu', --random data\n> 'fiowehufwhfyegygfewpfwwfeuhwhufwh' --random data\n> from generate_series(1, 10000000) i;\n> INSERT 0 10000000\n> analyze t3;\n> ANALYZE\n> create table t4 (like t3);\n> CREATE TABLE\n> insert into t4 select * from t4;\n> INSERT 0 0\n> insert into t4 select * from t3;\n> INSERT 0 
10000000\n> analyze t4;\n> ANALYZE\n> set enable_hashjoin to off;\n> SET\n> explain (costs off)\n> select count(*) from t3, t4\n> where t3.c1 like '%sss'\n> and timeofday() = t4.c1 and t3.c1 = t4.c1;\n> QUERY PLAN\n> --------------------------------------------------------\n> Finalize Aggregate\n> -> Gather\n> Workers Planned: 2\n> -> Partial Aggregate\n> -> Nested Loop\n> Join Filter: (t3.c1 = t4.c1)\n> -> Parallel Seq Scan on t3\n> Filter: (c1 ~~ '%sss'::text)\n> -> Seq Scan on t4\n> Filter: (timeofday() = c1)\n> (10 rows)\n>\n> explain (verbose, costs off)\n> select count(*)\n> from\n> t3,\n> (select *, timeofday() as x from t4 ) t4\n> where t3.c1 like '%sss' and\n> timeofday() = t4.c1 and t3.c1 = t4.c1;\n> QUERY PLAN\n> ------------------------------------------------------------------\n> Finalize Aggregate\n> Output: count(*)\n> -> Gather\n> Output: (PARTIAL count(*))\n> Workers Planned: 2\n> -> Partial Aggregate\n> Output: PARTIAL count(*)\n> -> Nested Loop\n> Join Filter: (t3.c1 = t4.c1)\n> -> Parallel Seq Scan on public.t3\n> Output: t3.c1, t3.c2\n> Filter: (t3.c1 ~~ '%sss'::text)\n> -> Seq Scan on public.t4\n> Output: t4.c1, NULL::text, timeofday()\n> Filter: (timeofday() = t4.c1)\n> (15 rows)\n>\n>\n>\n> Focus on the last two plans, the function timeofday is\n> volatile but paralle-safe. And Postgres outputs two parallel\n> plan.\n>\n>\n> The first plan:\n>\n> Finalize Aggregate\n> -> Gather\n> Workers Planned: 2\n> -> Partial Aggregate\n> -> Nested Loop\n> Join Filter: (t3.c1 = t4.c1)\n> -> Parallel Seq Scan on t3\n> Filter: (c1 ~~ '%sss'::text)\n> -> Seq Scan on t4\n> Filter: (timeofday() = c1)\n>\n> The join's left tree is parallel scan and the right tree is seq scan.\n> This algorithm is correct using the distribute distributive law of\n> distributed join:\n> A = [A1 A2 A3...An], B then A join B = gather( (A1 join B) (A2 join B) ... (An join B) )\n>\n> The correctness of the above law should have a pre-assumption:\n> The data set of B is the same in each join: (A1 join B) (A2 join B) ... (An join B)\n>\n> But things get complicated when volatile functions come in. Timeofday is just\n> an example to show the idea. The core is volatile functions can return different\n> results on successive calls with the same arguments. Thus the following piece,\n> the right tree of the join\n> -> Seq Scan on t4\n> Filter: (timeofday() = c1)\n> can not be considered consistent everywhere in the scan workers.\n>\n> The second plan\n>\n> Finalize Aggregate\n> Output: count(*)\n> -> Gather\n> Output: (PARTIAL count(*))\n> Workers Planned: 2\n> -> Partial Aggregate\n> Output: PARTIAL count(*)\n> -> Nested Loop\n> Join Filter: (t3.c1 = t4.c1)\n> -> Parallel Seq Scan on public.t3\n> Output: t3.c1, t3.c2\n> Filter: (t3.c1 ~~ '%sss'::text)\n> -> Seq Scan on public.t4\n> Output: t4.c1, NULL::text, timeofday()\n> Filter: (timeofday() = t4.c1)\n>\n>\n> have voltile projections in the right tree of the nestloop:\n>\n> -> Seq Scan on public.t4\n> Output: t4.c1, NULL::text, timeofday()\n> Filter: (timeofday() = t4.c1)\n>\n> It should not be taken as consistent in different workers.\n\nYou are right, no they are not consistent. But Neither plans is\nincorrect:\n\n1. In the first query, it's semantically permissible to evaluate\ntimeofday() for each pair of (c1, c2), and the plan reflects that.\n(Notice that the parallel nature of the plan is just noise here, the\nplanner could have gone with a Nested Loop of which the inner side is\n_not_ materialized).\n\n2. 
In the second query -- again -- in a canonical \"outside-in\"\nevaluation, it's perfectly permissible to evaluate the subquery for each\nvalue of t3. Again, the parallelism here is hardly relevant, a serial\nplan without a material node on the inner side of a nested loop would\njust as well (or as badly as you would feel) project different\ntimeofday() values for the same tuple from t4.\n\nIn short, the above plans seem fine.\n\nP.S. the two plans you posted look identical to me, maybe I'm blind late\nat night?\n\nCheers,\nJesse\n\n\n\n\n\n\n\n\nHi Jesse,\n\n\n\n\nyou are right.\n\n\n\n\nFor the nestloop case, they are identical. \n\n\n\n\nI do not come up with hash join or mergejoin case in pg now.\n\n\nFrom: Jesse Zhang <sbjesse@gmail.com>\nSent: Thursday, July 16, 2020 2:16 PM\nTo: Zhenghua Lyu <zlyu@vmware.com>\nCc: Amit Kapila <amit.kapila16@gmail.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Volatile Functions in Parallel Plans\n \n\n\nHi Zhenghua,\n\nOn Wed, Jul 15, 2020 at 5:44 AM Zhenghua Lyu wrote:\n>\n> Hi,\n>     I test some SQL in the latest Postgres master branch code (we find these issues when\n> developing Greenplum database in the PR \nhttps://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fgreenplum-db%2Fgpdb%2Fpull%2F10418&amp;data=02%7C01%7Czlyu%40vmware.com%7C41eeef401fb746757bc108d8294fe8d5%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637304770468321177&amp;sdata=XTxtrbJMO2d15WwQH9q4tXMzLPGFyk5hF8Tzs%2Bj3KlA%3D&amp;reserved=0,\n> and my colleague come up with the following cases in Postgres):\n>\n>\n> create table t3 (c1 text, c2 text);\n> CREATE TABLE\n> insert into t3\n> select\n>   'fhufehwiohewiuewhuhwiufhwifhweuhfwu', --random data\n>   'fiowehufwhfyegygfewpfwwfeuhwhufwh' --random data\n> from generate_series(1, 10000000) i;\n> INSERT 0 10000000\n> analyze t3;\n> ANALYZE\n> create table t4 (like t3);\n> CREATE TABLE\n> insert into t4 select * from t4;\n> INSERT 0 0\n> insert into t4 select * from t3;\n> INSERT 0 10000000\n> analyze t4;\n> ANALYZE\n> set enable_hashjoin to off;\n> SET\n> explain (costs off)\n> select count(*) from t3, t4\n> where t3.c1 like '%sss'\n>       and timeofday() = t4.c1 and t3.c1 = t4.c1;\n>                        QUERY PLAN\n> --------------------------------------------------------\n>  Finalize Aggregate\n>    ->  Gather\n>          Workers Planned: 2\n>          ->  Partial Aggregate\n>                ->  Nested Loop\n>                      Join Filter: (t3.c1 = t4.c1)\n>                      ->  Parallel Seq Scan on t3\n>                            Filter: (c1 ~~ '%sss'::text)\n>                      ->  Seq Scan on t4\n>                            Filter: (timeofday() = c1)\n> (10 rows)\n>\n> explain (verbose, costs off)\n> select count(*)\n> from\n>   t3,\n>   (select *, timeofday() as x from t4 ) t4\n> where t3.c1 like '%sss' and\n>       timeofday() = t4.c1 and t3.c1 = t4.c1;\n>                             QUERY PLAN\n> ------------------------------------------------------------------\n>  Finalize Aggregate\n>    Output: count(*)\n>    ->  Gather\n>          Output: (PARTIAL count(*))\n>          Workers Planned: 2\n>          ->  Partial Aggregate\n>                Output: PARTIAL count(*)\n>                ->  Nested Loop\n>                      Join Filter: (t3.c1 = t4.c1)\n>                      ->  Parallel Seq Scan on public.t3\n>                            Output: t3.c1, t3.c2\n>                            Filter: (t3.c1 ~~ '%sss'::text)\n>                      ->  Seq 
Scan on public.t4\n>                            Output: t4.c1, NULL::text, timeofday()\n>                            Filter: (timeofday() = t4.c1)\n> (15 rows)\n>\n>\n>\n> Focus on the last two plans, the function timeofday is\n> volatile but paralle-safe. And Postgres outputs two parallel\n> plan.\n>\n>\n> The first plan:\n>\n>  Finalize Aggregate\n>    ->  Gather\n>          Workers Planned: 2\n>          ->  Partial Aggregate\n>                ->  Nested Loop\n>                      Join Filter: (t3.c1 = t4.c1)\n>                      ->  Parallel Seq Scan on t3\n>                            Filter: (c1 ~~ '%sss'::text)\n>                      ->  Seq Scan on t4\n>                            Filter: (timeofday() = c1)\n>\n> The join's left tree is parallel scan and the right tree is seq scan.\n> This algorithm is correct using the distribute distributive law of\n> distributed join:\n>        A = [A1 A2 A3...An], B then A join B = gather( (A1 join B) (A2 join B) ... (An join B) )\n>\n> The correctness of the above law should have a pre-assumption:\n>       The data set of B is the same in each join: (A1 join B) (A2 join B) ... (An join B)\n>\n> But things get complicated when volatile functions come in. Timeofday is just\n> an example to show the idea. The core is volatile functions  can return different\n> results on successive calls with the same arguments. Thus the following piece,\n> the right tree of the join\n>                      ->  Seq Scan on t4\n>                            Filter: (timeofday() = c1)\n> can not be considered consistent everywhere in the scan workers.\n>\n> The second plan\n>\n>  Finalize Aggregate\n>    Output: count(*)\n>    ->  Gather\n>          Output: (PARTIAL count(*))\n>          Workers Planned: 2\n>          ->  Partial Aggregate\n>                Output: PARTIAL count(*)\n>                ->  Nested Loop\n>                      Join Filter: (t3.c1 = t4.c1)\n>                      ->  Parallel Seq Scan on public.t3\n>                            Output: t3.c1, t3.c2\n>                            Filter: (t3.c1 ~~ '%sss'::text)\n>                      ->  Seq Scan on public.t4\n>                            Output: t4.c1, NULL::text, timeofday()\n>                            Filter: (timeofday() = t4.c1)\n>\n>\n> have voltile projections in the right tree of the nestloop:\n>\n>                      ->  Seq Scan on public.t4\n>                            Output: t4.c1, NULL::text, timeofday()\n>                            Filter: (timeofday() = t4.c1)\n>\n> It should not be taken as consistent in different workers.\n\nYou are right, no they are not consistent. But Neither plans is\nincorrect:\n\n1. In the first query, it's semantically permissible to evaluate\ntimeofday() for each pair of (c1, c2), and the plan reflects that.\n(Notice that the parallel nature of the plan is just noise here, the\nplanner could have gone with a Nested Loop of which the inner side is\n_not_ materialized).\n\n2. In the second query -- again -- in a canonical \"outside-in\"\nevaluation, it's perfectly permissible to evaluate the subquery for each\nvalue of t3. Again, the parallelism here is hardly relevant, a serial\nplan without a material node on the inner side of a nested loop would\njust as well (or as badly as you would feel) project different\ntimeofday() values for the same tuple from t4.\n\nIn short, the above plans seem fine.\n\nP.S. 
the two plans you posted look identical to me, maybe I'm blind late\nat night?\n\nCheers,\nJesse", "msg_date": "Thu, 16 Jul 2020 11:57:36 +0000", "msg_from": "Zhenghua Lyu <zlyu@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Volatile Functions in Parallel Plans" }, { "msg_contents": "Jesse Zhang <sbjesse@gmail.com> writes:\n> You are right, no they are not consistent. But Neither plans is\n> incorrect:\n\n> 1. In the first query, it's semantically permissible to evaluate\n> timeofday() for each pair of (c1, c2), and the plan reflects that.\n> (Notice that the parallel nature of the plan is just noise here, the\n> planner could have gone with a Nested Loop of which the inner side is\n> _not_ materialized).\n\nYeah, exactly. The short answer here is that refusing to parallelize\nthe plan would not make things any more consistent.\n\nIn general, using a volatile function in a WHERE clause is problematic\nbecause we make no guarantees about how often it will be evaluated.\nIt could be anywhere between \"never\" and \"once per row of the\ncross-product of the FROM tables\". AFAIR, the only concession we've made\nto make that less unpredictable is to avoid using volatile functions in\nindex quals. But even that will only make things noticeably more\npredictable for single-table queries. As soon as you get into join cases,\nyou don't have much control over when the function will get evaluated.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Jul 2020 11:18:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Volatile Functions in Parallel Plans" }, { "msg_contents": "Hi Tom and Zhenghua,\n\nOn Thu, Jul 16, 2020 at 8:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Jesse Zhang <sbjesse@gmail.com> writes:\n> > You are right, no they are not consistent. But Neither plans is\n> > incorrect:\n>\n> > 1. In the first query, it's semantically permissible to evaluate\n> > timeofday() for each pair of (c1, c2), and the plan reflects that.\n> > (Notice that the parallel nature of the plan is just noise here, the\n> > planner could have gone with a Nested Loop of which the inner side is\n> > _not_ materialized).\n>\n> Yeah, exactly. The short answer here is that refusing to parallelize\n> the plan would not make things any more consistent.\n>\n> In general, using a volatile function in a WHERE clause is problematic\n> because we make no guarantees about how often it will be evaluated.\n> It could be anywhere between \"never\" and \"once per row of the\n> cross-product of the FROM tables\". AFAIR, the only concession we've made\n> to make that less unpredictable is to avoid using volatile functions in\n> index quals. But even that will only make things noticeably more\n> predictable for single-table queries. As soon as you get into join cases,\n> you don't have much control over when the function will get evaluated.\n>\n> regards, tom lane\n\nFor more kicks, I don't even think this is restricted to volatile\nfunctions only. To stir the pot, it's conceivable that planner might\nproduce the following plan\n\nSeq Scan on pg_temp_3.foo\n Output: foo.a\n Filter: (SubPlan 1)\n SubPlan 1\n -> WindowAgg\n Output: sum(bar.d) OVER (?)\n -> Seq Scan on pg_temp_3.bar\n Output: bar.d\n\n\nFor the following query\n\nSELECT a FROM foo WHERE b = ALL (\nSELECT sum(d) OVER (ROWS UNBOUNDED PRECEDING) FROM bar\n);\n\nN.B. 
that the WindowAgg might produce a different set of numbers each\ntime depending on the scan order of bar, which means that for two\ndifferent \"foo\" tuples of equal b value, one might be rejected by the\nfilter whereas another survives.\n\nI think the crux of the discussion should be whether we can reasonably\nexpect a subquery (subquery-like structure, for example the inner side\nof nest loops upthread) to be evaluated only once. IMHO, no. The SQL\nstandard only broadly mandates that each \"execution\" of a subquery to be\n\"atomic\".\n\nZhenghua and Tom, would you suggest the above plan is wrong (not\nsuboptimal, but wrong) just because we don't materialize the WindowAgg\nunder the subplan?\n\nCheers,\nJesse\n\n\n", "msg_date": "Thu, 16 Jul 2020 08:40:12 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Volatile Functions in Parallel Plans" }, { "msg_contents": "Jesse Zhang <sbjesse@gmail.com> writes:\n> For more kicks, I don't even think this is restricted to volatile\n> functions only. To stir the pot, it's conceivable that planner might\n> produce the following plan\n\n> Seq Scan on pg_temp_3.foo\n> Output: foo.a\n> Filter: (SubPlan 1)\n> SubPlan 1\n> -> WindowAgg\n> Output: sum(bar.d) OVER (?)\n> -> Seq Scan on pg_temp_3.bar\n> Output: bar.d\n\n> For the following query\n\n> SELECT a FROM foo WHERE b = ALL (\n> SELECT sum(d) OVER (ROWS UNBOUNDED PRECEDING) FROM bar\n> );\n\nInteresting example. Normally you'd expect that repeated executions of\nthe inner seqscan would produce the same output in the same order ...\nbut if the table were big enough to allow the synchronize_seqscans logic\nto kick in, that might not be true. You could argue about whether or\nnot synchronize_seqscans breaks any fundamental SQL guarantees, but\nmy feeling is that it doesn't: if the above query produces unstable\nresults, that's the user's fault for having written an underspecified\nwindowing query.\n\n> Zhenghua and Tom, would you suggest the above plan is wrong (not\n> suboptimal, but wrong) just because we don't materialize the WindowAgg\n> under the subplan?\n\nI would not, per above: the query is buggy, not the implementation.\n(In standard-ese, the results of that query are undefined, not\nimplementation-defined, meaning that we don't have to produce\nconsistent results.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Jul 2020 11:57:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Volatile Functions in Parallel Plans" } ]
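The disagreement in the thread above turns on timeofday() being volatile yet parallel-safe, so the planner is free to evaluate it never, once, or once per row and per worker. Whether a given function carries those markings can be checked in the system catalogs; the query below is a small illustrative sketch (pg_proc and its provolatile/proparallel columns are standard, but the letter codes shown for particular functions should be verified on your own server, since markings can change between versions):

    SELECT proname, provolatile, proparallel
    FROM pg_proc
    WHERE proname IN ('timeofday', 'now', 'random');
    -- provolatile: 'i' immutable, 's' stable, 'v' volatile
    -- proparallel: 's' safe, 'r' restricted, 'u' unsafe

Per the description in the thread, timeofday() is expected to show provolatile = 'v' together with proparallel = 's', which is exactly the "volatile but parallel-safe" combination being discussed.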
[ { "msg_contents": "Hi,\n\nIn ApplyLauncherMain, it seems like we are having SIGTERM signal\nmapped for config reload. I think we should be having SIGHUP for\nSignalHandlerForConfigReload(). Otherwise we miss to take the updated\nvalue for wal_retrieve_retry_interval in ApplyLauncherMain.\n\nAttached is a patch having this change.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 15 Jul 2020 18:16:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Have SIGHUP instead of SIGTERM for config reload in logical\n replication launcher" }, { "msg_contents": "On Wed, Jul 15, 2020 at 6:17 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> In ApplyLauncherMain, it seems like we are having SIGTERM signal\n> mapped for config reload. I think we should be having SIGHUP for\n> SignalHandlerForConfigReload(). Otherwise we miss to take the updated\n> value for wal_retrieve_retry_interval in ApplyLauncherMain.\n>\n> Attached is a patch having this change.\n>\n> Thoughts?\n\nYeah, it just looks like a typo so your fix looks good to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Jul 2020 18:21:20 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Have SIGHUP instead of SIGTERM for config reload in logical\n replication launcher" }, { "msg_contents": "On Wed, Jul 15, 2020 at 6:21 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jul 15, 2020 at 6:17 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > In ApplyLauncherMain, it seems like we are having SIGTERM signal\n> > mapped for config reload. I think we should be having SIGHUP for\n> > SignalHandlerForConfigReload(). Otherwise we miss to take the updated\n> > value for wal_retrieve_retry_interval in ApplyLauncherMain.\n> >\n> > Attached is a patch having this change.\n> >\n> > Thoughts?\n>\n> Yeah, it just looks like a typo so your fix looks good to me.\n>\n\n+1. I will commit this tomorrow unless someone thinks otherwise.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Jul 2020 18:46:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Have SIGHUP instead of SIGTERM for config reload in logical\n replication launcher" }, { "msg_contents": ">\n> +1. I will commit this tomorrow unless someone thinks otherwise.\n>\n\nI think versions <= 12, have \"pqsignal(SIGHUP,\nlogicalrep_launcher_sighup)\", not sure why and which commit removed\nlogicalrep_launcher_sighup().\n\nWe might have to also backpatch this patch to version 13.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Jul 2020 20:33:59 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Have SIGHUP instead of SIGTERM for config reload in logical\n replication launcher" }, { "msg_contents": "Hi,\n\nOn 2020-07-15 20:33:59 +0530, Bharath Rupireddy wrote:\n> >\n> > +1. 
I will commit this tomorrow unless someone thinks otherwise.\n> >\n>\n> I think versions <= 12, have \"pqsignal(SIGHUP,\n> logicalrep_launcher_sighup)\", not sure why and which commit removed\n> logicalrep_launcher_sighup().\n\ncommit 1e53fe0e70f610c34f4c9e770d108cd94151342c\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: 2019-12-17 13:03:57 -0500\n\n Use PostgresSigHupHandler in more places.\n\n There seems to be no reason for every background process to have\n its own flag indicating that a config-file reload is needed.\n Instead, let's just use ConfigFilePending for that purpose\n everywhere.\n\n Patch by me, reviewed by Andres Freund and Daniel Gustafsson.\n\n Discussion: http://postgr.es/m/CA+TgmoZwDk=BguVDVa+qdA6SBKef=PKbaKDQALTC_9qoz1mJqg@mail.gmail.com\n\nIndeed looks like a typo. Robert, do you concur?\n\nAndres\n\n\n", "msg_date": "Wed, 15 Jul 2020 08:51:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Have SIGHUP instead of SIGTERM for config reload in logical\n replication launcher" }, { "msg_contents": "On Wed, Jul 15, 2020 at 11:51 AM Andres Freund <andres@anarazel.de> wrote:\n> Indeed looks like a typo. Robert, do you concur?\n\nYes, that's definitely unintentional. Oops.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 16 Jul 2020 11:14:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Have SIGHUP instead of SIGTERM for config reload in logical\n replication launcher" }, { "msg_contents": "On Thu, Jul 16, 2020 at 8:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jul 15, 2020 at 11:51 AM Andres Freund <andres@anarazel.de> wrote:\n> > Indeed looks like a typo. Robert, do you concur?\n>\n> Yes, that's definitely unintentional. Oops.\n>\n\nPushed the fix.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 Jul 2020 11:27:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Have SIGHUP instead of SIGTERM for config reload in logical\n replication launcher" } ]
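For readers wondering why the typo was user-visible at all: the launcher only picks up a changed wal_retrieve_retry_interval when it processes a configuration-reload signal, and reloads normally arrive as SIGHUP. A rough way to exercise that from SQL, assuming superuser access (the commands themselves are standard; the before/after behaviour of the launcher is as described in the thread rather than re-verified here):

    ALTER SYSTEM SET wal_retrieve_retry_interval = '10s';
    SELECT pg_reload_conf();
    -- pg_reload_conf() asks the postmaster to re-read the configuration and
    -- signal its child processes with SIGHUP; with the SIGTERM/SIGHUP mix-up
    -- the logical replication launcher kept using the old interval until it
    -- was restarted, while with the fix it picks up the new value on reload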
[ { "msg_contents": "Our Fine Manual (TM) specifies:\n\"As an exception, when changing the type of an existing column, if the\nUSING clause does not change the column contents and the old type is either\nbinary coercible to the new type or an unconstrained domain over the new\ntype, a table rewrite is not needed; but any indexes on the affected\ncolumns must still be rebuilt.\"\n\nFirst of all, how is a non-internals-expert even supposed to know what a\nbinary coercible type is? That's not a very user-friendly way to say it.\n\nSecond, how is even an expert supposed to find the list? :)\n\nFor example, we can query pg_cast for casts that are binary coercible,\nthat's a start, but it doesn't really tell us the answer.\n\nWe can also for example increase the precision of numeric without a rewrite\n(but not scale). Or we can change between text and varchar. And we can\nincrease the length of a varchar but not decrease it.\n\nSurely we can do better than this when it comes to documenting it? Even if\nit's a pluggable thing so it may or may not be true of external\ndatatypes installed later, we should be able to at least be more clear\nabout the builtin types, I think?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOur Fine Manual (TM) specifies:\"As an exception, when changing the type of an existing column, if the USING clause does not change the column contents and the old type is either binary coercible to the new type or an unconstrained domain over the new type, a table rewrite is not needed; but any indexes on the affected columns must still be rebuilt.\"First of all, how is a non-internals-expert even supposed to know what a binary coercible type is? That's not a very user-friendly way to say it.Second, how is even an expert supposed to find the list? :)For example, we can query pg_cast for casts that are binary coercible, that's a start, but it doesn't really tell us the answer.We can also for example increase the precision of numeric without a rewrite (but not scale). Or we can change between text and varchar. And we can increase the length of a varchar but not decrease it.Surely we can do better than this when it comes to documenting it? Even if it's a pluggable thing so it may or may not be true of external datatypes installed later, we should be able to at least be more clear about the builtin types, I think?--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Wed, 15 Jul 2020 14:54:37 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Which SET TYPE don't actually require a rewrite" }, { "msg_contents": "On Wed, Jul 15, 2020 at 6:25 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> Our Fine Manual (TM) specifies:\n> \"As an exception, when changing the type of an existing column, if the USING clause does not change the column contents and the old type is either binary coercible to the new type or an unconstrained domain over the new type, a table rewrite is not needed; but any indexes on the affected columns must still be rebuilt.\"\n>\n> First of all, how is a non-internals-expert even supposed to know what a binary coercible type is? That's not a very user-friendly way to say it.\n>\n> Second, how is even an expert supposed to find the list? 
:)\n>\n> For example, we can query pg_cast for casts that are binary coercible, that's a start, but it doesn't really tell us the answer.\n>\n> We can also for example increase the precision of numeric without a rewrite (but not scale). Or we can change between text and varchar. And we can increase the length of a varchar but not decrease it.\n>\n> Surely we can do better than this when it comes to documenting it? Even if it's a pluggable thing so it may or may not be true of external datatypes installed later, we should be able to at least be more clear about the builtin types, I think?\n>\n\n+1 for providing more information in the documentation. One way could\nbe that we give some examples of how a user can check whether types\nare binary coercible or not and then also specify clearly in which\nother cases the rewrite can happen. Similarly, it seems the\ninformation when the rewrite can happen for \"SET (storage_parameter\n...)\" (doc says: \"depending on the parameter you might need to rewrite\nthe table to get the desired effects\") is thin.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 Jul 2020 15:02:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Which SET TYPE don't actually require a rewrite" }, { "msg_contents": "On Wed, Jul 15, 2020 at 02:54:37PM +0200, Magnus Hagander wrote:\n> Our Fine Manual (TM) specifies:\n> \"As an exception, when changing the type of an existing column, if the\n> USING clause does not change the column contents and the old type is either\n> binary coercible to the new type or an unconstrained domain over the new\n> type, a table rewrite is not needed; but any indexes on the affected\n> columns must still be rebuilt.\"\n> \n> First of all, how is a non-internals-expert even supposed to know what a\n> binary coercible type is?\n\nThe manual defines it at <firstterm>binary coercible</firstterm>.\n\n> We can also for example increase the precision of numeric without a rewrite\n> (but not scale). Or we can change between text and varchar. And we can\n> increase the length of a varchar but not decrease it.\n> \n> Surely we can do better than this when it comes to documenting it? Even if\n> it's a pluggable thing so it may or may not be true of external\n> datatypes installed later, we should be able to at least be more clear\n> about the builtin types, I think?\n\nI recall reasoning that ATColumnChangeRequiresRewrite() is a DDL analog of\nquery optimizer logic. The manual brings up only a minority of planner\noptimizations, and comprehensive lists of optimization preconditions are even\nrarer. 
But I don't mind if $SUBJECT documentation departs from that norm.\n\n\n", "msg_date": "Thu, 16 Jul 2020 20:40:13 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Which SET TYPE don't actually require a rewrite" }, { "msg_contents": "On Fri, Jul 17, 2020 at 5:40 AM Noah Misch <noah@leadboat.com> wrote:\n\n> On Wed, Jul 15, 2020 at 02:54:37PM +0200, Magnus Hagander wrote:\n> > Our Fine Manual (TM) specifies:\n> > \"As an exception, when changing the type of an existing column, if the\n> > USING clause does not change the column contents and the old type is\n> either\n> > binary coercible to the new type or an unconstrained domain over the new\n> > type, a table rewrite is not needed; but any indexes on the affected\n> > columns must still be rebuilt.\"\n> >\n> > First of all, how is a non-internals-expert even supposed to know what a\n> > binary coercible type is?\n>\n> The manual defines it at <firstterm>binary coercible</firstterm>.\n>\n\nThe only way to actually realize that this is a <firstterm> is to look at\nthe source code though, right? It's definitely not clear that one should go\nlook at the CREATE CAST documentation to find the definition -- certainly\nnot from the ALTER TABLE documentation, which I would argue is the place\nwhere most people would go.\n\nAnd while having the definition there is nice, it doesn't help an end user\nin any way at all to determine if their ALTER TABLE statement is going to\nbe \"safe from rewrites\" or not. It (hopefully) helps someone who knows some\nthings about the database internals, which is of course a valuable thing as\nwell, but not the end user.\n\n\n> We can also for example increase the precision of numeric without a\n> rewrite\n> > (but not scale). Or we can change between text and varchar. And we can\n> > increase the length of a varchar but not decrease it.\n> >\n> > Surely we can do better than this when it comes to documenting it? Even\n> if\n> > it's a pluggable thing so it may or may not be true of external\n> > datatypes installed later, we should be able to at least be more clear\n> > about the builtin types, I think?\n>\n> I recall reasoning that ATColumnChangeRequiresRewrite() is a DDL analog of\n> query optimizer logic. The manual brings up only a minority of planner\n> optimizations, and comprehensive lists of optimization preconditions are\n> even\n> rarer. But I don't mind if $SUBJECT documentation departs from that norm.\n>\n\nI can see the argument being made for that, and certainly having been made\nfor it in the future. But I'd say given the very bad consequences of\ngetting it wrong, it's far from minor. And given the number of times I've\nhad to answer the question \"can I make this change safely\" (which usually\namounts to me trying it out to see what happens, if I hadn't done that\nexact one many times before) indicates the need for a more detailed\ndocumentation on it.\n\nAs Amit mentions it is also triggered by some store parameter changes. But\nnot all. So looking at it the other way, the part that the end user really\ncares about it \"which ALTER TABLE operations will rewrite the table and\nwhich will not\". 
Maybe what we need is a section specifically on this that\nsummarizes all the different ways that it can happen.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Jul 17, 2020 at 5:40 AM Noah Misch <noah@leadboat.com> wrote:On Wed, Jul 15, 2020 at 02:54:37PM +0200, Magnus Hagander wrote:\n> Our Fine Manual (TM) specifies:\n> \"As an exception, when changing the type of an existing column, if the\n> USING clause does not change the column contents and the old type is either\n> binary coercible to the new type or an unconstrained domain over the new\n> type, a table rewrite is not needed; but any indexes on the affected\n> columns must still be rebuilt.\"\n> \n> First of all, how is a non-internals-expert even supposed to know what a\n> binary coercible type is?\n\nThe manual defines it at <firstterm>binary coercible</firstterm>.The only way to actually realize that this is a <firstterm> is to look at the source code though, right? It's definitely not clear that one should go look at the CREATE CAST documentation to find the definition -- certainly not from the ALTER TABLE documentation, which I would argue is the place where most people would go.And while having the definition there is nice, it doesn't help an end user in any way at all to determine if their ALTER TABLE statement is going to be \"safe from rewrites\" or not. It (hopefully) helps someone who knows some things about the database internals, which is of course a valuable thing as well, but not the end user.\n> We can also for example increase the precision of numeric without a rewrite\n> (but not scale). Or we can change between text and varchar. And we can\n> increase the length of a varchar but not decrease it.\n> \n> Surely we can do better than this when it comes to documenting it? Even if\n> it's a pluggable thing so it may or may not be true of external\n> datatypes installed later, we should be able to at least be more clear\n> about the builtin types, I think?\n\nI recall reasoning that ATColumnChangeRequiresRewrite() is a DDL analog of\nquery optimizer logic.  The manual brings up only a minority of planner\noptimizations, and comprehensive lists of optimization preconditions are even\nrarer.  But I don't mind if $SUBJECT documentation departs from that norm.\nI can see the argument being made for that, and certainly having been made for it in the future. But I'd say given the very bad consequences of getting it wrong, it's far from minor. And given the number of times I've had to answer the question \"can I make this change safely\" (which usually amounts to me trying it out to see what happens, if I hadn't done that exact one many times before) indicates the need for a more detailed documentation on it.As Amit mentions it is also triggered by some store parameter changes. But not all. So looking at it the other way, the part that the end user really cares about it \"which ALTER TABLE operations will rewrite the table and which will not\". 
Maybe what we need is a section specifically on this that summarizes all the different ways that it can happen.--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Fri, 17 Jul 2020 16:08:36 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: Which SET TYPE don't actually require a rewrite" }, { "msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> As Amit mentions it is also triggered by some store parameter changes. But\n> not all. So looking at it the other way, the part that the end user really\n> cares about it \"which ALTER TABLE operations will rewrite the table and\n> which will not\". Maybe what we need is a section specifically on this that\n> summarizes all the different ways that it can happen.\n\nNo, what we need is EXPLAIN for DDL ;-). Trying to keep such\ndocumentation in sync with the actual code behavior would be impossible.\n(For one thing, some aspects can be affected by extension datatype\nbehaviors.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Jul 2020 11:26:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Which SET TYPE don't actually require a rewrite" }, { "msg_contents": "On Fri, Jul 17, 2020 at 04:08:36PM +0200, Magnus Hagander wrote:\n> On Fri, Jul 17, 2020 at 5:40 AM Noah Misch <noah@leadboat.com> wrote:\n> > On Wed, Jul 15, 2020 at 02:54:37PM +0200, Magnus Hagander wrote:\n> > > Our Fine Manual (TM) specifies:\n> > > \"As an exception, when changing the type of an existing column, if the\n> > > USING clause does not change the column contents and the old type is\n> > either\n> > > binary coercible to the new type or an unconstrained domain over the new\n> > > type, a table rewrite is not needed; but any indexes on the affected\n> > > columns must still be rebuilt.\"\n> > >\n> > > First of all, how is a non-internals-expert even supposed to know what a\n> > > binary coercible type is?\n> >\n> > The manual defines it at <firstterm>binary coercible</firstterm>.\n> \n> The only way to actually realize that this is a <firstterm> is to look at\n> the source code though, right?\n\nI see italic typeface for <firstterm>. This one deserves an <indexterm>, too.\n(I bet many other <firstterm> uses deserve an <indexterm>.)\n\n> It's definitely not clear that one should go\n> look at the CREATE CAST documentation to find the definition -- certainly\n> not from the ALTER TABLE documentation, which I would argue is the place\n> where most people would go.\n\nAgreed.\n\n> > We can also for example increase the precision of numeric without a\n> > rewrite\n> > > (but not scale). Or we can change between text and varchar. And we can\n> > > increase the length of a varchar but not decrease it.\n> > >\n> > > Surely we can do better than this when it comes to documenting it? Even\n> > if\n> > > it's a pluggable thing so it may or may not be true of external\n> > > datatypes installed later, we should be able to at least be more clear\n> > > about the builtin types, I think?\n> >\n> > I recall reasoning that ATColumnChangeRequiresRewrite() is a DDL analog of\n> > query optimizer logic. The manual brings up only a minority of planner\n> > optimizations, and comprehensive lists of optimization preconditions are\n> > even\n> > rarer. But I don't mind if $SUBJECT documentation departs from that norm.\n> \n> I can see the argument being made for that, and certainly having been made\n> for it in the future. 
But I'd say given the very bad consequences of\n> getting it wrong, it's far from minor. And given the number of times I've\n> had to answer the question \"can I make this change safely\" (which usually\n> amounts to me trying it out to see what happens, if I hadn't done that\n> exact one many times before) indicates the need for a more detailed\n> documentation on it.\n\nSuch a doc addition is fine with me. I agree with Tom that it will be prone\nto staleness, but I don't conclude that the potential for staleness reduces\nits net value below zero. Having said that, if the consequences of doc\nstaleness are \"very bad\", you may consider documenting the debug1 user\ninterface (https://postgr.es/m/20121202020736.GD13163@tornado.leadboat.com)\ninstead of documenting the exact rules. Either way is fine with me.\n\n\n", "msg_date": "Fri, 17 Jul 2020 19:57:40 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Which SET TYPE don't actually require a rewrite" }, { "msg_contents": "On Fri, Jul 17, 2020 at 11:26:56AM -0400, Tom Lane wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n> > As Amit mentions it is also triggered by some store parameter changes. But\n> > not all. So looking at it the other way, the part that the end user really\n> > cares about it \"which ALTER TABLE operations will rewrite the table and\n> > which will not\". Maybe what we need is a section specifically on this that\n> > summarizes all the different ways that it can happen.\n> \n> No, what we need is EXPLAIN for DDL ;-). Trying to keep such\n> documentation in sync with the actual code behavior would be impossible.\n> (For one thing, some aspects can be affected by extension datatype\n> behaviors.)\n\nI know Tom put a wink on that, but I actually do feel that the only\nclean way to do this is to give users a way to issue the query in a\nnon-executing way that will report if a rewrite is going to happen.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 21 Jul 2020 16:55:37 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Which SET TYPE don't actually require a rewrite" }, { "msg_contents": "On Tue, Jul 21, 2020 at 04:55:37PM -0400, Bruce Momjian wrote:\n> I know Tom put a wink on that, but I actually do feel that the only\n> clean way to do this is to give users a way to issue the query in a\n> non-executing way that will report if a rewrite is going to happen.\n\nYeah, when doing a schema upgrade for an application, that's the usual\nperformance pin-point and people used to other things than Postgres\nwrite their queries without being aware of that. We have something\nable to track that with the event trigger table_rewrite, but there is\nno easy option to store the event and bypass its execution. 
I think\nthat using a plpgsql function wrapping an ALTER TABLE query with an\nexception block for an error generated by an event trigger if seeing\ntable_rewrite allows to do that, though.\n--\nMichael", "msg_date": "Wed, 22 Jul 2020 09:31:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Which SET TYPE don't actually require a rewrite" }, { "msg_contents": "On Sat, Jul 18, 2020 at 4:57 AM Noah Misch <noah@leadboat.com> wrote:\n\n> On Fri, Jul 17, 2020 at 04:08:36PM +0200, Magnus Hagander wrote:\n> > On Fri, Jul 17, 2020 at 5:40 AM Noah Misch <noah@leadboat.com> wrote:\n> > > On Wed, Jul 15, 2020 at 02:54:37PM +0200, Magnus Hagander wrote:\n> > > > Our Fine Manual (TM) specifies:\n> > > > \"As an exception, when changing the type of an existing column, if\n> the\n> > > > USING clause does not change the column contents and the old type is\n> > > either\n> > > > binary coercible to the new type or an unconstrained domain over the\n> new\n> > > > type, a table rewrite is not needed; but any indexes on the affected\n> > > > columns must still be rebuilt.\"\n> > > >\n> > > > First of all, how is a non-internals-expert even supposed to know\n> what a\n> > > > binary coercible type is?\n> > >\n> > > The manual defines it at <firstterm>binary coercible</firstterm>.\n> >\n> > The only way to actually realize that this is a <firstterm> is to look at\n> > the source code though, right?\n>\n> I see italic typeface for <firstterm>. This one deserves an <indexterm>,\n> too.\n> (I bet many other <firstterm> uses deserve an <indexterm>.)\n>\n> > It's definitely not clear that one should go\n> > look at the CREATE CAST documentation to find the definition -- certainly\n> > not from the ALTER TABLE documentation, which I would argue is the place\n> > where most people would go.\n>\n> Agreed.\n>\n> > > We can also for example increase the precision of numeric without a\n> > > rewrite\n> > > > (but not scale). Or we can change between text and varchar. And we\n> can\n> > > > increase the length of a varchar but not decrease it.\n> > > >\n> > > > Surely we can do better than this when it comes to documenting it?\n> Even\n> > > if\n> > > > it's a pluggable thing so it may or may not be true of external\n> > > > datatypes installed later, we should be able to at least be more\n> clear\n> > > > about the builtin types, I think?\n> > >\n> > > I recall reasoning that ATColumnChangeRequiresRewrite() is a DDL\n> analog of\n> > > query optimizer logic. The manual brings up only a minority of planner\n> > > optimizations, and comprehensive lists of optimization preconditions\n> are\n> > > even\n> > > rarer. But I don't mind if $SUBJECT documentation departs from that\n> norm.\n> >\n> > I can see the argument being made for that, and certainly having been\n> made\n> > for it in the future. But I'd say given the very bad consequences of\n> > getting it wrong, it's far from minor. And given the number of times I've\n> > had to answer the question \"can I make this change safely\" (which usually\n> > amounts to me trying it out to see what happens, if I hadn't done that\n> > exact one many times before) indicates the need for a more detailed\n> > documentation on it.\n>\n> Such a doc addition is fine with me. I agree with Tom that it will be\n> prone\n> to staleness, but I don't conclude that the potential for staleness reduces\n> its net value below zero. 
Having said that, if the consequences of doc\n> staleness are \"very bad\", you may consider documenting the debug1 user\n> interface (https://postgr.es/m/20121202020736.GD13163@tornado.leadboat.com\n> )\n> instead of documenting the exact rules. Either way is fine with me.\n>\n\nThe DEBUG1 method is only after the fact though, isn't it?\n\nThat makes it pretty hard for someone to say review a migration script and\nsee \"this is going to cause problems\". And if it's going to be run in an\nenv, I personally find it more useful to just stick an event trigger in\nthere per our documentation and block it (though it might be a good idea to\nlink to that from the alter table reference page, and not just have it\nunder event trigger examples).\n\nI agree that documenting the rules would definitely be prone to staleness,\nand that having EXPLAIN for DDL would be the *better* solution. But also\nthat having the docs, even if they go a bit stale, would be better than the\nscenario today.\n\nUnfortunately, I'm not sure I know enough of the details of what the rules\nactually *are* to explain them in a way that's easy enough to go in the\ndocs...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sat, Jul 18, 2020 at 4:57 AM Noah Misch <noah@leadboat.com> wrote:On Fri, Jul 17, 2020 at 04:08:36PM +0200, Magnus Hagander wrote:\n> On Fri, Jul 17, 2020 at 5:40 AM Noah Misch <noah@leadboat.com> wrote:\n> > On Wed, Jul 15, 2020 at 02:54:37PM +0200, Magnus Hagander wrote:\n> > > Our Fine Manual (TM) specifies:\n> > > \"As an exception, when changing the type of an existing column, if the\n> > > USING clause does not change the column contents and the old type is\n> > either\n> > > binary coercible to the new type or an unconstrained domain over the new\n> > > type, a table rewrite is not needed; but any indexes on the affected\n> > > columns must still be rebuilt.\"\n> > >\n> > > First of all, how is a non-internals-expert even supposed to know what a\n> > > binary coercible type is?\n> >\n> > The manual defines it at <firstterm>binary coercible</firstterm>.\n> \n> The only way to actually realize that this is a <firstterm> is to look at\n> the source code though, right?\n\nI see italic typeface for <firstterm>.  This one deserves an <indexterm>, too.\n(I bet many other <firstterm> uses deserve an <indexterm>.)\n\n> It's definitely not clear that one should go\n> look at the CREATE CAST documentation to find the definition -- certainly\n> not from the ALTER TABLE documentation, which I would argue is the place\n> where most people would go.\n\nAgreed.\n\n> > We can also for example increase the precision of numeric without a\n> > rewrite\n> > > (but not scale). Or we can change between text and varchar. And we can\n> > > increase the length of a varchar but not decrease it.\n> > >\n> > > Surely we can do better than this when it comes to documenting it? Even\n> > if\n> > > it's a pluggable thing so it may or may not be true of external\n> > > datatypes installed later, we should be able to at least be more clear\n> > > about the builtin types, I think?\n> >\n> > I recall reasoning that ATColumnChangeRequiresRewrite() is a DDL analog of\n> > query optimizer logic.  The manual brings up only a minority of planner\n> > optimizations, and comprehensive lists of optimization preconditions are\n> > even\n> > rarer.  
But I don't mind if $SUBJECT documentation departs from that norm.\n> \n> I can see the argument being made for that, and certainly having been made\n> for it in the future. But I'd say given the very bad consequences of\n> getting it wrong, it's far from minor. And given the number of times I've\n> had to answer the question \"can I make this change safely\" (which usually\n> amounts to me trying it out to see what happens, if I hadn't done that\n> exact one many times before) indicates the need for a more detailed\n> documentation on it.\n\nSuch a doc addition is fine with me.  I agree with Tom that it will be prone\nto staleness, but I don't conclude that the potential for staleness reduces\nits net value below zero.  Having said that, if the consequences of doc\nstaleness are \"very bad\", you may consider documenting the debug1 user\ninterface (https://postgr.es/m/20121202020736.GD13163@tornado.leadboat.com)\ninstead of documenting the exact rules.  Either way is fine with me.\nThe DEBUG1 method is only after the fact though, isn't it?That makes it pretty hard for someone to say review a migration script and see \"this is going to cause problems\". And if it's going to be run in an env, I personally find it more useful to just stick an event trigger in there per our documentation and block it (though it might be a good idea to link to that from the alter table reference page, and not just have it under event trigger examples).I agree that documenting the rules would definitely be prone to staleness, and that having EXPLAIN for DDL would be the *better* solution. But also that having the docs, even if they go a bit stale, would be better than the scenario today.Unfortunately, I'm not sure I know enough of the details of what the rules actually *are* to explain them in a way that's easy enough to go in the docs...--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Wed, 5 Aug 2020 14:52:42 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: Which SET TYPE don't actually require a rewrite" }, { "msg_contents": "On Wed, Aug 05, 2020 at 02:52:42PM +0200, Magnus Hagander wrote:\n> On Sat, Jul 18, 2020 at 4:57 AM Noah Misch <noah@leadboat.com> wrote:\n> > Such a doc addition is fine with me.� I agree with Tom that it will be prone\n> > to staleness, but I don't conclude that the potential for staleness reduces\n> > its net value below zero.� Having said that, if the consequences of doc\n> > staleness are \"very bad\", you may consider documenting the debug1 user\n> > interface (https://postgr.es/m/20121202020736.GD13163@tornado.leadboat.com)\n> > instead of documenting the exact rules.� Either way is fine with me.\n> \n> The DEBUG1 method is only after the fact though, isn't it?\n> \n> That makes it pretty hard for someone to say review a migration script and\n> see \"this is going to cause problems\". And if it's going to be run in an\n> env, I personally find it more useful to just stick an event trigger in\n> there per our documentation and block it (though it might be a good idea to\n> link to that from the alter table reference page, and not just have it\n> under event trigger examples).\n\nThe \"after the fact\" aspect is basically the same for the DEBUG1 method and\nthe event trigger method. 
Each fires after lock acquisition and before\nrewriting the first tuple.\n\nEvent trigger drawbacks include the requirement for superuser cooperation.\nDEBUG1/statement_timeout drawbacks include an ambiguity: if it reaches\nstatement_timeout without printing the DEBUG1, that could mean a lack of\nrewrite, or it could mean some other cause of slowness. I have a weak\npreference for promoting the DEBUG1/statement_timeout approach, because cloud\ndeployments find the superuser obstacle insurmountable. The ambiguity is\nsurmountable; one can always remove the statement_timeout and run the command\nto completion in a pre-production environment.\n\n\n", "msg_date": "Wed, 5 Aug 2020 21:11:46 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Which SET TYPE don't actually require a rewrite" } ]
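Both detection approaches weighed in this thread are easy to sketch. The snippet below is only an illustration: the trigger and function names are invented, creating an event trigger requires superuser, and EXECUTE FUNCTION needs PostgreSQL 11 or later (older releases spell it EXECUTE PROCEDURE):

    -- Event-trigger route: make any rewriting ALTER TABLE fail outright
    CREATE FUNCTION forbid_table_rewrite() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        RAISE EXCEPTION 'ALTER TABLE would rewrite %',
            pg_event_trigger_table_rewrite_oid()::regclass;
    END;
    $$;
    CREATE EVENT TRIGGER no_rewrites ON table_rewrite
        EXECUTE FUNCTION forbid_table_rewrite();

    -- DEBUG1 route: run the DDL in a test environment and watch the messages
    SET client_min_messages = debug1;
    ALTER TABLE some_table ALTER COLUMN some_col TYPE bigint;
    -- on the releases I have checked, a "rewriting table" DEBUG line appears
    -- only when the command actually rewrites the table

Neither approach tells you anything before the command has started executing, which is the "after the fact" limitation discussed above.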
[ { "msg_contents": "It was mentioned elsewhere in passing that a new Autoconf release might \nbe coming. That one will warn about the old naming \"configure.in\" and \nrequest \"configure.ac\". So we might want to rename that sometime. \nBefore we get into the specifics, I suggest that all interested parties \ncheck whether buildfarm scripts, packaging scripts, etc. need to be \nadjusted for the newer name.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 15 Jul 2020 15:14:22 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "renaming configure.in to configure.ac" }, { "msg_contents": "\nOn 7/15/20 9:14 AM, Peter Eisentraut wrote:\n> It was mentioned elsewhere in passing that a new Autoconf release\n> might be coming.  That one will warn about the old naming\n> \"configure.in\" and request \"configure.ac\".  So we might want to rename\n> that sometime. Before we get into the specifics, I suggest that all\n> interested parties check whether buildfarm scripts, packaging scripts,\n> etc. need to be adjusted for the newer name.\n>\n\n\nThe buildfarm does not use autoconf at all, so it won't care less what\nthe file is called.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 15 Jul 2020 09:39:11 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> It was mentioned elsewhere in passing that a new Autoconf release might \n> be coming. That one will warn about the old naming \"configure.in\" and \n> request \"configure.ac\". So we might want to rename that sometime. \n> Before we get into the specifics, I suggest that all interested parties \n> check whether buildfarm scripts, packaging scripts, etc. need to be \n> adjusted for the newer name.\n\nAlong the same line, I read at [1]\n\n Because it has been such a long time, and because some of the changes\n potentially break existing Autoconf scripts, we are conducting a\n public beta test before the final release of version 2.70. Please\n test this beta with your autoconf scripts, and report any problems you\n find to the Savannah bug tracker:\n\nMaybe we should do some pro-active testing, rather than just waiting for\n2.70 to get dropped on us? God knows how long it will be until 2.71.\n\n\t\t\tregards, tom lane\n\n[1] https://lists.gnu.org/archive/html/autoconf/2020-07/msg00006.html\n\n\n", "msg_date": "Wed, 15 Jul 2020 09:45:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "On Wed, Jul 15, 2020 at 09:45:54AM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > It was mentioned elsewhere in passing that a new Autoconf release might \n> > be coming. That one will warn about the old naming \"configure.in\" and \n> > request \"configure.ac\". So we might want to rename that sometime. \n> > Before we get into the specifics, I suggest that all interested parties \n> > check whether buildfarm scripts, packaging scripts, etc. 
need to be \n> > adjusted for the newer name.\n> \n> Along the same line, I read at [1]\n> \n> Because it has been such a long time, and because some of the changes\n> potentially break existing Autoconf scripts, we are conducting a\n> public beta test before the final release of version 2.70. Please\n> test this beta with your autoconf scripts, and report any problems you\n> find to the Savannah bug tracker:\n> \n> Maybe we should do some pro-active testing, rather than just waiting for\n> 2.70 to get dropped on us? God knows how long it will be until 2.71.\n\nSounds good. A cheap option would be to regenerate with 2.70, push that on a\nFriday night to see what the buildfarm thinks, and revert it on Sunday night.\n\n\n", "msg_date": "Wed, 15 Jul 2020 21:56:39 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n\n> On Wed, Jul 15, 2020 at 09:45:54AM -0400, Tom Lane wrote:\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> > It was mentioned elsewhere in passing that a new Autoconf release might \n>> > be coming. That one will warn about the old naming \"configure.in\" and \n>> > request \"configure.ac\". So we might want to rename that sometime. \n>> > Before we get into the specifics, I suggest that all interested parties \n>> > check whether buildfarm scripts, packaging scripts, etc. need to be \n>> > adjusted for the newer name.\n>> \n>> Along the same line, I read at [1]\n>> \n>> Because it has been such a long time, and because some of the changes\n>> potentially break existing Autoconf scripts, we are conducting a\n>> public beta test before the final release of version 2.70. Please\n>> test this beta with your autoconf scripts, and report any problems you\n>> find to the Savannah bug tracker:\n>> \n>> Maybe we should do some pro-active testing, rather than just waiting for\n>> 2.70 to get dropped on us? God knows how long it will be until 2.71.\n>\n> Sounds good. A cheap option would be to regenerate with 2.70, push that on a\n> Friday night to see what the buildfarm thinks, and revert it on Sunday night.\n\nInstead of doing this on the master branch, would it be worth defining a\nnamespace for branches that the buildfarm tests in addition to master\nand REL_*_STABLE?\n\nIn the Perl world we have this in the form of smoke-me/* branches, and\nit's invaluable to be able to test things across many platforms without\nbreaking blead (our name for the main development branch).\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl\n\n\n", "msg_date": "Thu, 16 Jul 2020 11:41:56 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Wed, Jul 15, 2020 at 09:45:54AM -0400, Tom Lane wrote:\n>> Maybe we should do some pro-active testing, rather than just waiting for\n>> 2.70 to get dropped on us? God knows how long it will be until 2.71.\n\n> Sounds good. 
A cheap option would be to regenerate with 2.70, push that on a\n> Friday night to see what the buildfarm thinks, and revert it on Sunday night.\n\nWe'd have to rename configure.in as per $subject; but AFAIK that works\nwith extant autoconf, so we could just do it and leave it that way,\nfiguring that it'll have to happen eventually.\n\nMore ambitiously, we could just adopt 2.69b in HEAD and see what happens,\nplanning to revert only if things break. The cost to that is that\ncommitters who want to commit configure.ac changes would have to install\n2.69b. But they'd be having to install 2.70 whenever we move to that,\nanyway, so I'm not sure that's a big cost.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Jul 2020 11:24:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "\nOn 7/16/20 11:24 AM, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n>> On Wed, Jul 15, 2020 at 09:45:54AM -0400, Tom Lane wrote:\n>>> Maybe we should do some pro-active testing, rather than just waiting for\n>>> 2.70 to get dropped on us? God knows how long it will be until 2.71.\n>> Sounds good. A cheap option would be to regenerate with 2.70, push that on a\n>> Friday night to see what the buildfarm thinks, and revert it on Sunday night.\n> We'd have to rename configure.in as per $subject; but AFAIK that works\n> with extant autoconf, so we could just do it and leave it that way,\n> figuring that it'll have to happen eventually.\n\n\n\nYeah, let's just do that forthwith.\n\n\n>\n> More ambitiously, we could just adopt 2.69b in HEAD and see what happens,\n> planning to revert only if things break. The cost to that is that\n> committers who want to commit configure.ac changes would have to install\n> 2.69b. But they'd be having to install 2.70 whenever we move to that,\n> anyway, so I'm not sure that's a big cost.\n>\n> \t\t\t\n\n\n\nI don't think it's a big cost. IIRC for quite some years we had to keep\n2 or 3 versions of autoconf to cover all the live branches.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 16 Jul 2020 11:36:29 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 7/16/20 11:24 AM, Tom Lane wrote:\n>> More ambitiously, we could just adopt 2.69b in HEAD and see what happens,\n>> planning to revert only if things break. The cost to that is that\n>> committers who want to commit configure.ac changes would have to install\n>> 2.69b. But they'd be having to install 2.70 whenever we move to that,\n>> anyway, so I'm not sure that's a big cost.\n\n> I don't think it's a big cost. IIRC for quite some years we had to keep\n> 2 or 3 versions of autoconf to cover all the live branches.\n\nYeah, everyone who's had a commit bit for more than a few years\nhas a workflow that allows for using different autoconf versions\nfor different branches. 
And if the autoconf crew get their act\nback together and start making regular releases again, that will\nbecome the norm for us again too --- so the newer committers had\nbetter get set up to handle this if they aren't already.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Jul 2020 11:43:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "Hi,\n\nOn July 16, 2020 8:24:15 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Noah Misch <noah@leadboat.com> writes:\n>> On Wed, Jul 15, 2020 at 09:45:54AM -0400, Tom Lane wrote:\n>More ambitiously, we could just adopt 2.69b in HEAD and see what\n>happens,\n>planning to revert only if things break.  The cost to that is that\n>committers who want to commit configure.ac changes would have to\n>install\n>2.69b.  But they'd be having to install 2.70 whenever we move to that,\n>anyway, so I'm not sure that's a big cost.\n\nI think it'd be a good plan to adopt the beta on master.\n\nWe already have parts of it backported, there have been things we couldn't easily do because of bugs in 2.69. There aren't that many changes to configure in total, and particularly not in the back branches. So I think it'd be ok overhead wise.\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 16 Jul 2020 08:48:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think it'd be a good plan to adopt the beta on master.\n\n> We already have parts of it backported, there have been things we couldn't easily do because of bugs in 2.69. There aren't that many changes to configure in total, and particularly not in the back branches. So I think it'd be ok overhead wise.\n\nYeah.  Because we'd want to rip out those hacks, it's not quite as simple\nas \"regenerate configure with this other autoconf version\"; somebody will\nhave to do some preliminary investigation and produce a patch for the\nautoconf input files.  Peter, were you intending to do that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Jul 2020 12:17:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "On Thu, Jul 16, 2020 at 11:41:56AM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > On Wed, Jul 15, 2020 at 09:45:54AM -0400, Tom Lane wrote:\n> >> Along the same line, I read at [1]\n> >> \n> >> Because it has been such a long time, and because some of the changes\n> >> potentially break existing Autoconf scripts, we are conducting a\n> >> public beta test before the final release of version 2.70. Please\n> >> test this beta with your autoconf scripts, and report any problems you\n> >> find to the Savannah bug tracker:\n> >> \n> >> Maybe we should do some pro-active testing, rather than just waiting for\n> >> 2.70 to get dropped on us? God knows how long it will be until 2.71.\n> >\n> > Sounds good. 
A cheap option would be to regenerate with 2.70, push that on a\n> > Friday night to see what the buildfarm thinks, and revert it on Sunday night.\n> \n> Instead of doing this on the master branch, would it be worth defining a\n> namespace for branches that the buildfarm tests in addition to master\n> and REL_*_STABLE?\n> \n> In the Perl world we have this in the form of smoke-me/* branches, and\n> it's invaluable to be able to test things across many platforms without\n> breaking blead (our name for the main development branch).\n\nPotentially. What advantages and disadvantages has Perl experienced?\n\n(Given the support downthread for just changing master indefinitely, which is\nfine with me, it's more likely this particular change won't use such a branch.\nThere have been and will be other changes that may benefit.)\n\n\n", "msg_date": "Thu, 16 Jul 2020 23:29:04 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "On 2020-07-16 18:17, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> I think it'd be a good plan to adopt the beta on master.\n> \n>> We already have parts of it backpacked, there have been things we couldn't easily do because of bugs in 2.69. There aren't that many changes to configure it total, and particularly not in the back branches. So I think it'd be ok overhead wise.\n> \n> Yeah. Because we'd want to rip out those hacks, it's not quite as simple\n> as \"regenerate configure with this other autoconf version\"; somebody will\n> have to do some preliminary investigation and produce a patch for the\n> autoconf input files. Peter, were you intending to do that?\n\nOkay, let's take a look. Attached is a patch series.\n\nv1-0001-Rename-configure.in-to-configure.ac.patch\n\nThis is unsurprising.\n\nv1-0002-Update-to-Autoconf-2.69b.patch.bz2\n\nThis runs auto(re)conf 2.69b and cleans up a minimal amount of obsoleted \nstuff.\n\nThe bulk of the changes in the produced configure are from the change \nfrom echo to printf. Not much else that's too interesting. I think a \nlot of the compatibility/testing advisories relate to the way you write \nyour configure.ac, not so much to the produced shell code.\n\nv1-0003-Remove-check_decls.m4-obsoleted-by-Autoconf-updat.patch\n\nThis is something we had backported and is now no longer necessary. \nNote that there are no significant changes in the produced configure, \nwhich is good.\n\nv1-0004-configure.ac-Remove-_DARWIN_USE_64_BIT_INODE-hack.patch\n\nThis is also something that has been obsoleted.\n\nI'm not immediately aware of anything else that can be removed, cleaned, \nor simplified.\n\nOne thing that's annoying is that the release notes claim that configure \nshould now be faster, and some of the changes they have made should \nsupport that, but my (limited) testing doesn't bear that out. Most \nnotably, the newly arisen test\n\nchecking for g++ option to enable C++11 features... none needed\n\ntakes approximately 10 seconds(!) on my machine (for one loop, since \n\"none needed\"; good luck if you need more than none).\n\nThis clearly depends on a lot of specifics of the environment, so some \nmore testing would be useful. 
This is perhaps something we can \nconstruct some useful feedback for.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 17 Jul 2020 10:46:30 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n\n> On Thu, Jul 16, 2020 at 11:41:56AM +0100, Dagfinn Ilmari Mannsåker wrote:\n>\n>> Instead of doing this on the master branch, would it be worth defining a\n>> namespace for branches that the buildfarm tests in addition to master\n>> and REL_*_STABLE?\n>> \n>> In the Perl world we have this in the form of smoke-me/* branches, and\n>> it's invaluable to be able to test things across many platforms without\n>> breaking blead (our name for the main development branch).\n>\n> Potentially. What advantages and disadvantages has Perl experienced?\n\nThe advantage is getting proposed changes tested on a number of\nplatforms that individual developers otherwise don't have access to.\nFor example http://perl.develop-help.com/?b=smoke-me%2Filmari%2Fremove-symbian\nshows the reults of one branch of mine.\n\nThe only disadvantage is that it takes up more build farm capacity, but\nit's not used for all changes, only ones that developers are concerned\nmight break on other platforms (e.g. affecting platform-specific code or\nconstructs otherwise known to behave differently across platforms and\ncompilers).\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl\n\n\n", "msg_date": "Fri, 17 Jul 2020 11:58:41 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> Noah Misch <noah@leadboat.com> writes:\n>> On Thu, Jul 16, 2020 at 11:41:56AM +0100, Dagfinn Ilmari Mannsåker wrote:\n>>> Instead of doing this on the master branch, would it be worth defining a\n>>> namespace for branches that the buildfarm tests in addition to master\n>>> and REL_*_STABLE?\n\n>> Potentially. What advantages and disadvantages has Perl experienced?\n\n> The advantage is getting proposed changes tested on a number of\n> platforms that individual developers otherwise don't have access to.\n> For example http://perl.develop-help.com/?b=smoke-me%2Filmari%2Fremove-symbian\n> shows the reults of one branch of mine.\n> The only disadvantage is that it takes up more build farm capacity, but\n> it's not used for all changes, only ones that developers are concerned\n> might break on other platforms (e.g. affecting platform-specific code or\n> constructs otherwise known to behave differently across platforms and\n> compilers).\n\nI'd argue that cluttering the main development repo with dead branches\nis a non-negligible cost. We have one or two such left over from very\nancient days, and I don't really want more. (Is there a way to remove\na branch once it's been pushed to a shared git repo?)\n\nAnother issue is that we're not going to open up the main repo for\naccess by non-committers, so this approach doesn't help for most\ndevelopers. 
We've had some success, I think, with Munro's cfbot\nsolution --- I'd rather see that approach expanded to provide more\ntest environments.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Jul 2020 10:12:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "On Fri, Jul 17, 2020 at 4:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> > Noah Misch <noah@leadboat.com> writes:\n> >> On Thu, Jul 16, 2020 at 11:41:56AM +0100, Dagfinn Ilmari Mannsåker\n> wrote:\n> >>> Instead of doing this on the master branch, would it be worth defining\n> a\n> >>> namespace for branches that the buildfarm tests in addition to master\n> >>> and REL_*_STABLE?\n>\n> >> Potentially. What advantages and disadvantages has Perl experienced?\n>\n> > The advantage is getting proposed changes tested on a number of\n> > platforms that individual developers otherwise don't have access to.\n> > For example\n> http://perl.develop-help.com/?b=smoke-me%2Filmari%2Fremove-symbian\n> > shows the reults of one branch of mine.\n> > The only disadvantage is that it takes up more build farm capacity, but\n> > it's not used for all changes, only ones that developers are concerned\n> > might break on other platforms (e.g. affecting platform-specific code or\n> > constructs otherwise known to behave differently across platforms and\n> > compilers).\n>\n> I'd argue that cluttering the main development repo with dead branches\n> is a non-negligible cost. We have one or two such left over from very\n> ancient days, and I don't really want more. (Is there a way to remove\n> a branch once it's been pushed to a shared git repo?)\n>\n\nYes, it's trivial to remove a branch from a shared git repo. In modern\nversions of git, just \"git push origin --delete stupidbranch\".\n\nThe actual commits remain in the repo of course, until such time that it's\nGCed.\n\n\nAnother issue is that we're not going to open up the main repo for\n> access by non-committers, so this approach doesn't help for most\n> developers. We've had some success, I think, with Munro's cfbot\n> solution --- I'd rather see that approach expanded to provide more\n> test environments.\n>\n\nThat one does more or less what Dagfinn suggests except in a separate repo.\nWe could also just have a separate repo for it where people could push if\nwe wanted to. Which could be committers, or others. But in comparison with\nwhat Perl does, I would assume actually having \"just committers\"be able to\npush would really be enough for that. A committer should be able to judge\nwhether a patch needs extra cross-platform testing (and the cfbot does just\nfine for the limited platforms it runs on, which would still be good enough\nfor *most* patches).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Jul 17, 2020 at 4:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> Noah Misch <noah@leadboat.com> writes:\n>> On Thu, Jul 16, 2020 at 11:41:56AM +0100, Dagfinn Ilmari Mannsåker wrote:\n>>> Instead of doing this on the master branch, would it be worth defining a\n>>> namespace for branches that the buildfarm tests in addition to master\n>>> and REL_*_STABLE?\n\n>> Potentially.  
What advantages and disadvantages has Perl experienced?\n\n> The advantage is getting proposed changes tested on a number of\n> platforms that individual developers otherwise don't have access to.\n> For example http://perl.develop-help.com/?b=smoke-me%2Filmari%2Fremove-symbian\n> shows the reults of one branch of mine.\n> The only disadvantage is that it takes up more build farm capacity, but\n> it's not used for all changes, only ones that developers are concerned\n> might break on other platforms (e.g. affecting platform-specific code or\n> constructs otherwise known to behave differently across platforms and\n> compilers).\n\nI'd argue that cluttering the main development repo with dead branches\nis a non-negligible cost.  We have one or two such left over from very\nancient days, and I don't really want more.  (Is there a way to remove\na branch once it's been pushed to a shared git repo?)Yes, it's trivial to remove a branch from a shared git repo. In modern versions of git, just \"git push origin --delete stupidbranch\".The actual commits remain in the repo of course, until such time that it's GCed.\nAnother issue is that we're not going to open up the main repo for\naccess by non-committers, so this approach doesn't help for most\ndevelopers.  We've had some success, I think, with Munro's cfbot\nsolution --- I'd rather see that approach expanded to provide more\ntest environments.That one does more or less what Dagfinn suggests except in a separate repo. We could also just have a separate repo for it where people could push if we wanted to. Which could be committers, or others. But in comparison with what Perl does, I would assume actually having \"just committers\"be able to push would really be enough for that. A committer should be able to judge whether a patch needs extra cross-platform testing (and the cfbot does just fine for the limited platforms it runs on, which would still be good enough for *most* patches).--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Fri, 17 Jul 2020 16:34:35 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> That one does more or less what Dagfinn suggests except in a separate repo.\n> We could also just have a separate repo for it where people could push if\n> we wanted to. Which could be committers, or others. But in comparison with\n> what Perl does, I would assume actually having \"just committers\"be able to\n> push would really be enough for that. A committer should be able to judge\n> whether a patch needs extra cross-platform testing (and the cfbot does just\n> fine for the limited platforms it runs on, which would still be good enough\n> for *most* patches).\n\nBy and large, once a patch has reached that stage, we just push it to\nmaster and deal with any fallout. I suppose you could argue that\npushing to a testing branch first would reduce the amount of time that\nHEAD is broken, but TBH I think it would not help much. 
An awful lot\nof the stuff that breaks the buildfarm is patches that the committer\nwas not expecting trouble with.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Jul 2020 11:18:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "Hi,\n\nOn 2020-07-17 10:46:30 +0200, Peter Eisentraut wrote:\n> Okay, let's take a look. Attached is a patch series.\n\nCool.\n\n\n> One thing that's annoying is that the release notes claim that configure\n> should now be faster, and some of the changes they have made should support\n> that, but my (limited) testing doesn't bear that out. Most notably, the\n> newly arisen test\n> \n> checking for g++ option to enable C++11 features... none needed\n> \n> takes approximately 10 seconds(!) on my machine (for one loop, since \"none\n> needed\"; good luck if you need more than none).\n\nSomething got to be wrong here, no? I see that there's a surprisingly\nlarge c++ program embedded for this test, but still, 10s?\n\nIt's not even clear why we're seeing this test at all? Is this now\nalways part of AC_PROG_CXX?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 17 Jul 2020 11:53:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "On Sat, Jul 18, 2020 at 2:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Another issue is that we're not going to open up the main repo for\n> access by non-committers, so this approach doesn't help for most\n> developers. We've had some success, I think, with Munro's cfbot\n> solution --- I'd rather see that approach expanded to provide more\n> test environments.\n\nRecently I've been using Cirrus CI for my own development branches\nthat involve portability stuff, because it supports Linux, FreeBSD,\nmacOS and Windows in one place. That's nearly half the OSes we\nsupport, and they hinted that they might be about to add more OSes\ntoo. What you get (if you're lucky) is a little green check mark\nbeside the commit hash on github, which you can click for more info,\nlike this:\n\nhttps://github.com/macdice/postgres/tree/cirrus-ci\n\nThe .cirrus.yml file shown in that branch is just a starting point.\nSee list of problems at the top; help wanted. I also put some\ninformation about this on\nhttps://wiki.postgresql.org/wiki/Continuous_Integration. I think if\nwe could get to a really good dot file for (say) the three providers\nshown there, we should just stick them in the tree so that anyone can\nturn that on for their own public development branches with a click.\nThen cfbot wouldn't have to add it, but it'd still have a good reason\nto exist, to catch bitrot and as a second line of defence for people\nwho don't opt into the first one.\n\n\n", "msg_date": "Sat, 18 Jul 2020 10:00:38 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-07-16 18:17, Tom Lane wrote:\n>> Yeah. Because we'd want to rip out those hacks, it's not quite as simple\n>> as \"regenerate configure with this other autoconf version\"; somebody will\n>> have to do some preliminary investigation and produce a patch for the\n>> autoconf input files. Peter, were you intending to do that?\n\n> Okay, let's take a look. 
Attached is a patch series.\n\nI haven't carefully read any of these patches, but I applied them and did\nsome testing on a couple of machines. I find that I get near-identical\noutput files, so the functionality appears OK. There are just a couple of\nnon-comment differences in pg_config.h:\n\n1. The new autoconf omits\n#define HAVE_MEMORY_H 1\nwhich we don't care about, so that's not an issue.\n\n2. \"pg_restrict\" and \"restrict\" get defined as \"__restrict__\" not\n\"__restrict\". That seems cosmetic.\n\n> One thing that's annoying is that the release notes claim that configure \n> should now be faster, and some of the changes they have made should \n> support that, but my (limited) testing doesn't bear that out. Most \n> notably, the newly arisen test\n> checking for g++ option to enable C++11 features... none needed\n> takes approximately 10 seconds(!) on my machine (for one loop, since \n> \"none needed\"; good luck if you need more than none).\n\nYeah, I confirm these results. The time penalty for the \"C++11 features\"\ntest is about 8 seconds on my RHEL8 machine, but only about 3 seconds\non a current MacBook Pro. Not sure if that's all about faster hardware\nor if clang is faster than gcc for this test.\n\nNow, the one bit of good news about that is that the result is cacheable:\nusing either ccache or configure --enable-cache causes the time for a\nrepeated test to drop to nil. Still, it's going to be damn annoying for\nenvironments where neither escape hatch applies.\n\nAs best I can tell, the reason it's so slow is that somebody decided they\nought to have a completionist approach to discovering whether the compiler\nhas \"C++11 features\". The test program embedded in configure for this\nis a good 220 lines long, and it imports 20 different header files, and\nappears to be trying to exercise every one of those modules. This seems\nutterly unreasonable. The traditional autoconf approach, I think, has\nbeen to test for a couple of bellwether features and assume that if\nyou have those then you have the full set. As you say, it'd be\nparticularly important to do it like that if the test requires multiple\niterations to find working switches.\n\nBTW, when I tried this on an old gcc (gaur's compiler), the C++11 test\nfailed fairly quickly but then it spent an equally ridiculous amount of\ntime testing for \"C++98 features\". So both parts of that are completely\noverdesigned if you ask me.\n\nSo I think we should push back on that test, or if all else fails\nfind a way to dike it out of our configure run --- we don't actually\ncare about these feature switches do we?\n\nAlso, I'm kinda wondering why our configure script investigates g++\nat all when I haven't specified --with-llvm. Up to now, that hasn't\nbeen enough of a penalty to really get me irate, but this behavior\nmight get me on the warpath about it.\n\nAnyway, the bottom line for speed is that on a modern Linux (RHEL8),\nconsidering only the runtime in the fully-cached case (both ccache and\naccache up to date), it does seem like the new version is noticeably\nfaster: I see ~2.2 seconds instead of 2.7. 
Similarly on my MacBook Pro.\nIt's hard to compare the non-cached cases because the silly g++ test\nswamps everything.\n\n> This clearly depends on a lot of specifics of the environment, so some \n> more testing would be useful.\n\nThe test scenarios I tried were\n\n* gcc 8.3.1 on RHEL8\n* Apple clang version 11.0.3 on macOS Catalina\n* gcc 4.5.4 on HPUX 10.20\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Jul 2020 20:15:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "Hi,\n\nOn 2020-07-18 20:15:52 -0400, Tom Lane wrote:\n> As best I can tell, the reason it's so slow is that somebody decided they\n> ought to have a completionist approach to discovering whether the compiler\n> has \"C++11 features\". The test program embedded in configure for this\n> is a good 220 lines long, and it imports 20 different header files, and\n> appears to be trying to exercise every one of those modules. This seems\n> utterly unreasonable. The traditional autoconf approach, I think, has\n> been to test for a couple of bellwether features and assume that if\n> you have those then you have the full set. As you say, it'd be\n> particularly important to do it like that if the test requires multiple\n> iterations to find working switches.\n\nYea, that's way over top.\n\n\n> So I think we should push back on that test, or if all else fails\n> find a way to dike it out of our configure run --- we don't actually\n> care about these feature switches do we?\n\nNot at the moment, at least.\n\n\n> Also, I'm kinda wondering why our configure script investigates g++\n> at all when I haven't specified --with-llvm. Up to now, that hasn't\n> been enough of a penalty to really get me irate, but this behavior\n> might get me on the warpath about it.\n\nIIRC we ended up doing it that way because it'd be annoying for pgxs\nusing extensions etc to not be able to rely on the c++ compiler being\ndetected, even when actually available.\n\n\nThe docs don't mention disabling the conformance tests:\n\n> If necessary, add an option to output variable @code{CXX} to enable\n> support for ISO Standard C++ features with extensions. Prefer the\n> newest C++ standard that is supported. Currently the newest standard is\n> ISO C++11, with ISO C++98 being the previous standard. After calling\n> this macro you can check whether the C++ compiler has been set to accept\n> Standard C++; if not, the shell variable @code{ac_cv_prog_cxx_stdcxx} is\n> set to @samp{no}. 
If the C++ compiler will not accept C++11, the shell\n> variable @code{ac_cv_prog_cxx_cxx11} is set to @samp{no}, and if it will\n> not accept C++98, the shell variable @code{ac_cv_prog_cxx_cxx98} is set\n> to @samp{no}.\n\nAnd it can't be cleanly done in the code either afaict:\n\nIn lib/autoconf/c.m4\n_AC_PROG_CXX_CXX11([ac_prog_cxx_stdcxx=cxx11\n\t\t ac_cv_prog_cxx_stdcxx=$ac_cv_prog_cxx_cxx11\n\t\t ac_cv_prog_cxx_cxx98=$ac_cv_prog_cxx_cxx11],\n [_AC_PROG_CXX_CXX98([ac_prog_cxx_stdcxx=cxx98\n\t\t ac_cv_prog_cxx_stdcxx=$ac_cv_prog_cxx_cxx98],\n\t\t [ac_prog_cxx_stdcxx=no\n\t\t ac_cv_prog_cxx_stdcxx=no])])\n\nPresumably we could, as a pretty ugly workaround, define the cache\nvariable to a constant value :/\n\nThe commit that added this is\n\ncommit bd79b51000e2fe59368c93ff463adb59852ec6e7\nAuthor: Roger Leigh <rleigh@debian.org>\nDate: 2013-01-20 18:50:49 +0000\n\n AC_PROG_CXX: Add checks for C++11, C++98TR1 and C++98\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 18 Jul 2020 17:31:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "On 2020-07-17 10:46, Peter Eisentraut wrote:\n> v1-0001-Rename-configure.in-to-configure.ac.patch\n\nI have committed that, and I have sent feedback to the Autoconf \ndevelopers about our concerns about the slowness of some of the new tests.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 24 Jul 2020 11:13:28 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-07-17 10:46, Peter Eisentraut wrote:\n>> v1-0001-Rename-configure.in-to-configure.ac.patch\n\n> I have committed that, and I have sent feedback to the Autoconf \n> developers about our concerns about the slowness of some of the new tests.\n\nSounds good. Do we want to try Noah's idea of temporarily committing\nthe remaining changes, to see if the buildfarm is happy?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Jul 2020 09:23:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "On 2020-07-24 15:23, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2020-07-17 10:46, Peter Eisentraut wrote:\n>>> v1-0001-Rename-configure.in-to-configure.ac.patch\n> \n>> I have committed that, and I have sent feedback to the Autoconf\n>> developers about our concerns about the slowness of some of the new tests.\n\nThe slow C++ feature test has been fixed in Autoconf git.\n\n> Sounds good. Do we want to try Noah's idea of temporarily committing\n> the remaining changes, to see if the buildfarm is happy?\n\nI think to get value out of this you'd have to compare the config.status \noutput files (mainly pg_config.h and Makefile.global) before and after. \nOtherwise you're just testing that the shell can parse the script. 
\nPerhaps some manual tests on, say, AIX and HP-UX using the native shell \nwould be of some value.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 25 Aug 2020 16:01:49 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-07-24 15:23, Tom Lane wrote:\n>> Sounds good. Do we want to try Noah's idea of temporarily committing\n>> the remaining changes, to see if the buildfarm is happy?\n\n> I think to get value out of this you'd have to compare the config.status \n> output files (mainly pg_config.h and Makefile.global) before and after. \n> Otherwise you're just testing that the shell can parse the script. \n> Perhaps some manual tests on, say, AIX and HP-UX using the native shell \n> would be of some value.\n\nI already did that on assorted boxes, using the patches you previously\nposted [1]. Do you think there's value in re-doing it manually,\nrather than just having at it with the buildfarm?\n\n(I did not try to test whether the configure script itself could be\nregenerated on an ancient platform; I doubt we care.)\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/30379.1595117752%40sss.pgh.pa.us\n\n\n", "msg_date": "Tue, 25 Aug 2020 12:44:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: renaming configure.in to configure.ac" }, { "msg_contents": "On 2020-08-25 18:44, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2020-07-24 15:23, Tom Lane wrote:\n>>> Sounds good. Do we want to try Noah's idea of temporarily committing\n>>> the remaining changes, to see if the buildfarm is happy?\n> \n>> I think to get value out of this you'd have to compare the config.status\n>> output files (mainly pg_config.h and Makefile.global) before and after.\n>> Otherwise you're just testing that the shell can parse the script.\n>> Perhaps some manual tests on, say, AIX and HP-UX using the native shell\n>> would be of some value.\n> \n> I already did that on assorted boxes, using the patches you previously\n> posted [1]. Do you think there's value in re-doing it manually,\n> rather than just having at it with the buildfarm?\n\nI think right now we don't need any further organized testing.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 26 Aug 2020 16:33:20 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: renaming configure.in to configure.ac" } ]
[ { "msg_contents": "Hi,\n\nI'm bumping this thread on pgsql-hacker, hopefully it will drag some more\nopinions/discussions.\n\nShould we try to fix this issue or not? This is clearly an upstream bug. It has\nbeen reported, including regression tests, but this doesn't move since 2 years\nnow.\n\nIf we choose not to fix it on our side using eg a workaround (see patch), I\nsuppose this small bug should be documented somewhere so people are not lost\nalone in the wild.\n\nOpinions?\n\nRegards,\n\nBegin forwarded message:\n\nDate: Sat, 13 Jun 2020 00:43:22 +0200\nFrom: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\nTo: Thomas Munro <thomas.munro@gmail.com>, Peter Geoghegan <pg@bowt.ie>\nCc: Роман Литовченко <roman.lytovchenko@gmail.com>, PostgreSQL mailing lists\n<pgsql-bugs@lists.postgresql.org> Subject: Re: BUG #15285: Query used index\nover field with ICU collation in some cases wrongly return 0 rows\n\n\nOn Fri, 12 Jun 2020 18:40:55 +0200\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n\n> On Wed, 10 Jun 2020 00:29:33 +0200\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> [...] \n> > After playing with ICU regression tests, I found functions ucol_strcollIter\n> > and ucol_nextSortKeyPart are safe. I'll do some performance tests and report\n> > here. \n> \n> I did some benchmarks. See attachment for the script and its header to\n> reproduce.\n> \n> It sorts 935895 french phrases from 0 to 122 chars with an average of 49.\n> Performance tests were done on current master HEAD (buggy) and using the patch\n> in attachment, relying on ucol_strcollIter.\n> \n> My preliminary test with ucol_getSortKey was catastrophic, as we might\n> expect. 15-17x slower than the current HEAD. So I removed it from actual\n> tests. I didn't try with ucol_nextSortKeyPart though.\n> \n> Using ucol_strcollIter performs ~20% slower than HEAD on UTF8 databases, but\n> this might be acceptable. Here are the numbers:\n> \n> DB Encoding HEAD strcollIter ratio\n> UTF8 2.74 3.27 1.19x\n> LATIN1 5.34 5.40 1.01x\n> \n> I plan to add a regression test soon. \n\nPlease, find in attachment the second version of the patch, with a\nregression test.\n\nRegards,\n\n\n-- \nJehan-Guillaume de Rorthais\nDalibo", "msg_date": "Wed, 15 Jul 2020 15:52:20 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "BUG #15285: Query used index over field with ICU collation in some\n cases wrongly return 0 rows" } ]
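The thread above settles on driving comparisons through ucol_strcollIter() with UTF-8 character iterators, which avoids both the problematic code path and the cost of building full sort keys. For readers who have not used that part of ICU, the following standalone C sketch shows only the calling pattern under discussion; the locale name, the sample strings, and the build line are placeholder assumptions for illustration, and this is not the patch attached to the message above. It should build with something along the lines of: cc icu_demo.c $(pkg-config --cflags --libs icu-i18n icu-uc)

#include <stdio.h>
#include <string.h>
#include <unicode/ucol.h>
#include <unicode/uiter.h>

int
main(void)
{
    UErrorCode status = U_ZERO_ERROR;
    UCollator *coll = ucol_open("fr-FR", &status);   /* placeholder locale */
    const char *s1 = "cote";
    const char *s2 = "côte";
    UCharIterator it1;
    UCharIterator it2;
    UCollationResult cmp;

    if (U_FAILURE(status))
    {
        fprintf(stderr, "ucol_open: %s\n", u_errorName(status));
        return 1;
    }

    /* iterate over the UTF-8 bytes directly; no UTF-16 copy of the inputs */
    uiter_setUTF8(&it1, s1, (int32_t) strlen(s1));
    uiter_setUTF8(&it2, s2, (int32_t) strlen(s2));

    cmp = ucol_strcollIter(coll, &it1, &it2, &status);
    if (U_FAILURE(status))
    {
        fprintf(stderr, "ucol_strcollIter: %s\n", u_errorName(status));
        ucol_close(coll);
        return 1;
    }

    printf("'%s' %s '%s'\n", s1,
           cmp == UCOL_LESS ? "<" : cmp == UCOL_GREATER ? ">" : "=", s2);

    ucol_close(coll);
    return 0;
}

Because the collator consumes the strings through iterators, no intermediate conversion buffer is needed, which fits the benchmark quoted above: roughly 20% overhead on UTF8 databases, versus the far larger cost measured for building sort keys.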
[ { "msg_contents": "Besides the great efforts that Dmitry et al. are putting into the skip scan for DISTINCT queries [1], I'm also still keen on extending the use of it further. I'd like to address the limited cases in which skipping can occur here. A few months ago I shared an initial rough patch that provided a generic skip implementation, but lacked the proper planning work [2]. I'd like to share a second patch set that provides an implementation of the planner as well. Perhaps this can lead to some proper discussions how we'd like to shape this patch further.\n\nPlease see [2] for an introduction and some rough performance comparisons. This patch improves upon those, because it implements proper cost estimation logic. It will now only choose the skip scan if it's deemed to be cheaper than using a regular index scan. Other than that, all the features are still there. The skip scan can be used in many more types of queries than in the original DISTINCT patch as provided in [1], making it more performant and also more predictable for users.\n\nI'm keen on receiving feedback on this idea and on the patch. I believe it could be a great feature that is useful to many users. However, when I posted the previous version of the patch, only Thomas expressed his explicit interest in the feature. It would be useful for me to know if there's enough interest here. Please speak out as well if you can't (currently) review, but do think that this feature is worth the effort.\n\nI'm sure there are still plenty of things that need to be improved. I have some in mind, but at the moment it's hard for me to judge which ones are really important and which ones are not. I think I really need someone with more experience of the code looking at this for feedback.\n\nv9-0001 + v9-0002 are Andy's UniqueKeys patches [3]\nv01-0001 is a slightly modified version of Dmitry's extension of unique keys patch (his lastest patch plus the diff patch that I posted in the original index skip thread)\nv01-0002 is the bulk of the work: the skip implementation, indexam interface and implementation for DISTINCT queries\nv01-0003 is the additional planner work to add support for skipping in regular index scans (non-DISTINCT)\n\n-Floris\n\n[1] https://www.postgresql.org/message-id/flat/20200609102247.jdlatmfyeecg52fi@localhost\n[2] https://www.postgresql.org/message-id/c5c5c974714a47f1b226c857699e8680%40opammb0561.comp.optiver.com\n[3] https://www.postgresql.org/message-id/flat/CAKU4AWrwZMAL=uaFUDMf4WGOVkEL3ONbatqju9nSXTUucpp_pw@mail.gmail.com", "msg_date": "Wed, 15 Jul 2020 19:52:02 +0000", "msg_from": "Floris Van Nee <florisvannee@Optiver.com>", "msg_from_op": true, "msg_subject": "Generic Index Skip Scan" }, { "msg_contents": "On Thu, 16 Jul 2020 at 07:52, Floris Van Nee <florisvannee@optiver.com> wrote:\n>\n> Besides the great efforts that Dmitry et al. are putting into the skip scan for DISTINCT queries [1], I’m also still keen on extending the use of it further. I’d like to address the limited cases in which skipping can occur here. A few months ago I shared an initial rough patch that provided a generic skip implementation, but lacked the proper planning work [2]. I’d like to share a second patch set that provides an implementation of the planner as well. Perhaps this can lead to some proper discussions how we’d like to shape this patch further.\n>\n> Please see [2] for an introduction and some rough performance comparisons. This patch improves upon those, because it implements proper cost estimation logic. 
It will now only choose the skip scan if it’s deemed to be cheaper than using a regular index scan. Other than that, all the features are still there. The skip scan can be used in many more types of queries than in the original DISTINCT patch as provided in [1], making it more performant and also more predictable for users.\n>\n> I’m keen on receiving feedback on this idea and on the patch.\n\nI don't think anyone ever thought the feature would be limited to just\nmaking DISTINCT go faster. There's certain to be more usages in the\nfuture.\n\nHowever, for me it would be premature to look at this now.\n\nDavid\n\n\n", "msg_date": "Thu, 16 Jul 2020 09:46:03 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Generic Index Skip Scan" }, { "msg_contents": "Attached v02, rebased on latest master. It uses the new nbtree lock/unlock functions by Peter and I've verified with Valgrind there's no cases where it's trying to access pages without holding the lock. Just for debugging, I ran the Valgrind session on a modified version of the patch that always favors a skip scan over a regular index scan, in order to greatly increase the Valgrind coverage of the new parts.\n\n-Floris", "msg_date": "Wed, 22 Jul 2020 21:38:39 +0000", "msg_from": "Floris Van Nee <florisvannee@Optiver.com>", "msg_from_op": true, "msg_subject": "RE: Generic Index Skip Scan" } ]
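For readers new to the underlying idea in these patches: an index skip scan walks a multi-column index by jumping from one distinct value of the leading column to the next and probing inside each group, instead of reading every tuple, which is what makes queries with a predicate only on a later column cheap. The toy C sketch below illustrates just that concept on a sorted in-memory array; the data, the target value and the helper name are invented for the example, it reports at most one match per distinct leading value, and it is not the nbtree/indexam code in the attached patches.

#include <limits.h>
#include <stdio.h>

typedef struct
{
    int a;
    int b;
} Row;

/* first index in [lo, hi) whose row sorts >= (a, b); think of a btree descent */
static int
first_at_or_after(const Row *rows, int lo, int hi, int a, int b)
{
    while (lo < hi)
    {
        int mid = lo + (hi - lo) / 2;

        if (rows[mid].a < a || (rows[mid].a == a && rows[mid].b < b))
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}

int
main(void)
{
    /* rows kept sorted on (a, b), as a two-column index would be */
    Row rows[] = {
        {1, 1}, {1, 2}, {1, 7}, {2, 2}, {2, 5}, {3, 2}, {3, 9}, {4, 4}
    };
    int nrows = (int) (sizeof(rows) / sizeof(rows[0]));
    int target_b = 2;   /* a qual on b only, nothing on the leading column */
    int pos = 0;

    while (pos < nrows)
    {
        int cur_a = rows[pos].a;
        /* probe for (cur_a, target_b) inside the current prefix group */
        int hit = first_at_or_after(rows, pos, nrows, cur_a, target_b);

        if (hit < nrows && rows[hit].a == cur_a && rows[hit].b == target_b)
            printf("match: (%d, %d)\n", rows[hit].a, rows[hit].b);

        /* the skip: jump straight to the first row of the next value of a */
        pos = first_at_or_after(rows, pos, nrows, cur_a + 1, INT_MIN);
    }
    return 0;
}

Each pass touches O(log n) entries per distinct leading value rather than every row, which is where the gains referenced in the performance comparisons above come from when the leading column has few distinct values.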
[ { "msg_contents": "I've been experimenting with trying to dump-and-restore the\nregression database, which is a test case that for some reason\nwe don't cover in the buildfarm (pg_upgrade is not the same thing).\nIt seems like the dependency choices we've made for partitioned\nindexes are a complete failure for this purpose.\n\nSetup:\n\n1. make installcheck\n2. Work around the bug complained of at [1]:\n psql regression -c 'drop table gtest30_1, gtest1_1'\n3. pg_dump -Fc regression >regression.dump\n\nIssue #1: \"--clean\" does not work\n\n1. createdb r2\n2. pg_restore -d r2 regression.dump\n3. pg_restore --clean -d r2 regression.dump\n\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 6606; 1259 35458 INDEX idxpart32_a_idx postgres\npg_restore: error: could not execute query: ERROR: cannot drop index public.idxpart32_a_idx because index public.idxpart3_a_idx requires it\nHINT: You can drop index public.idxpart3_a_idx instead.\nCommand was: DROP INDEX public.idxpart32_a_idx;\npg_restore: from TOC entry 6605; 1259 35454 INDEX idxpart31_a_idx postgres\npg_restore: error: could not execute query: ERROR: cannot drop index public.idxpart31_a_idx because index public.idxpart3_a_idx requires it\nHINT: You can drop index public.idxpart3_a_idx instead.\nCommand was: DROP INDEX public.idxpart31_a_idx;\n...\npg_restore: from TOC entry 6622; 2606 35509 CONSTRAINT pk52 pk52_pkey postgres\npg_restore: error: could not execute query: ERROR: cannot drop inherited constraint \"pk52_pkey\" of relation \"pk52\"\nCommand was: ALTER TABLE ONLY regress_indexing.pk52 DROP CONSTRAINT pk52_pkey;\npg_restore: from TOC entry 6620; 2606 35504 CONSTRAINT pk51 pk51_pkey postgres\npg_restore: error: could not execute query: ERROR: cannot drop inherited constraint \"pk51_pkey\" of relation \"pk51\"\nCommand was: ALTER TABLE ONLY regress_indexing.pk51 DROP CONSTRAINT pk51_pkey;\npg_restore: from TOC entry 6618; 2606 35502 CONSTRAINT pk5 pk5_pkey postgres\npg_restore: error: could not execute query: ERROR: cannot drop inherited constraint \"pk5_pkey\" of relation \"pk5\"\nCommand was: ALTER TABLE ONLY regress_indexing.pk5 DROP CONSTRAINT pk5_pkey;\n...\n\n(There seem to be some other problems as well, but most of the 54 complaints\nare related to partitioned indexes/constraints.)\n\nIssue #2: parallel restore does not work\n\n1. dropdb r2; createdb r2\n2. pg_restore -j8 -d r2 regression.dump \n\nThis is fairly timing-dependent, but some attempts fail with messages\nlike\n\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 6684; 2606 29166 FK CONSTRAINT fk fk_a_fkey postgres\npg_restore: error: could not execute query: ERROR: there is no unique constraint matching given keys for referenced table \"pk\"\nCommand was: ALTER TABLE fkpart3.fk\n ADD CONSTRAINT fk_a_fkey FOREIGN KEY (a) REFERENCES fkpart3.pk(a);\n\nThe problem here seems to be that some commands like this:\n\nALTER INDEX fkpart3.pk5_pkey ATTACH PARTITION fkpart3.pk52_pkey;\n\t\nare not executed soon enough, indicating that we lack dependencies\nthat would guarantee the restore order.\n\nI have not analyzed these issues in any detail -- they're just bugs\nI tripped over while testing parallel pg_restore. 
In particular\nI do not know if #1 and #2 have the same root cause.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/3169466.1594841366%40sss.pgh.pa.us\n\n\n", "msg_date": "Wed, 15 Jul 2020 15:52:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Dependencies for partitioned indexes are still a mess" }, { "msg_contents": "On 2020-Jul-15, Tom Lane wrote:\n\n> Issue #1: \"--clean\" does not work\n> \n> 1. createdb r2\n> 2. pg_restore -d r2 regression.dump\n> 3. pg_restore --clean -d r2 regression.dump\n> \n> pg_restore: while PROCESSING TOC:\n> pg_restore: from TOC entry 6606; 1259 35458 INDEX idxpart32_a_idx postgres\n> pg_restore: error: could not execute query: ERROR: cannot drop index public.idxpart32_a_idx because index public.idxpart3_a_idx requires it\n> HINT: You can drop index public.idxpart3_a_idx instead.\n> Command was: DROP INDEX public.idxpart32_a_idx;\n\nI think this problem is just that we're trying to drop a partition index\nthat's not droppable. This seems fixed with just leaving the dropStmt\nempty, as in the attached.\n\nOne issue is that if you previously restored only that particular\npartition and its indexes, but not the ATTACH command that would make it\ndependent on the parent index, there would not be a DROP command to get\nrid of it. Do we need to be concerned about that case? I'm inclined to\nthink not.\n\n> (There seem to be some other problems as well, but most of the 54 complaints\n> are related to partitioned indexes/constraints.)\n\nIn my run of it there's a good dozen remaining problems, all alike: we\ndo DROP TYPE widget CASCADE (which works) followed by DROP FUNCTION\npublic.widget_out(widget), which fails complaining that type widget\ndoesn't exist. But in reality the error is innocuous, since that\nfunction was dropped by the DROP TYPE CASCADE anyway. You could say\nthat the same thing is happening with these noisy DROP INDEX of index\npartitions: the complaints are right in that each partition's DROP INDEX\ncommand doesn't actually work, but the indexes are dropped later anyway,\nso the effect is the same.\n\n> Issue #2: parallel restore does not work\n\nLooking.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 12 Aug 2020 17:49:18 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Dependencies for partitioned indexes are still a mess" }, { "msg_contents": "Hi,\n\nOn 2020-07-15 15:52:03 -0400, Tom Lane wrote:\n> I've been experimenting with trying to dump-and-restore the\n> regression database, which is a test case that for some reason\n> we don't cover in the buildfarm (pg_upgrade is not the same thing).\n\nYea, we really should have that. IIRC I was trying to add that, and\ntests that compare dumps from primary / standby, and failed due to some\ndifferences that were hard to fix.\n\nA quick test with pg_dumpall shows some odd differences after:\n1) create new cluster\n2) installcheck-parallel\n3) drop table gtest30_1, gtest1_1;\n4) pg_dumpall > first.sql\n5) recreate cluster\n6) psql < first.sql > first.sql.log\n7) pg_dumpall > second.sql\n\nI've attached the diff between first.sql and second.sql. 
Here's the\nhighlights:\n\n@@ -15392,9 +15392,9 @@\n --\n \n CREATE TABLE public.test_type_diff2_c1 (\n+ int_two smallint,\n int_four bigint,\n- int_eight bigint,\n- int_two smallint\n+ int_eight bigint\n )\n INHERITS (public.test_type_diff2);\n...\n\n@@ -39194,10 +39194,10 @@\n -- Data for Name: b_star; Type: TABLE DATA; Schema: public; Owner: andres\n --\n \n-COPY public.b_star (class, aa, bb, a) FROM stdin;\n-b 3 mumble \\N\n+COPY public.b_star (class, aa, a, bb) FROM stdin;\n+b 3 \\N mumble\n b 4 \\N \\N\n-b \\N bumble \\N\n+b \\N \\N bumble\n b \\N \\N \\N\n \\.\n \n\n@@ -323780,7 +323780,7 @@\n -- Data for Name: renamecolumnanother; Type: TABLE DATA; Schema: public; Owner: andres\n --\n \n-COPY public.renamecolumnanother (d, a, c, w) FROM stdin;\n+COPY public.renamecolumnanother (d, w, a, c) FROM stdin;\n \\.\n \n \n\nThe primary / standby differences are caused by sequence logging. I\nwonder if there's some good way to hide those, or to force them to be\nthe same between primary / standby, without hiding bugs.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 12 Aug 2020 15:13:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Dependencies for partitioned indexes are still a mess" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I've attached the diff between first.sql and second.sql. Here's the\n> highlights:\n\nAs I recall, the differences in b_star etc are expected, because\npg_dump reorders that table's columns to match its inheritance parent,\nwhich they don't to start with because of ALTER TABLE operations.\n\nI'm pretty sure we set it up that way deliberately ages ago, because\npg_dump used to have bugs when contending with such cases. Not sure\nabout a good way to mechanize recognizing that these diffs are\nexpected.\n\nDunno about test_type_diff2, but it might be a newer instance of\nthe same thing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Aug 2020 18:29:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Dependencies for partitioned indexes are still a mess" }, { "msg_contents": "Hi,\n\nOn 2020-08-12 18:29:16 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I've attached the diff between first.sql and second.sql. Here's the\n> > highlights:\n> \n> As I recall, the differences in b_star etc are expected, because\n> pg_dump reorders that table's columns to match its inheritance parent,\n> which they don't to start with because of ALTER TABLE operations.\n\nUgh. Obviously applications shouldn't use INSERT or SELECT without a\ntarget list, but that still seems somewhat nasty.\n\nI guess we could script it so that we don't compare the \"original\" with\na restored database, but instead compare the restored version with one\nrestored from that. But that seems likely to hide bugs.\n\n\nGiven that pg_dump already re-orders the columns for DDL, could we make\nit apply that re-ordering not just during the CREATE TABLE, but also\nwhen dumping the table contents?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 Aug 2020 15:38:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Dependencies for partitioned indexes are still a mess" }, { "msg_contents": "On 2020-Jul-15, Tom Lane wrote:\n\n> Issue #2: parallel restore does not work\n> \n> 1. dropdb r2; createdb r2\n> 2. 
pg_restore -j8 -d r2 regression.dump \n> \n> This is fairly timing-dependent, but some attempts fail with messages\n> like\n> \n> pg_restore: while PROCESSING TOC:\n> pg_restore: from TOC entry 6684; 2606 29166 FK CONSTRAINT fk fk_a_fkey postgres\n> pg_restore: error: could not execute query: ERROR: there is no unique constraint matching given keys for referenced table \"pk\"\n> Command was: ALTER TABLE fkpart3.fk\n> ADD CONSTRAINT fk_a_fkey FOREIGN KEY (a) REFERENCES fkpart3.pk(a);\n\nHmm, we do make the FK constraint depend on the ATTACH for the direct\nchildren; what I think we're lacking is dependencies on descendants\ntwice-removed (?) or higher. This mock patch seems to fix this problem\nby adding dependencies recursively on all children of the index; I no\nlonger see this problem with it.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 12 Aug 2020 18:48:28 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Dependencies for partitioned indexes are still a mess" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Given that pg_dump already re-orders the columns for DDL, could we make\n> it apply that re-ordering not just during the CREATE TABLE, but also\n> when dumping the table contents?\n\nHm, possibly. I think when this was last looked at, we didn't have any\nway to get COPY to output the columns in non-physical order, but now we\ndo.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Aug 2020 19:13:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Dependencies for partitioned indexes are still a mess" }, { "msg_contents": "On 2020-Aug-12, Alvaro Herrera wrote:\n\n> Hmm, we do make the FK constraint depend on the ATTACH for the direct\n> children; what I think we're lacking is dependencies on descendants\n> twice-removed (?) or higher. This mock patch seems to fix this problem\n> by adding dependencies recursively on all children of the index; I no\n> longer see this problem with it.\n\nAfter going over this some more, this analysis seems correct. Here's a\nbetter version of the patch which seems final to me.\n\nI'm not yet clear on whether the noisy DROP INDEX is an actual bug that\nneeds to be fixed, or instead it needs to be left alone.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 14 Aug 2020 13:30:08 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Dependencies for partitioned indexes are still a mess" }, { "msg_contents": "On 2020-Aug-14, Alvaro Herrera wrote:\n\n> On 2020-Aug-12, Alvaro Herrera wrote:\n> \n> > Hmm, we do make the FK constraint depend on the ATTACH for the direct\n> > children; what I think we're lacking is dependencies on descendants\n> > twice-removed (?) or higher. This mock patch seems to fix this problem\n> > by adding dependencies recursively on all children of the index; I no\n> > longer see this problem with it.\n> \n> After going over this some more, this analysis seems correct. 
Here's a\n> better version of the patch which seems final to me.\n\nPushed.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 14 Aug 2020 17:35:34 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Dependencies for partitioned indexes are still a mess" }, { "msg_contents": "On 2020-Aug-12, Alvaro Herrera wrote:\n\n> On 2020-Jul-15, Tom Lane wrote:\n\n> > (There seem to be some other problems as well, but most of the 54 complaints\n> > are related to partitioned indexes/constraints.)\n> \n> In my run of it there's a good dozen remaining problems, all alike: we\n> do DROP TYPE widget CASCADE (which works) followed by DROP FUNCTION\n> public.widget_out(widget), which fails complaining that type widget\n> doesn't exist. But in reality the error is innocuous, since that\n> function was dropped by the DROP TYPE CASCADE anyway. You could say\n> that the same thing is happening with these noisy DROP INDEX of index\n> partitions: the complaints are right in that each partition's DROP INDEX\n> command doesn't actually work, but the indexes are dropped later anyway,\n> so the effect is the same.\n\nI pushed the typo fix that was in this patch. Other than that, I think\nthis patch should not be pushed; ISTM it would break the logic.\n(Consider that the partition with its index might exist beforehand and\nbe an independent table. If we wanted --clean to work properly, it\nshould definitely drop that index.)\n\nAlthough I'm doubtful that it makes sense to do DROP INDEX when the\ntable is going to be dropped completely, even for regular tables.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 1 Sep 2020 20:49:41 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Dependencies for partitioned indexes are still a mess" } ]
[ { "msg_contents": "As of a couple days ago, buildfarm member caiman (Fedora rawhide)\nis failing like this in all the pre-v12 branches:\n\nccache gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -O2 -DFRONTEND -I../../src/include -D_GNU_SOURCE -I/usr/include/libxml2 -c -o wait_error.o wait_error.c\nwait_error.c: In function \\342\\200\\230wait_result_to_str\\342\\200\\231:\nwait_error.c:71:6: error: \\342\\200\\230sys_siglist\\342\\200\\231 undeclared (first use in this function)\n 71 | sys_siglist[WTERMSIG(exitstatus)] : \"(unknown)\");\n | ^~~~~~~~~~~\nwait_error.c:71:6: note: each undeclared identifier is reported only once for each function it appears in\nmake[2]: *** [<builtin>: wait_error.o] Error 1\n\nWe haven't changed anything, ergo something changed at the OS level.\n\nOddly, we'd not get to this code unless configure set\nHAVE_DECL_SYS_SIGLIST, so it's defined *somewhere*. I suspect the root\nissue here is some rearrangement of system header files combined with\nwait_error.c (and maybe other places?) not including exactly the same\nheaders that configure tested.\n\nAnyway, rather than installing rawhide and trying to debug this,\nI'd like to make a modest proposal: let's back-patch the v12\npatches that made us stop relying on sys_siglist[], viz a73d08319\nand cc92cca43. Per the discussions that led to those patches,\nit's been decades since any platform didn't have POSIX-compliant\nstrsignal(), so we'd be much better off relying on that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Jul 2020 18:48:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "sys_siglist[] is causing us trouble again" }, { "msg_contents": "On Wed, Jul 15, 2020 at 7:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> As of a couple days ago, buildfarm member caiman (Fedora rawhide)\n> is failing like this in all the pre-v12 branches:\n>\n> ccache gcc -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute\n> -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n> -Wno-format-truncation -Wno-stringop-truncation -g -O2 -DFRONTEND\n> -I../../src/include -D_GNU_SOURCE -I/usr/include/libxml2 -c -o\n> wait_error.o wait_error.c\n> wait_error.c: In function \\342\\200\\230wait_result_to_str\\342\\200\\231:\n> wait_error.c:71:6: error: \\342\\200\\230sys_siglist\\342\\200\\231 undeclared\n> (first use in this function)\n> 71 | sys_siglist[WTERMSIG(exitstatus)] : \"(unknown)\");\n> | ^~~~~~~~~~~\n> wait_error.c:71:6: note: each undeclared identifier is reported only once\n> for each function it appears in\n> make[2]: *** [<builtin>: wait_error.o] Error 1\n>\n> We haven't changed anything, ergo something changed at the OS level.\n>\n> Oddly, we'd not get to this code unless configure set\n> HAVE_DECL_SYS_SIGLIST, so it's defined *somewhere*. I suspect the root\n> issue here is some rearrangement of system header files combined with\n> wait_error.c (and maybe other places?) not including exactly the same\n> headers that configure tested.\n>\n> Anyway, rather than installing rawhide and trying to debug this,\n> I'd like to make a modest proposal: let's back-patch the v12\n> patches that made us stop relying on sys_siglist[], viz a73d08319\n> and cc92cca43. 
Per the discussions that led to those patches,\n> it's been decades since any platform didn't have POSIX-compliant\n> strsignal(), so we'd be much better off relying on that.\n>\n> regards, tom lane\n>\n\n I believe it's related with these recent glibc changes at rawhide.\nhttps://src.fedoraproject.org/rpms/glibc/c/0aab7eb58528999277c626fc16682da179de03d0?branch=master\n\n - signal: Move sys_errlist to a compat symbol\n - signal: Move sys_siglist to a compat symbol\nSHA512 (glibc-2.31.9000-683-gffb17e7ba3.tar.xz) =\n103ff3c04de5dc149df93e5399de1630f6fff1b8d7f127881d6e530492b8b953a8064205ceecb311a77c0a10de3a5ab2056121fd1fa833a30327c6b1f08beacc\n\nOn Wed, Jul 15, 2020 at 7:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:As of a couple days ago, buildfarm member caiman (Fedora rawhide)\nis failing like this in all the pre-v12 branches:\n\nccache gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -O2 -DFRONTEND -I../../src/include -D_GNU_SOURCE -I/usr/include/libxml2   -c -o wait_error.o wait_error.c\nwait_error.c: In function \\342\\200\\230wait_result_to_str\\342\\200\\231:\nwait_error.c:71:6: error: \\342\\200\\230sys_siglist\\342\\200\\231 undeclared (first use in this function)\n   71 |      sys_siglist[WTERMSIG(exitstatus)] : \"(unknown)\");\n      |      ^~~~~~~~~~~\nwait_error.c:71:6: note: each undeclared identifier is reported only once for each function it appears in\nmake[2]: *** [<builtin>: wait_error.o] Error 1\n\nWe haven't changed anything, ergo something changed at the OS level.\n\nOddly, we'd not get to this code unless configure set\nHAVE_DECL_SYS_SIGLIST, so it's defined *somewhere*.  I suspect the root\nissue here is some rearrangement of system header files combined with\nwait_error.c (and maybe other places?) not including exactly the same\nheaders that configure tested.\n\nAnyway, rather than installing rawhide and trying to debug this,\nI'd like to make a modest proposal: let's back-patch the v12\npatches that made us stop relying on sys_siglist[], viz a73d08319\nand cc92cca43.  Per the discussions that led to those patches,\nit's been decades since any platform didn't have POSIX-compliant\nstrsignal(), so we'd be much better off relying on that.\n\n                        regards, tom lane I believe it's related with these recent glibc changes at rawhide.https://src.fedoraproject.org/rpms/glibc/c/0aab7eb58528999277c626fc16682da179de03d0?branch=master    - signal: Move sys_errlist to a compat symbol  - signal: Move sys_siglist to a compat symbolSHA512 (glibc-2.31.9000-683-gffb17e7ba3.tar.xz) = 103ff3c04de5dc149df93e5399de1630f6fff1b8d7f127881d6e530492b8b953a8064205ceecb311a77c0a10de3a5ab2056121fd1fa833a30327c6b1f08beacc", "msg_date": "Wed, 15 Jul 2020 20:13:19 -0300", "msg_from": "Filipe Rosset <rosset.filipe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sys_siglist[] is causing us trouble again" }, { "msg_contents": "On Thu, Jul 16, 2020 at 10:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> We haven't changed anything, ergo something changed at the OS level.\n>\n> Oddly, we'd not get to this code unless configure set\n> HAVE_DECL_SYS_SIGLIST, so it's defined *somewhere*. I suspect the root\n> issue here is some rearrangement of system header files combined with\n> wait_error.c (and maybe other places?) 
not including exactly the same\n> headers that configure tested.\n\nIt looks like glibc very recently decided[1] to hide the declaration,\nbut we're using a cached configure test result. I guess rawhide is\nthe RH thing that tracks the bleeding edge?\n\n> Anyway, rather than installing rawhide and trying to debug this,\n> I'd like to make a modest proposal: let's back-patch the v12\n> patches that made us stop relying on sys_siglist[], viz a73d08319\n> and cc92cca43. Per the discussions that led to those patches,\n> it's been decades since any platform didn't have POSIX-compliant\n> strsignal(), so we'd be much better off relying on that.\n\nSeems sensible. Despite the claims of the glibc manual[2], it's not\nreally a GNU extension, and the BSDs have it (for decades).\n\n[1] https://sourceware.org/git/?p=glibc.git;a=commitdiff;h=b1ccfc061feee9ce616444ded8e1cd5acf9fa97f\n[2] https://www.gnu.org/software/libc/manual/html_node/Signal-Messages.html\n\n\n", "msg_date": "Thu, 16 Jul 2020 11:21:53 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sys_siglist[] is causing us trouble again" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Jul 16, 2020 at 10:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Oddly, we'd not get to this code unless configure set\n>> HAVE_DECL_SYS_SIGLIST, so it's defined *somewhere*.\n\n> It looks like glibc very recently decided[1] to hide the declaration,\n> but we're using a cached configure test result.\n\nAh, of course. I was thinking that Peter had just changed configure\nin the last day or so, but that did not affect the back branches.\nSo it's unsurprising for buildfarm animals to be using cached configure\nresults.\n\n> I guess rawhide is the RH thing that tracks the bleeding edge?\n\nYup. Possibly we should recommend that buildfarm owners running on\nnon-stable platforms disable autoconf result caching --- I believe\nthat's \"use_accache => undef\" in the configuration file.\n\nAlternatively, maybe it'd be bright for the buildfarm script to\ndiscard that cache after any failure (or at least configure or\nbuild failures).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Jul 2020 19:36:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: sys_siglist[] is causing us trouble again" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Jul 16, 2020 at 10:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> We haven't changed anything, ergo something changed at the OS level.\n\n> It looks like glibc very recently decided[1] to hide the declaration,\n> but we're using a cached configure test result.\n\nRight. So, modulo the mis-cached result, what would happen if we do\nnothing is that the back branches would lose the ability to translate\nsignal numbers to strings on bleeding-edge glibc. I don't think we\nwant that, so we need to back-patch. Attached is a lightly tested\npatch for v11. (This includes 7570df0f3 as well, so that\npgstrsignal.c will be the same in all branches.)\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 15 Jul 2020 20:14:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: sys_siglist[] is causing us trouble again" }, { "msg_contents": "\nOn 7/15/20 7:36 PM, Tom Lane wrote:\n> I guess rawhide is the RH thing that tracks the bleeding edge?\n> Yup. 
Possibly we should recommend that buildfarm owners running on\n> non-stable platforms disable autoconf result caching --- I believe\n> that's \"use_accache => undef\" in the configuration file.\n>\n> Alternatively, maybe it'd be bright for the buildfarm script to\n> discard that cache after any failure (or at least configure or\n> build failures).\n\n\n\nYeah, these lines will be added to the upcoming client code release in\nrun_build.pl Search for 'obsolete' and you'll find where to put it if\nyou want to be ahead of the curve.\n\n\nmy $last_stage = get_last_stage() || \"\";\n$obsolete ||=\n    $last_stage =~ /^(Make|Configure|Contrib|.*-build)$/;\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 16 Jul 2020 09:34:19 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: sys_siglist[] is causing us trouble again" } ]
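For illustration, the portability fix discussed in the thread above comes down to replacing direct indexing into the non-standard sys_siglist[] array with POSIX strsignal(). The following is a minimal, self-contained sketch of that approach; it is illustrative only and is not the code from commits a73d08319/cc92cca43, and the helper name signal_to_str is made up.

#include <signal.h>
#include <stdio.h>
#include <string.h>     /* strsignal(); strict -std modes may need _GNU_SOURCE or _POSIX_C_SOURCE >= 200809L */

/*
 * Map a signal number to a human-readable description without touching
 * sys_siglist[], which recent glibc no longer exposes to applications.
 */
static const char *
signal_to_str(int signum)
{
    const char *desc = strsignal(signum);

    return desc ? desc : "(unknown)";
}

int
main(void)
{
    printf("signal %d: %s\n", SIGTERM, signal_to_str(SIGTERM));
    return 0;
}

Built with the same -D_GNU_SOURCE flag that appears in the failing build log, this prints the usual "Terminated" description for SIGTERM without relying on the now-hidden array.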
[ { "msg_contents": "Hello.\n\nThe \"Certificate Authentication\" section in the doc for PG12 and later\ndescribes the relation ship with clientcert as follows.\n\n> In a pg_hba.conf record specifying certificate authentication, the\n> authentication option clientcert is assumed to be verify-ca or\n> verify-full, and it cannot be turned off since a client certificate\n> is necessary for this method. What the cert method adds to the basic\n> clientcert certificate validity test is a check that the cn\n> attribute matches the database user name.\n\nIn reality, cert method is assumed as \"vefiry-full\" and does not add\nanything to verify-full and cannot be degraded or turned off. It seems\nto be a mistake on rewriting it when clientcert was changed to accept\nverify-ca/full at PG12.\n\nRelated to that, pg_hba.conf accepts the combination of \"cert\" method\nand the option clientcert=\"verify-ca\" but it is ignored. We should\nreject that combination the same way with \"cert\"+\"no-verify\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 16 Jul 2020 09:30:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "\"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Thu, Jul 16, 2020 at 09:30:12AM +0900, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> The \"Certificate Authentication\" section in the doc for PG12 and later\n> describes the relation ship with clientcert as follows.\n> \n> > In a pg_hba.conf record specifying certificate authentication, the\n> > authentication option clientcert is assumed to be verify-ca or\n> > verify-full, and it cannot be turned off since a client certificate\n> > is necessary for this method. What the cert method adds to the basic\n> > clientcert certificate validity test is a check that the cn\n> > attribute matches the database user name.\n> \n> In reality, cert method is assumed as \"verify-full\" and does not add\n> anything to verify-full and cannot be degraded or turned off. It seems\n> to be a mistake on rewriting it when clientcert was changed to accept\n> verify-ca/full at PG12.\n\nAgreed. I was able to test this patch and it does what you explained. \nI have slightly adjusted the doc part of the patch, attached.\n\n> Related to that, pg_hba.conf accepts the combination of \"cert\" method\n> and the option clientcert=\"verify-ca\" but it is ignored. We should\n> reject that combination the same way with \"cert\"+\"no-verify\".\n\nAre you saying we should _require_ clientcert=verify-full when 'cert'\nauthentication is used? I don't see the point of that --- I just\nupdated the docs to say doing so was duplicate behavior.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Mon, 24 Aug 2020 20:01:26 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" 
}, { "msg_contents": "At Mon, 24 Aug 2020 20:01:26 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Thu, Jul 16, 2020 at 09:30:12AM +0900, Kyotaro Horiguchi wrote:\n> > Hello.\n> > \n> > The \"Certificate Authentication\" section in the doc for PG12 and later\n> > describes the relation ship with clientcert as follows.\n> > \n> > > In a pg_hba.conf record specifying certificate authentication, the\n> > > authentication option clientcert is assumed to be verify-ca or\n> > > verify-full, and it cannot be turned off since a client certificate\n> > > is necessary for this method. What the cert method adds to the basic\n> > > clientcert certificate validity test is a check that the cn\n> > > attribute matches the database user name.\n> > \n> > In reality, cert method is assumed as \"verify-full\" and does not add\n> > anything to verify-full and cannot be degraded or turned off. It seems\n> > to be a mistake on rewriting it when clientcert was changed to accept\n> > verify-ca/full at PG12.\n> \n> Agreed. I was able to test this patch and it does what you explained. \n> I have slightly adjusted the doc part of the patch, attached.\n\nThanks.\n\n In a <filename>pg_hba.conf</filename> record specifying certificate\n- authentication, the authentication option <literal>clientcert</literal> is\n- assumed to be <literal>verify-ca</literal> or <literal>verify-full</literal>,\n- and it cannot be turned off since a client certificate is necessary for this\n- method. What the <literal>cert</literal> method adds to the basic\n- <literal>clientcert</literal> certificate validity test is a check that the\n- <literal>cn</literal> attribute matches the database user name.\n+ authentication, the only valid value for <literal>clientcert</literal>\n+ is <literal>verify-full</literal>, and this has no affect since it\n+ just duplicates <literal>client</literal> authentication's behavior.\n\nI read it as \"it can be specified (without an error), but actually\ndoes nothing\". If it is the correct reading, I prefer to mention that\nincompatible values cause an error.\n\n> > Related to that, pg_hba.conf accepts the combination of \"cert\" method\n> > and the option clientcert=\"verify-ca\" but it is ignored. We should\n> > reject that combination the same way with \"cert\"+\"no-verify\".\n> \n> Are you saying we should _require_ clientcert=verify-full when 'cert'\n> authentication is used? I don't see the point of that --- I just\n> updated the docs to say doing so was duplicate behavior.\n\nI don't suggest changing the current behavior. I'm saying it is the\nway it is working and we should correctly error-out that since it\ndoesn't work as specified.\n\nauth.c:608\n\tif ((status == STATUS_OK && port->hba->clientcert == clientCertFull)\n\t\t|| port->hba->auth_method == uaCert)\n\t{\n\t\t/*\n\t\t * Make sure we only check the certificate if we use the cert method\n\t\t * or verify-full option.\n\t\t */\n#ifdef USE_SSL\n\t\tstatus = CheckCertAuth(port);\n#else\n\t\tAssert(false);\n#endif\n\t}\n\nregard.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 25 Aug 2020 10:41:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" 
}, { "msg_contents": "On Tue, Aug 25, 2020 at 10:41:26AM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 24 Aug 2020 20:01:26 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > I have slightly adjusted the doc part of the patch, attached.\n> \n> Thanks.\n> \n> In a <filename>pg_hba.conf</filename> record specifying certificate\n> - authentication, the authentication option <literal>clientcert</literal> is\n> - assumed to be <literal>verify-ca</literal> or <literal>verify-full</literal>,\n> - and it cannot be turned off since a client certificate is necessary for this\n> - method. What the <literal>cert</literal> method adds to the basic\n> - <literal>clientcert</literal> certificate validity test is a check that the\n> - <literal>cn</literal> attribute matches the database user name.\n> + authentication, the only valid value for <literal>clientcert</literal>\n> + is <literal>verify-full</literal>, and this has no affect since it\n> + just duplicates <literal>client</literal> authentication's behavior.\n> \n> I read it as \"it can be specified (without an error), but actually\n> does nothing\". If it is the correct reading, I prefer to mention that\n> incompatible values cause an error.\n\nWell, when I say \"the only valid value\", that means any other value is\ninvalid, and hence will generate an error.\n\n> > > Related to that, pg_hba.conf accepts the combination of \"cert\" method\n> > > and the option clientcert=\"verify-ca\" but it is ignored. We should\n> > > reject that combination the same way with \"cert\"+\"no-verify\".\n> > \n> > Are you saying we should _require_ clientcert=verify-full when 'cert'\n> > authentication is used? I don't see the point of that --- I just\n> > updated the docs to say doing so was duplicate behavior.\n> \n> I don't suggest changing the current behavior. I'm saying it is the\n> way it is working and we should correctly error-out that since it\n> doesn't work as specified.\n\nUh, I don't understand what 'combination the same way with\n\"cert\"+\"no-verify\"'. Right now, cert with no clientcert/verify line\nworks just fine. Is \"no-verify\" something special? Are you saying it\nis any random string that would generate an error?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 24 Aug 2020 21:49:40 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "At Mon, 24 Aug 2020 21:49:40 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > > Are you saying we should _require_ clientcert=verify-full when 'cert'\n> > > authentication is used? I don't see the point of that --- I just\n> > > updated the docs to say doing so was duplicate behavior.\n> > \n> > I don't suggest changing the current behavior. I'm saying it is the\n> > way it is working and we should correctly error-out that since it\n> > doesn't work as specified.\n\nSorry, I mistead you. I don't suggest verify-full is needed for cert\nauthentication. I said we should just reject the combination\ncert+veriry-ca.\n\n> Uh, I don't understand what 'combination the same way with\n> \"cert\"+\"no-verify\"'. Right now, cert with no clientcert/verify line\n> works just fine. Is \"no-verify\" something special? 
Are you saying it\n> is any random string that would generate an error?\n\nIt was delimited as \"We should reject (that)\" \"that combination\n(=cert+ferify-ca)\" \"the same way(=error-out)\" \"with cert+no-verify\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 25 Aug 2020 11:00:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Tue, Aug 25, 2020 at 11:00:49AM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 24 Aug 2020 21:49:40 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > > > Are you saying we should _require_ clientcert=verify-full when 'cert'\n> > > > authentication is used? I don't see the point of that --- I just\n> > > > updated the docs to say doing so was duplicate behavior.\n> > > \n> > > I don't suggest changing the current behavior. I'm saying it is the\n> > > way it is working and we should correctly error-out that since it\n> > > doesn't work as specified.\n> \n> Sorry, I mistead you. I don't suggest verify-full is needed for cert\n> authentication. I said we should just reject the combination\n> cert+veriry-ca.\n\nOK.\n\n> > Uh, I don't understand what 'combination the same way with\n> > \"cert\"+\"no-verify\"'. Right now, cert with no clientcert/verify line\n> > works just fine. Is \"no-verify\" something special? Are you saying it\n> > is any random string that would generate an error?\n> \n> It was delimited as \"We should reject (that)\" \"that combination\n> (=cert+ferify-ca)\" \"the same way(=error-out)\" \"with cert+no-verify\".\n\nOK, and that is what your patch does, right? And we should error out on\n\"with cert+no-verify\" just like \"with cert+XXXXXX\", right? I don't see\n\"no-verify\" mentioned anywhere in our docs.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 24 Aug 2020 22:06:45 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "Thank you for the patience.\n\nAt Mon, 24 Aug 2020 22:06:45 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Tue, Aug 25, 2020 at 11:00:49AM +0900, Kyotaro Horiguchi wrote:\n> > At Mon, 24 Aug 2020 21:49:40 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > > > > Are you saying we should _require_ clientcert=verify-full when 'cert'\n> > > > > authentication is used? I don't see the point of that --- I just\n> > > > > updated the docs to say doing so was duplicate behavior.\n> > > > \n> > > > I don't suggest changing the current behavior. I'm saying it is the\n> > > > way it is working and we should correctly error-out that since it\n> > > > doesn't work as specified.\n> > \n> > Sorry, I mistead you. I don't suggest verify-full is needed for cert\n> > authentication. I said we should just reject the combination\n> > cert+veriry-ca.\n> \n> OK.\n> \n> > > Uh, I don't understand what 'combination the same way with\n> > > \"cert\"+\"no-verify\"'. Right now, cert with no clientcert/verify line\n> > > works just fine. Is \"no-verify\" something special? 
Are you saying it\n> > > is any random string that would generate an error?\n> > \n> > It was delimited as \"We should reject (that)\" \"that combination\n> > (=cert+ferify-ca)\" \"the same way(=error-out)\" \"with cert+no-verify\".\n> \n> OK, and that is what your patch does, right?\n\nYes, \n\n> And we should error out on \"with cert+no-verify\" just like \"with\n> cert+XXXXXX\", right?\n\nCurrently only cert+no-verify is rejected. The patch makes \"cert+verify-ca\" be rejected.\n\n> I don't see \"no-verify\" mentioned anywhere in our docs.\n\nno-verify itself is mentioned here.\n\nhttps://www.postgresql.org/docs/13/ssl-tcp.html#SSL-CLIENT-CERTIFICATES\n\n> The clientcert authentication option is available for all\n> authentication methods, but only in pg_hba.conf lines specified as\n> hostssl. When clientcert is not specified or is set to *no-verify*,\n> the server will still verify any presented client certificates\n> against its CA file, if one is configured ― but it will not insist\n> that a client certificate be presented.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 25 Aug 2020 11:41:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Tue, Aug 25, 2020 at 11:41:55AM +0900, Kyotaro Horiguchi wrote:\n> Thank you for the patience.\n> \n> At Mon, 24 Aug 2020 22:06:45 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > On Tue, Aug 25, 2020 at 11:00:49AM +0900, Kyotaro Horiguchi wrote:\n> > > At Mon, 24 Aug 2020 21:49:40 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > > > > > Are you saying we should _require_ clientcert=verify-full when 'cert'\n> > > > > > authentication is used? I don't see the point of that --- I just\n> > > > > > updated the docs to say doing so was duplicate behavior.\n> > > > > \n> > > > > I don't suggest changing the current behavior. I'm saying it is the\n> > > > > way it is working and we should correctly error-out that since it\n> > > > > doesn't work as specified.\n> > > \n> > > Sorry, I mistead you. I don't suggest verify-full is needed for cert\n> > > authentication. I said we should just reject the combination\n> > > cert+veriry-ca.\n> > \n> > OK.\n> > \n> > > > Uh, I don't understand what 'combination the same way with\n> > > > \"cert\"+\"no-verify\"'. Right now, cert with no clientcert/verify line\n> > > > works just fine. Is \"no-verify\" something special? Are you saying it\n> > > > is any random string that would generate an error?\n> > > \n> > > It was delimited as \"We should reject (that)\" \"that combination\n> > > (=cert+ferify-ca)\" \"the same way(=error-out)\" \"with cert+no-verify\".\n> > \n> > OK, and that is what your patch does, right?\n> \n> Yes, \n> \n> > And we should error out on \"with cert+no-verify\" just like \"with\n> > cert+XXXXXX\", right?\n> \n> Currently only cert+no-verify is rejected. The patch makes \"cert+verify-ca\" be rejected.\n> \n> > I don't see \"no-verify\" mentioned anywhere in our docs.\n> \n> no-verify itself is mentioned here.\n> \n> https://www.postgresql.org/docs/13/ssl-tcp.html#SSL-CLIENT-CERTIFICATES\n\nOh, I see it now, thanks. 
Do you have any idea what this part of the\ndocs means?\n\n\tWhen <literal>clientcert</literal> is not specified or is set to\n\t<literal>no-verify</literal>, the server will still verify any presented\n\tclient certificates against its CA file, if one is configured &mdash;\n\tbut it will not insist that a client certificate be presented.\n\nWhy is this useful?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 24 Aug 2020 23:04:51 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "At Mon, 24 Aug 2020 23:04:51 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > > I don't see \"no-verify\" mentioned anywhere in our docs.\n> > \n> > no-verify itself is mentioned here.\n> > \n> > https://www.postgresql.org/docs/13/ssl-tcp.html#SSL-CLIENT-CERTIFICATES\n> \n> Oh, I see it now, thanks. Do you have any idea what this part of the\n> docs means?\n> \n> \tWhen <literal>clientcert</literal> is not specified or is set to\n> \t<literal>no-verify</literal>, the server will still verify any presented\n> \tclient certificates against its CA file, if one is configured &mdash;\n> \tbut it will not insist that a client certificate be presented.\n\nAh.. Indeed.\n\nEven if clientcert is not set or set to no-verify, it checks client\ncertificate against the CA if any. If verify-ca, client certificate\nmust be provided. As the result, no-verify actually fails if client\nhad a certificate that is not backed by the CA.\n\n> Why is this useful?\n\nI agree, but there seems to be an implementation reason for the\nbehavior. To identify an hba-line, some connection parameters like\nuser name and others sent over a connection is required. Thus the\nclientcert option in the to-be-identified hba-line is unknown prior to\nthe time SSL connection is made. So the documentation might need\namendment. Roughly something like the following?\n\n===\nWhen <literal>clientcert</literal> is not specified or is set\nto<literal>no-verify</literal>, clients can connect to server without\nhaving a client certificate.\n\nNote: Regardless of the setting of <literal>clientcert</literal>,\nconnection can end with failure if a client certificate that cannot be\nverified by the server is stored in the ~/.postgresql directory.\n===\n\nBy the way, the following table line might need to be changed?\n\nlibpq-ssl.html:\n\n> <entry><filename>~/.postgresql/postgresql.crt</filename></entry>\n> <entry>client certificate</entry>\n- <entry>requested by server</entry>\n\nThe file is actually not requested by server, client just pushes to\nserver if any, unconditionally.\n\n+ <entry>sent to server</entry>\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 25 Aug 2020 15:53:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Tue, Aug 25, 2020 at 03:53:20PM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 24 Aug 2020 23:04:51 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > > > I don't see \"no-verify\" mentioned anywhere in our docs.\n> > > \n> > > no-verify itself is mentioned here.\n> > > \n> > > https://www.postgresql.org/docs/13/ssl-tcp.html#SSL-CLIENT-CERTIFICATES\n> > \n> > Oh, I see it now, thanks. 
Do you have any idea what this part of the\n> > docs means?\n> > \n> > \tWhen <literal>clientcert</literal> is not specified or is set to\n> > \t<literal>no-verify</literal>, the server will still verify any presented\n> > \tclient certificates against its CA file, if one is configured &mdash;\n> > \tbut it will not insist that a client certificate be presented.\n> \n> Ah.. Indeed.\n> \n> Even if clientcert is not set or set to no-verify, it checks client\n> certificate against the CA if any. If verify-ca, client certificate\n> must be provided. As the result, no-verify actually fails if client\n> had a certificate that is not backed by the CA.\n\nI think there are a few problems here. In the docs, it says \"will still\nverify\", but it doesn't say if it verifies the CA, or the CA _and_ the\nCN/username.\n\nSecond, since it is optional, what value does it have?\n\n> > Why is this useful?\n> \n> I agree, but there seems to be an implementation reason for the\n> behavior. To identify an hba-line, some connection parameters like\n> user name and others sent over a connection is required. Thus the\n> clientcert option in the to-be-identified hba-line is unknown prior to\n> the time SSL connection is made. So the documentation might need\n> amendment. Roughly something like the following?\n\nWell, I realize internally we need a way to indicate clientcert is not\nused, but why do we bother exposing that to the user as a named option?\n\nAnd you are right that the option name 'no-verify' is wrong since it\nwill verify the CA if it exists, so it more like 'optionally-verify',\nwhich seems useless from a user interface perspective.\n\nI guess the behavior of no-verify matches our client-side\nsslmode=prefer, but at least that has the value of using SSL if\navailable, which prevents user-visible network traffic, but doesn't\nforce it, but I am not sure what the value of optional certificate\nverification is, since verification is all it does. I guess it should\nbe called \"prefer-verify\".\n\n> ===\n> When <literal>clientcert</literal> is not specified or is set\n> to<literal>no-verify</literal>, clients can connect to server without\n> having a client certificate.\n> \n> Note: Regardless of the setting of <literal>clientcert</literal>,\n> connection can end with failure if a client certificate that cannot be\n> verified by the server is stored in the ~/.postgresql directory.\n> ===\n> \n> By the way, the following table line might need to be changed?\n> \n> libpq-ssl.html:\n> \n> > <entry><filename>~/.postgresql/postgresql.crt</filename></entry>\n> > <entry>client certificate</entry>\n> - <entry>requested by server</entry>\n> \n> The file is actually not requested by server, client just pushes to\n> server if any, unconditionally.\n> \n> + <entry>sent to server</entry>\n\nI have just applied this change to all branches, since it is an\nindependent fix. Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 25 Aug 2020 10:04:44 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" 
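For readers following along, the difference being debated can be written out as pg_hba.conf entries. These lines are illustrative only (the address range and authentication method are placeholders); the behavior notes restate what the thread and the v13 documentation say rather than adding anything new.

# Client certificate is optional; if the client happens to present one,
# it is still checked against the server's CA file.
hostssl all all 192.0.2.0/24 scram-sha-256 clientcert=no-verify
# Same behavior as above: omitting the option is the default.
hostssl all all 192.0.2.0/24 scram-sha-256
# Certificate required and must chain to the server's CA.
hostssl all all 192.0.2.0/24 scram-sha-256 clientcert=verify-ca
# As verify-ca, plus the certificate cn must match the database user name.
hostssl all all 192.0.2.0/24 scram-sha-256 clientcert=verify-full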
}, { "msg_contents": "At Tue, 25 Aug 2020 10:04:44 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Tue, Aug 25, 2020 at 03:53:20PM +0900, Kyotaro Horiguchi wrote:\n> > At Mon, 24 Aug 2020 23:04:51 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > > > > I don't see \"no-verify\" mentioned anywhere in our docs.\n> > > > \n> > > > no-verify itself is mentioned here.\n> > > > \n> > > > https://www.postgresql.org/docs/13/ssl-tcp.html#SSL-CLIENT-CERTIFICATES\n> > > \n> > > Oh, I see it now, thanks. Do you have any idea what this part of the\n> > > docs means?\n> > > \n> > > \tWhen <literal>clientcert</literal> is not specified or is set to\n> > > \t<literal>no-verify</literal>, the server will still verify any presented\n> > > \tclient certificates against its CA file, if one is configured &mdash;\n> > > \tbut it will not insist that a client certificate be presented.\n> > \n> > Ah.. Indeed.\n> > \n> > Even if clientcert is not set or set to no-verify, it checks client\n> > certificate against the CA if any. If verify-ca, client certificate\n> > must be provided. As the result, no-verify actually fails if client\n> > had a certificate that is not backed by the CA.\n> \n> I think there are a few problems here. In the docs, it says \"will still\n> verify\", but it doesn't say if it verifies the CA, or the CA _and_ the\n> CN/username.\n\nIt verifies only CA.\n\n> Second, since it is optional, what value does it have?\n> \n> > > Why is this useful?\n> > \n> > I agree, but there seems to be an implementation reason for the\n> > behavior. To identify an hba-line, some connection parameters like\n> > user name and others sent over a connection is required. Thus the\n> > clientcert option in the to-be-identified hba-line is unknown prior to\n> > the time SSL connection is made. So the documentation might need\n> > amendment. Roughly something like the following?\n> \n> Well, I realize internally we need a way to indicate clientcert is not\n> used, but why do we bother exposing that to the user as a named option?\n\nBecause we think we need any named value for every alternatives\nincluding the default value?\n\n> And you are right that the option name 'no-verify' is wrong since it\n> will verify the CA if it exists, so it more like 'optionally-verify',\n> which seems useless from a user interface perspective.\n> \n> I guess the behavior of no-verify matches our client-side\n> sslmode=prefer, but at least that has the value of using SSL if\n> available, which prevents user-visible network traffic, but doesn't\n> force it, but I am not sure what the value of optional certificate\n> verification is, since verification is all it does. I guess it should\n> be called \"prefer-verify\".\n\nThe point of no-verify is to allow the absence of client\ncertificate. It is similar to \"prefer\" in a sense that it allows the\nabsence of availability of an SSL connection. (In a similar way to\n\"prefer\", we could \"fall back\" to \"no client cert\" SSL connection\nafter verification failure but I think it's not worth doing.)\n\n\"prefer-verify\" seems right in that sense. 
But I'm not sure we may\nbreak backward compatibility for the reason.\n\n> > ===\n> > When <literal>clientcert</literal> is not specified or is set\n> > to<literal>no-verify</literal>, clients can connect to server without\n> > having a client certificate.\n> > \n> > Note: Regardless of the setting of <literal>clientcert</literal>,\n> > connection can end with failure if a client certificate that cannot be\n> > verified by the server is stored in the ~/.postgresql directory.\n> > ===\n> > \n> > By the way, the following table line might need to be changed?\n> > \n> > libpq-ssl.html:\n> > \n> > > <entry><filename>~/.postgresql/postgresql.crt</filename></entry>\n> > > <entry>client certificate</entry>\n> > - <entry>requested by server</entry>\n> > \n> > The file is actually not requested by server, client just pushes to\n> > server if any, unconditionally.\n> > \n> > + <entry>sent to server</entry>\n> \n> I have just applied this change to all branches, since it is an\n> independent fix. Thanks.\n\nThanks.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 26 Aug 2020 11:41:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Wed, Aug 26, 2020 at 11:41:39AM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 25 Aug 2020 10:04:44 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > I think there are a few problems here. In the docs, it says \"will still\n> > verify\", but it doesn't say if it verifies the CA, or the CA _and_ the\n> > CN/username.\n> \n> It verifies only CA.\n\nOK, that will need to be clarified.\n\n> > Second, since it is optional, what value does it have?\n> > \n> > > > Why is this useful?\n> > > \n> > > I agree, but there seems to be an implementation reason for the\n> > > behavior. To identify an hba-line, some connection parameters like\n> > > user name and others sent over a connection is required. Thus the\n> > > clientcert option in the to-be-identified hba-line is unknown prior to\n> > > the time SSL connection is made. So the documentation might need\n> > > amendment. Roughly something like the following?\n> > \n> > Well, I realize internally we need a way to indicate clientcert is not\n> > used, but why do we bother exposing that to the user as a named option?\n> \n> Because we think we need any named value for every alternatives\n> including the default value?\n\nWell, not putting clientcert at all gives the default behavior, so why\nhave clientcert=no-verify?\n\n> > And you are right that the option name 'no-verify' is wrong since it\n> > will verify the CA if it exists, so it more like 'optionally-verify',\n> > which seems useless from a user interface perspective.\n> > \n> > I guess the behavior of no-verify matches our client-side\n> > sslmode=prefer, but at least that has the value of using SSL if\n> > available, which prevents user-visible network traffic, but doesn't\n> > force it, but I am not sure what the value of optional certificate\n> > verification is, since verification is all it does. I guess it should\n> > be called \"prefer-verify\".\n> \n> The point of no-verify is to allow the absence of client\n> certificate. It is similar to \"prefer\" in a sense that it allows the\n> absence of availability of an SSL connection. 
(In a similar way to\n> \"prefer\", we could \"fall back\" to \"no client cert\" SSL connection\n> after verification failure but I think it's not worth doing.)\n\nWell, sslmode=prefer gives encryption without identification. \nclientcert=no-verify has no value because it is just an optional CA\ncheck that has no value because optional authentication is useless. It\nis like saying you can type in the password if you want, and we will\ncheck it, or you can just not type in the password.\n\n> \"prefer-verify\" seems right in that sense. But I'm not sure we may\n> break backward compatibility for the reason.\n\nTrue, but right now it is inaccurate so I think it just need to be fixed\nor removed and documented in the PG 14 release notes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 25 Aug 2020 22:52:44 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "At Tue, 25 Aug 2020 22:52:44 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Wed, Aug 26, 2020 at 11:41:39AM +0900, Kyotaro Horiguchi wrote:\n> > At Tue, 25 Aug 2020 10:04:44 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > > I think there are a few problems here. In the docs, it says \"will still\n> > > verify\", but it doesn't say if it verifies the CA, or the CA _and_ the\n> > > CN/username.\n> > \n> > It verifies only CA.\n> \n> OK, that will need to be clarified.\n> \n> > > Second, since it is optional, what value does it have?\n> > > \n> > > > > Why is this useful?\n> > > > \n> > > > I agree, but there seems to be an implementation reason for the\n> > > > behavior. To identify an hba-line, some connection parameters like\n> > > > user name and others sent over a connection is required. Thus the\n> > > > clientcert option in the to-be-identified hba-line is unknown prior to\n> > > > the time SSL connection is made. So the documentation might need\n> > > > amendment. Roughly something like the following?\n> > > \n> > > Well, I realize internally we need a way to indicate clientcert is not\n> > > used, but why do we bother exposing that to the user as a named option?\n> > \n> > Because we think we need any named value for every alternatives\n> > including the default value?\n> \n> Well, not putting clientcert at all gives the default behavior, so why\n> have clientcert=no-verify?\n\nclientcert=verify-ca or verify-full don't allow absence of client\ncertificate. We need an option to allow the absence.\n\n> > > And you are right that the option name 'no-verify' is wrong since it\n> > > will verify the CA if it exists, so it more like 'optionally-verify',\n> > > which seems useless from a user interface perspective.\n> > > \n> > > I guess the behavior of no-verify matches our client-side\n> > > sslmode=prefer, but at least that has the value of using SSL if\n> > > available, which prevents user-visible network traffic, but doesn't\n> > > force it, but I am not sure what the value of optional certificate\n> > > verification is, since verification is all it does. I guess it should\n> > > be called \"prefer-verify\".\n> > \n> > The point of no-verify is to allow the absence of client\n> > certificate. It is similar to \"prefer\" in a sense that it allows the\n> > absence of availability of an SSL connection. 
(In a similar way to\n> > \"prefer\", we could \"fall back\" to \"no client cert\" SSL connection\n> > after verification failure but I think it's not worth doing.)\n> \n> Well, sslmode=prefer gives encryption without identification. \n> clientcert=no-verify has no value because it is just an optional CA\n> check that has no value because optional authentication is useless. It\n\nThe point of the option is not to do optional CA check if possible,\nbut to allow absence of client cert. We need to have that mode\nregardless of named or not named, and I believe we usually provide a\nname for default mode.\n\n> is like saying you can type in the password if you want, and we will\n> check it, or you can just not type in the password.\n\nYes, since the point is the fact that I'm allowed to skip typing a\npassword. And the reason for the strange-looking behavior is that I\ncan't help entering a password if I had, but the server has no way\nother than checking the password that I provided.\n\nIn the correct words, the server cannot ignore the certificate if\nclient sent it. But the client cannot identify whether the certificate\nis needed by the server before sending it.\n\n> > \"prefer-verify\" seems right in that sense. But I'm not sure we may\n> > break backward compatibility for the reason.\n> \n> True, but right now it is inaccurate so I think it just need to be fixed\n> or removed and documented in the PG 14 release notes.\n\nI'm fine with that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 26 Aug 2020 18:13:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Wed, Aug 26, 2020 at 06:13:23PM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 25 Aug 2020 22:52:44 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > > Because we think we need any named value for every alternatives\n> > > including the default value?\n> > \n> > Well, not putting clientcert at all gives the default behavior, so why\n> > have clientcert=no-verify?\n> \n> clientcert=verify-ca or verify-full don't allow absence of client\n> certificate. We need an option to allow the absence.\n\nIsn't the option not specifying clientcert? Here are some valid\npg_hba.conf lines:\n\n\thostssl all all 127.0.0.1/32 trust clientcert=verify-full\n\thostssl all all 127.0.0.1/32 trust clientcert=verify-ca\n\thostssl all all 127.0.0.1/32 trust clientcert=no-verify\n\thostssl all all 127.0.0.1/32 trust\n\nIt is my understanding that the last two lines are the same. Why isn't\nit sufficient to just tell users not to specify clientcert if they want\nthe default behavior? You can do:\n\n\thost all all 192.168.0.0/16 ident map=omicron\n\nbut there is no way to specify the default map value of 'no map', so why\nhave one for clientcert?\n\n> > Well, sslmode=prefer gives encryption without identification. \n> > clientcert=no-verify has no value because it is just an optional CA\n> > check that has no value because optional authentication is useless. It\n> \n> The point of the option is not to do optional CA check if possible,\n> but to allow absence of client cert. We need to have that mode\n> regardless of named or not named, and I believe we usually provide a\n> name for default mode.\n\nUh, see above --- not really. 
The absence of the option is the default\naction.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 26 Aug 2020 18:36:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" 
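The client half of the exchange discussed above can be sketched as follows; the host, user, and file names are placeholders and nothing here is specific to the patch under discussion. As noted earlier in the thread, libpq sends the certificate named by sslcert (or ~/.postgresql/postgresql.crt) whenever it has one, so it is the server-side clientcert setting that decides whether that certificate is merely checked or actually required.

psql "host=db.example.com dbname=postgres user=alice sslmode=verify-full sslrootcert=root.crt sslcert=client.crt sslkey=client.key"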
}, { "msg_contents": "On Thu, Aug 27, 2020 at 04:09:25PM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 26 Aug 2020 18:36:50 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> bruce> On Wed, Aug 26, 2020 at 06:13:23PM +0900, Kyotaro Horiguchi wrote:\n> > > At Tue, 25 Aug 2020 22:52:44 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > > > > Because we think we need any named value for every alternatives\n> > > > > including the default value?\n> > > > \n> > > > Well, not putting clientcert at all gives the default behavior, so why\n> > > > have clientcert=no-verify?\n> > > \n> > > clientcert=verify-ca or verify-full don't allow absence of client\n> > > certificate. We need an option to allow the absence.\n> > \n> > Isn't the option not specifying clientcert? Here are some valid\n> > pg_hba.conf lines:\n> \n> Sorry for the ambiguity. Perhaps I understand that we talked at\n> different objects. I was mentioning about the option value that is\n> stored *internally*, concretely the values for the struct member\n> port->hba->clientcert. You are talking about the descriptive option in\n> pg_hba.conf.\n> \n> Does the following discussion make sense?\n> \n> We need to use the default value zero (=clientCertOff) for\n> port->hba->clientcert to tell server to omit checking against CA if\n> cert is not given. I suppose that the value clientCertOff is labeled\n> as \"no-verify\" since someone who developed this thought that that\n> choice needs to be explicitly describable in pg_hba.conf. And my\n> discussion was following that decision.\n> \n> I understand that the label \"no-verify\" is not essential to specify\n> the behavior, so I don't object to removing \"no-verify\" label itself\n> if no one oppose to remove it.\n> \n> My point here is just \"are we OK to remove it?\"\n\nYes, in PG 14. Security is confusing enough, so having a mis-named\noption that doesn't do anything more than just not specifying clientcert\nis not useful and should be removed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 27 Aug 2020 15:41:40 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Thu, Aug 27, 2020 at 03:41:40PM -0400, Bruce Momjian wrote:\n> On Thu, Aug 27, 2020 at 04:09:25PM +0900, Kyotaro Horiguchi wrote:\n> > At Wed, 26 Aug 2020 18:36:50 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > bruce> On Wed, Aug 26, 2020 at 06:13:23PM +0900, Kyotaro Horiguchi wrote:\n> > > > At Tue, 25 Aug 2020 22:52:44 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > > > > > Because we think we need any named value for every alternatives\n> > > > > > including the default value?\n> > > > > \n> > > > > Well, not putting clientcert at all gives the default behavior, so why\n> > > > > have clientcert=no-verify?\n> > > > \n> > > > clientcert=verify-ca or verify-full don't allow absence of client\n> > > > certificate. We need an option to allow the absence.\n> > > \n> > > Isn't the option not specifying clientcert? Here are some valid\n> > > pg_hba.conf lines:\n> > \n> > Sorry for the ambiguity. Perhaps I understand that we talked at\n> > different objects. I was mentioning about the option value that is\n> > stored *internally*, concretely the values for the struct member\n> > port->hba->clientcert. 
You are talking about the descriptive option in\n> > pg_hba.conf.\n\nYes, I realize we need an internal vaue for this, but it doesn't need to\nbe visible to the user.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 27 Aug 2020 15:59:43 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "Hello, Bruce.\n\nAt Thu, 27 Aug 2020 15:41:40 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > My point here is just \"are we OK to remove it?\"\n> \n> Yes, in PG 14. Security is confusing enough, so having a mis-named\n> option that doesn't do anything more than just not specifying clientcert\n> is not useful and should be removed.\n\nOk, this is that. If we spcify clientcert=no-verify other than for\n\"cert\" authentication, server complains as the following at startup.\n\n> LOG: no-verify or 0 is the default setting that is discouraged to use explicitly for clientcert option\n> HINT: Consider removing the option instead. This option value is going to be deprecated in later version.\n> CONTEXT: line 90 of configuration file \"/home/horiguti/data/data_noverify/pg_hba.conf\"\n\nAnd, cert clientcert=verifry-ca (and no-verify) is correctly rejected.\n\n> LOG: clientcert accepts only \"verify-full\" when using \"cert\" authentication\n\nI once I thought that the deprecation message should e WARNING but\nlater I changed my mind to change it to LOG unifying to surrounding\nsetting error messages.\n\nI'm going to register this to the coming CF.\n\nregrds.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 31 Aug 2020 17:56:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Mon, Aug 31, 2020 at 05:56:58PM +0900, Kyotaro Horiguchi wrote:\n> Hello, Bruce.\n> \n> At Thu, 27 Aug 2020 15:41:40 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > > My point here is just \"are we OK to remove it?\"\n> > \n> > Yes, in PG 14. Security is confusing enough, so having a mis-named\n> > option that doesn't do anything more than just not specifying clientcert\n> > is not useful and should be removed.\n> \n> Ok, this is that. If we spcify clientcert=no-verify other than for\n> \"cert\" authentication, server complains as the following at startup.\n\nWhy does clientcert=no-verify have any value, even for a\ncert-authentication line?\n\n> > LOG: no-verify or 0 is the default setting that is discouraged to use explicitly for clientcert option\n> > HINT: Consider removing the option instead. This option value is going to be deprecated in later version.\n> > CONTEXT: line 90 of configuration file \"/home/horiguti/data/data_noverify/pg_hba.conf\"\n\nI think it should just be removed in PG 14. 
This is a configuration\nsetting, not an SQL-level item that needs a deprecation period.\n\n> And, cert clientcert=verifry-ca (and no-verify) is correctly rejected.\n> \n> > LOG: clientcert accepts only \"verify-full\" when using \"cert\" authentication\n> \n> I once I thought that the deprecation message should e WARNING but\n> later I changed my mind to change it to LOG unifying to surrounding\n> setting error messages.\n> \n> I'm going to register this to the coming CF.\n\nI plan to apply this once we are done discussing it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 31 Aug 2020 11:34:29 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "At Mon, 31 Aug 2020 11:34:29 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Mon, Aug 31, 2020 at 05:56:58PM +0900, Kyotaro Horiguchi wrote:\n> > Ok, this is that. If we spcify clientcert=no-verify other than for\n> > \"cert\" authentication, server complains as the following at startup.\n> \n> Why does clientcert=no-verify have any value, even for a\n> cert-authentication line?\n> \n> > > LOG: no-verify or 0 is the default setting that is discouraged to use explicitly for clientcert option\n> > > HINT: Consider removing the option instead. This option value is going to be deprecated in later version.\n> > > CONTEXT: line 90 of configuration file \"/home/horiguti/data/data_noverify/pg_hba.conf\"\n> \n> I think it should just be removed in PG 14. This is a configuration\n> setting, not an SQL-level item that needs a deprecation period.\n\nOk, it is changed to just error out. I tempted to show a suggestion to\nremoving the option in that case like the following, but *didn't* in\nthis version of the patch.\n\n > LOG: invalid value for clientcert: \"no-verify\"\n?? HINT: Instead, consider removing the clinetcert option.\n > CONTEXT: line 90 of configuration file \"/h\n\n\n> > I'm going to register this to the coming CF.\n> \n> I plan to apply this once we are done discussing it.\n\nRoger.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 01 Sep 2020 13:59:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Tue, Sep 1, 2020 at 01:59:25PM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 31 Aug 2020 11:34:29 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > On Mon, Aug 31, 2020 at 05:56:58PM +0900, Kyotaro Horiguchi wrote:\n> > > Ok, this is that. If we spcify clientcert=no-verify other than for\n> > > \"cert\" authentication, server complains as the following at startup.\n> > \n> > Why does clientcert=no-verify have any value, even for a\n> > cert-authentication line?\n> > \n> > > > LOG: no-verify or 0 is the default setting that is discouraged to use explicitly for clientcert option\n> > > > HINT: Consider removing the option instead. This option value is going to be deprecated in later version.\n> > > > CONTEXT: line 90 of configuration file \"/home/horiguti/data/data_noverify/pg_hba.conf\"\n> > \n> > I think it should just be removed in PG 14. This is a configuration\n> > setting, not an SQL-level item that needs a deprecation period.\n> \n> Ok, it is changed to just error out. 
I tempted to show a suggestion to\n> removing the option in that case like the following, but *didn't* in\n> this version of the patch.\n\nOK, I have developed the attached patch based on yours. I reordered the\ntests, simplified the documentation, and removed the hint since they\nwill already get a good error message, and we will document this change\nin the release notes. It is also good you removed the 0/1 values for\nthis, since that was also confusing. We will put that removal in the\nrelease notes too.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Tue, 1 Sep 2020 11:47:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "Hello.\n\nAt Tue, 1 Sep 2020 11:47:34 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Tue, Sep 1, 2020 at 01:59:25PM +0900, Kyotaro Horiguchi wrote:\n> > At Mon, 31 Aug 2020 11:34:29 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > > On Mon, Aug 31, 2020 at 05:56:58PM +0900, Kyotaro Horiguchi wrote:\n> > > > Ok, this is that. If we spcify clientcert=no-verify other than for\n> > > > \"cert\" authentication, server complains as the following at startup.\n> > > \n> > > Why does clientcert=no-verify have any value, even for a\n> > > cert-authentication line?\n> > > \n> > > > > LOG: no-verify or 0 is the default setting that is discouraged to use explicitly for clientcert option\n> > > > > HINT: Consider removing the option instead. This option value is going to be deprecated in later version.\n> > > > > CONTEXT: line 90 of configuration file \"/home/horiguti/data/data_noverify/pg_hba.conf\"\n> > > \n> > > I think it should just be removed in PG 14. This is a configuration\n> > > setting, not an SQL-level item that needs a deprecation period.\n> > \n> > Ok, it is changed to just error out. I tempted to show a suggestion to\n> > removing the option in that case like the following, but *didn't* in\n> > this version of the patch.\n> \n> OK, I have developed the attached patch based on yours. I reordered the\n> tests, simplified the documentation, and removed the hint since they\n\nLooks good to me.\n\n> will already get a good error message, and we will document this change\n\nOops! I thought I had removed that in the patch. Sorry for the mistake\nand that also looks good to me.\n\n> in the release notes. It is also good you removed the 0/1 values for\n> this, since that was also confusing. We will put that removal in the\n> release notes too.\n\nThank you for your assistance, Bruce!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 02 Sep 2020 10:45:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Wed, Sep 2, 2020 at 10:45:30AM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 1 Sep 2020 11:47:34 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > OK, I have developed the attached patch based on yours. I reordered the\n> > tests, simplified the documentation, and removed the hint since they\n> \n> Looks good to me.\n> \n> > will already get a good error message, and we will document this change\n> \n> Oops! I thought I had removed that in the patch. 
Sorry for the mistake\n> and that also looks good to me.\n> \n> > in the release notes. It is also good you removed the 0/1 values for\n> > this, since that was also confusing. We will put that removal in the\n> > release notes too.\n> \n> Thank you for your assistance, Bruce!\n\nOK, good. Let's wait a few days and I will then apply it for PG 14.\nThanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 1 Sep 2020 22:27:03 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Tue, Sep 01, 2020 at 10:27:03PM -0400, Bruce Momjian wrote:\n> OK, good. Let's wait a few days and I will then apply it for PG 14.\n\nIt has been a few days, and nothing has happened here. I have not\nlooked at the patch in details, so I cannot say if that's fine or not,\nbut please note that the patch fails to apply per the CF bot.\n--\nMichael", "msg_date": "Thu, 24 Sep 2020 12:44:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Thu, Sep 24, 2020 at 12:44:01PM +0900, Michael Paquier wrote:\n> On Tue, Sep 01, 2020 at 10:27:03PM -0400, Bruce Momjian wrote:\n> > OK, good. Let's wait a few days and I will then apply it for PG 14.\n> \n> It has been a few days, and nothing has happened here. I have not\n> looked at the patch in details, so I cannot say if that's fine or not,\n> but please note that the patch fails to apply per the CF bot.\n\nI will handle it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 24 Sep 2020 11:43:40 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "At Thu, 24 Sep 2020 11:43:40 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Thu, Sep 24, 2020 at 12:44:01PM +0900, Michael Paquier wrote:\n> > On Tue, Sep 01, 2020 at 10:27:03PM -0400, Bruce Momjian wrote:\n> > > OK, good. Let's wait a few days and I will then apply it for PG 14.\n> > \n> > It has been a few days, and nothing has happened here. I have not\n> > looked at the patch in details, so I cannot say if that's fine or not,\n> > but please note that the patch fails to apply per the CF bot.\n> \n> I will handle it.\n\nThank you Bruce, Michael. This is a rebased version.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 25 Sep 2020 09:33:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Thu, Sep 24, 2020 at 11:43:40AM -0400, Bruce Momjian wrote:\n> I will handle it.\n\nThanks. I have switched the patch as waiting on author due to the\ncomplaint of the CF bot for now, but if you feel that this does not\nrequire an extra round of review after the new rebase, of course\nplease feel free.\n--\nMichael", "msg_date": "Fri, 25 Sep 2020 10:05:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" 
}, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> Thank you Bruce, Michael. This is a rebased version.\n\nI really strongly object to all the encoded data in this patch.\nOne cannot read it, one cannot even easily figure out how long\nit is until the tests break by virtue of the certificates expiring.\n\nOne can, however, be entirely certain that they *will* break at\nsome point. I don't like the idea of time bombs in our test suite.\nThat being the case, it'd likely be better to drop all the pre-made\ncertificates and have the test scripts create them on the fly.\nThat'd remove both the documentation problem (i.e., having readable\ninfo as to how the certificates were made) and the expiration problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Sep 2020 21:59:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Thu, Sep 24, 2020 at 09:59:50PM -0400, Tom Lane wrote:\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > Thank you Bruce, Michael. This is a rebased version.\n> \n> I really strongly object to all the encoded data in this patch.\n> One cannot read it, one cannot even easily figure out how long\n> it is until the tests break by virtue of the certificates expiring.\n> \n> One can, however, be entirely certain that they *will* break at\n> some point. I don't like the idea of time bombs in our test suite.\n> That being the case, it'd likely be better to drop all the pre-made\n> certificates and have the test scripts create them on the fly.\n> That'd remove both the documentation problem (i.e., having readable\n> info as to how the certificates were made) and the expiration problem.\n\nI am not planning to apply the test parts of this patch. I think\nhaving the committer test it is sufficient.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 25 Sep 2020 13:30:06 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "Hello.\n\nAt Fri, 25 Sep 2020 13:30:06 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Thu, Sep 24, 2020 at 09:59:50PM -0400, Tom Lane wrote:\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > > Thank you Bruce, Michael. This is a rebased version.\n> > \n> > I really strongly object to all the encoded data in this patch.\n> > One cannot read it, one cannot even easily figure out how long\n> > it is until the tests break by virtue of the certificates expiring.\n\nI thought the same but the current source tree contains generated\ncertificates, perhaps for developer's convenience. This patch follows\nthe policy (if it is correct..). If certificates expiring matters,\ndon't we need to remove the certificates in the current tree?\n\n(Anyway we experenced replacement of existing certificates due to\nobsoletion of a cipher algorithm and will face the same when the\ncurrent cipher algorithm gets obsolete.)\n\n> > One can, however, be entirely certain that they *will* break at\n> > some point. 
I don't like the idea of time bombs in our test suite.\n> > That being the case, it'd likely be better to drop all the pre-made\n> > certificates and have the test scripts create them on the fly.\n> > That'd remove both the documentation problem (i.e., having readable\n> > info as to how the certificates were made) and the expiration problem.\n> \n> I am not planning to apply the test parts of this patch. I think\n> having the committer test it is sufficient.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 28 Sep 2020 09:21:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Fri, Sep 25, 2020 at 09:33:48AM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 24 Sep 2020 11:43:40 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > On Thu, Sep 24, 2020 at 12:44:01PM +0900, Michael Paquier wrote:\n> > > On Tue, Sep 01, 2020 at 10:27:03PM -0400, Bruce Momjian wrote:\n> > > > OK, good. Let's wait a few days and I will then apply it for PG 14.\n> > > \n> > > It has been a few days, and nothing has happened here. I have not\n> > > looked at the patch in details, so I cannot say if that's fine or not,\n> > > but please note that the patch fails to apply per the CF bot.\n> > \n> > I will handle it.\n> \n> Thank you Bruce, Michael. This is a rebased version.\n> \n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n> >From 2978479ada887284eae0ed36c8acf29f1a002feb Mon Sep 17 00:00:00 2001\n> From: Kyotaro Horiguchi <horikyoga.ntt@gmail.com>\n> Date: Tue, 21 Jul 2020 23:01:27 +0900\n> Subject: [PATCH v2] Allow directory name for GUC ssl_crl_file and connection\n> option sslcrl\n> \n> X509_STORE_load_locations accepts a directory, which leads to\n> on-demand loading method with which method only relevant CRLs are\n> loaded.\n\nUh, I think this CRL patch is the wrong patch. This thread is about the\nclientcert=verify-ca in pg_hba.conf. I will use the patch I developed\nand posted on Tue, 1 Sep 2020 11:47:34 -0400 in this thread.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 2 Oct 2020 22:55:45 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "At Fri, 2 Oct 2020 22:55:45 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Fri, Sep 25, 2020 at 09:33:48AM +0900, Kyotaro Horiguchi wrote:\n> > At Thu, 24 Sep 2020 11:43:40 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > > On Thu, Sep 24, 2020 at 12:44:01PM +0900, Michael Paquier wrote:\n> > > > On Tue, Sep 01, 2020 at 10:27:03PM -0400, Bruce Momjian wrote:\n> > > > > OK, good. Let's wait a few days and I will then apply it for PG 14.\n> > > > \n> > > > It has been a few days, and nothing has happened here. I have not\n> > > > looked at the patch in details, so I cannot say if that's fine or not,\n> > > > but please note that the patch fails to apply per the CF bot.\n> > > \n> > > I will handle it.\n> > \n> > Thank you Bruce, Michael. 
This is a rebased version.\n> > \n> > regards.\n> > \n> > -- \n> > Kyotaro Horiguchi\n> > NTT Open Source Software Center\n> \n> > >From 2978479ada887284eae0ed36c8acf29f1a002feb Mon Sep 17 00:00:00 2001\n> > From: Kyotaro Horiguchi <horikyoga.ntt@gmail.com>\n> > Date: Tue, 21 Jul 2020 23:01:27 +0900\n> > Subject: [PATCH v2] Allow directory name for GUC ssl_crl_file and connection\n> > option sslcrl\n> > \n> > X509_STORE_load_locations accepts a directory, which leads to\n> > on-demand loading method with which method only relevant CRLs are\n> > loaded.\n> \n> Uh, I think this CRL patch is the wrong patch. This thread is about the\n> clientcert=verify-ca in pg_hba.conf. I will use the patch I developed\n> and posted on Tue, 1 Sep 2020 11:47:34 -0400 in this thread.\n\nMmmm. Sorry for the silly mistake. I'm confused with another one.\n\nFWIW, the cause is a rewording of \"cannot\" to \"can not\". This is the\nright one.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 05 Oct 2020 10:25:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Mon, Oct 5, 2020 at 10:25:08AM +0900, Kyotaro Horiguchi wrote:\n> > > >From 2978479ada887284eae0ed36c8acf29f1a002feb Mon Sep 17 00:00:00 2001\n> > > From: Kyotaro Horiguchi <horikyoga.ntt@gmail.com>\n> > > Date: Tue, 21 Jul 2020 23:01:27 +0900\n> > > Subject: [PATCH v2] Allow directory name for GUC ssl_crl_file and connection\n> > > option sslcrl\n> > > \n> > > X509_STORE_load_locations accepts a directory, which leads to\n> > > on-demand loading method with which method only relevant CRLs are\n> > > loaded.\n> > \n> > Uh, I think this CRL patch is the wrong patch. This thread is about the\n> > clientcert=verify-ca in pg_hba.conf. I will use the patch I developed\n> > and posted on Tue, 1 Sep 2020 11:47:34 -0400 in this thread.\n> \n> Mmmm. Sorry for the silly mistake. I'm confused with another one.\n> \n> FWIW, the cause is a rewording of \"cannot\" to \"can not\". This is the\n> right one.\n\nYes, that is the version I was going to apply. I will do it today. \nThanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 5 Oct 2020 14:02:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "On Mon, Oct 5, 2020 at 02:02:34PM -0400, Bruce Momjian wrote:\n> On Mon, Oct 5, 2020 at 10:25:08AM +0900, Kyotaro Horiguchi wrote:\n> > > > >From 2978479ada887284eae0ed36c8acf29f1a002feb Mon Sep 17 00:00:00 2001\n> > > > From: Kyotaro Horiguchi <horikyoga.ntt@gmail.com>\n> > > > Date: Tue, 21 Jul 2020 23:01:27 +0900\n> > > > Subject: [PATCH v2] Allow directory name for GUC ssl_crl_file and connection\n> > > > option sslcrl\n> > > > \n> > > > X509_STORE_load_locations accepts a directory, which leads to\n> > > > on-demand loading method with which method only relevant CRLs are\n> > > > loaded.\n> > > \n> > > Uh, I think this CRL patch is the wrong patch. This thread is about the\n> > > clientcert=verify-ca in pg_hba.conf. I will use the patch I developed\n> > > and posted on Tue, 1 Sep 2020 11:47:34 -0400 in this thread.\n> > \n> > Mmmm. Sorry for the silly mistake. 
I'm confused with another one.\n> > \n> > FWIW, the cause is a rewording of \"cannot\" to \"can not\". This is the\n> > right one.\n> \n> Yes, that is the version I was going to apply. I will do it today. \n> Thanks.\n\nPatch applied to master, and the first paragraph diff was applied to PG\n12-13 too.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 5 Oct 2020 16:07:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" }, { "msg_contents": "At Mon, 5 Oct 2020 16:07:50 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > Yes, that is the version I was going to apply. I will do it today. \n> > Thanks.\n> \n> Patch applied to master, and the first paragraph diff was applied to PG\n> 12-13 too.\n\nThanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 06 Oct 2020 11:31:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"cert\" + clientcert=verify-ca in pg_hba.conf?" } ]
[ { "msg_contents": "Hi,\n\nI have a cluster of three nodes A, B, C and I'm using postgres bdr for\nreplication.\n\nI have some code to execute after a value change in the table, for this I\nhave added a trigger.\n\nWhen I call POST REST API from any one of the nodes it will execute the\ncode, and on all other nodes the trigger will execute the code. The issue\nis the trigger will execute the code on the same node as well where POST\ncall is happening.\n\nEx. POST REST API on node B. Directly execute a+b. save a and b in DB.\nresponse\n\nTrigger activates on all nodes. And it will execute a+b on all nodes. (but\nI don't want to execute it again on node B)\n\n\nThanks,\nSatish\n\nHi,I have a cluster of three nodes A, B, C and I'm using postgres bdr for replication.I have some code to execute after a value change in the table, for this I have added a trigger.When I call POST REST API from any one of the nodes it will execute the code, and on all other nodes the trigger will execute the code. The issue is the trigger will execute the code on the same node as well where POST call is happening.Ex. POST REST API on node B. Directly execute a+b. save a and b in DB. responseTrigger activates on all nodes. And it will execute a+b on all nodes. (but I don't want to execute it again on node B)Thanks,Satish", "msg_date": "Thu, 16 Jul 2020 12:15:25 +0530", "msg_from": "Satish S <satishcampus@gmail.com>", "msg_from_op": true, "msg_subject": "How to identify trigger is called from the node where row is created" }, { "msg_contents": "On Thu, Jul 16, 2020 at 1:16 AM Satish S <satishcampus@gmail.com> wrote:\n\n> I have a cluster of three nodes A, B, C and I'm using postgres bdr for\n> replication\n>\nThis isn’t the right mailing list for this topic. Core PostgreSQL doesn’t\nhave BDR so this seems like it should be directed to whichever product is\nproviding that capability. I’m pretty sure that nothing in core PostgreSQL\nties nodes to data. But if you want to explore this on the community lists\nyou want to send to the -general list, not -hackers. Per our community\nmailing list listing:\n\nhttps://www.postgresql.org/list/\n\n\"The PostgreSQL developers team lives here. Discussion of current\ndevelopment issues, problems and bugs, and proposed new features. If your\nquestion cannot be answered by people in the other lists, and it is likely\nthat only a developer will know the answer, you may re-post your question\nin this list. You must try elsewhere first!\"\n\nDavid J.\n\nOn Thu, Jul 16, 2020 at 1:16 AM Satish S <satishcampus@gmail.com> wrote:I have a cluster of three nodes A, B, C and I'm using postgres bdr for replicationThis isn’t the right mailing list for this topic.  Core PostgreSQL doesn’t have BDR so this seems like it should be directed to whichever product is providing that capability.  I’m pretty sure that nothing in core PostgreSQL ties nodes to data.  But if you want to explore this on the community lists you want to send to the -general list, not -hackers.  Per our community mailing list listing:https://www.postgresql.org/list/  \"The PostgreSQL developers team lives here. Discussion of current development issues, problems and bugs, and proposed new features. If your question cannot be answered by people in the other lists, and it is likely that only a developer will know the answer, you may re-post your question in this list. You must try elsewhere first!\"David J.", "msg_date": "Thu, 16 Jul 2020 07:45:11 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to identify trigger is called from the node where row is\n created" } ]
[ { "msg_contents": "Is there any reason why src/timezone/tznames/Europe.txt is encoded in\nlatin1 and not utf-8?\n\nThe offending lines are these timezones:\n\nMESZ 7200 D # Mitteleurop�ische Sommerzeit (German)\n # (attested in IANA comments though not their code)\n\nMEZ 3600 # Mitteleurop�ische Zeit (German)\n # (attested in IANA comments though not their code)\n\nIt's not important for anything, just general sanity. (Spotted by\nDebian's package checker, lintian.)\n\nChristoph\n\n\n", "msg_date": "Thu, 16 Jul 2020 12:07:43 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Encoding of src/timezone/tznames/Europe.txt" }, { "msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Is there any reason why src/timezone/tznames/Europe.txt is encoded in\n> latin1 and not utf-8?\n\n> The offending lines are these timezones:\n\n> MESZ 7200 D # Mitteleuropäische Sommerzeit (German)\n> # (attested in IANA comments though not their code)\n\n> MEZ 3600 # Mitteleuropäische Zeit (German)\n> # (attested in IANA comments though not their code)\n\n> It's not important for anything, just general sanity. (Spotted by\n> Debian's package checker, lintian.)\n\nHm. TBH, my first reaction is \"let's lose the accents\". I agree that\nit's not great to be installing files that are encoded in latin1, but\nit might not be great to be installing files that are encoded in utf8\neither. Aren't we better off insisting that these files be plain ascii?\n\nI notice that the copies of these lines in src/timezone/tznames/Default\nseem to be ascii-ified already. Haven't traced the git history,\nbut I bet somebody fixed Default without noticing the other copy.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Jul 2020 10:24:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Encoding of src/timezone/tznames/Europe.txt" }, { "msg_contents": "Re: Tom Lane\n> > MESZ 7200 D # Mitteleurop�ische Sommerzeit (German)\n> > # (attested in IANA comments though not their code)\n> \n> > It's not important for anything, just general sanity. (Spotted by\n> > Debian's package checker, lintian.)\n> \n> Hm. TBH, my first reaction is \"let's lose the accents\".\n\nOr that, yes. (The correct German transliteration is\n\"Mitteleuropaeische\" with 'ae'.)\n\nChristoph\n\n\n", "msg_date": "Thu, 16 Jul 2020 21:46:03 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: Encoding of src/timezone/tznames/Europe.txt" }, { "msg_contents": "On Thu, Jul 16, 2020 at 09:46:03PM +0200, Christoph Berg wrote:\n> Or that, yes. (The correct German transliteration is\n> \"Mitteleuropaeische\" with 'ae'.)\n\ntznames/Europe.txt is iso-latin-1-unix for buffer-file-coding-system\nsince its introduction in d8b5c95, and tznames/Default is using ASCII\nas well since this point. +1 to switch all that to ASCII and give up\non the accents.\n--\nMichael", "msg_date": "Fri, 17 Jul 2020 10:16:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Encoding of src/timezone/tznames/Europe.txt" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Jul 16, 2020 at 09:46:03PM +0200, Christoph Berg wrote:\n>> Or that, yes. (The correct German transliteration is\n>> \"Mitteleuropaeische\" with 'ae'.)\n\n> tznames/Europe.txt is iso-latin-1-unix for buffer-file-coding-system\n> since its introduction in d8b5c95, and tznames/Default is using ASCII\n> as well since this point. 
+1 to switch all that to ASCII and give up\n> on the accents.\n\nDone that way. I also checked for other discrepancies between\ntznames/Default and the other files, and found a few more trivialities.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Jul 2020 11:06:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Encoding of src/timezone/tznames/Europe.txt" }, { "msg_contents": "Re: Tom Lane\n> Done that way. I also checked for other discrepancies between\n> tznames/Default and the other files, and found a few more trivialities.\n\nThanks!\n\nChristoph\n\n\n", "msg_date": "Fri, 17 Jul 2020 19:24:28 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: Encoding of src/timezone/tznames/Europe.txt" }, { "msg_contents": "On Fri, Jul 17, 2020 at 07:24:28PM +0200, Christoph Berg wrote:\n> Re: Tom Lane\n>> Done that way. I also checked for other discrepancies between\n>> tznames/Default and the other files, and found a few more trivialities.\n> \n> Thanks!\n\n+1.\n--\nMichael", "msg_date": "Sat, 18 Jul 2020 10:40:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Encoding of src/timezone/tznames/Europe.txt" } ]
[ { "msg_contents": "Hi:\n\nEvery pg_type has typinput/typoutput and typreceive/typsend\nthey are used for text format and binary format accordingly. What is\nthe difference between them in practice? For example, for a PG user,\nshall they choose binary format or text format? Actually I don't even\nknow how to set this in JDBC. Which one is more common in real\nlife and why?\n\nThe reason I ask this is because I have a task to make numeric output\nsimilar to oracle.\n\nOracle:\n\nSQL> select 2 / 1.0 from dual;\n\n 2/1.0\n----------\n 2\n\nPG:\n\npostgres=# select 2 / 1.0;\n ?column?\n--------------------\n 2.0000000000000000\n(1 row)\n\nIf the user uses text format, I can just hack some numeric_out function,\nbut if they\nuse binary format, looks I have to change the driver they used for it. Am\nI\nunderstand it correctly?\n\n-- \nBest Regards\nAndy Fan\n\nHi:Every pg_type has typinput/typoutput and typreceive/typsend they are used for text format and binary format accordingly.  What isthe difference between them in practice?  For example,  for a PG user,shall they choose binary format or text format?  Actually I don't evenknow how to set this in JDBC.  Which one is more common in real life and why? The reason I ask this is because I have a task to make numeric outputsimilar to oracle. Oracle:SQL> select 2 / 1.0 from dual;     2/1.0----------         2PG:postgres=# select  2 / 1.0;      ?column?-------------------- 2.0000000000000000(1 row)If the user uses text format, I can just hack some numeric_out function, but if theyuse binary format,  looks I have to change the driver they used for it.  Am I understand it correctly?-- Best RegardsAndy Fan", "msg_date": "Fri, 17 Jul 2020 00:52:00 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Difference for Binary format vs Text format for client-server\n communication" }, { "msg_contents": "On 2020-07-16 18:52, Andy Fan wrote:\n> The reason I ask this is because I have a task to make numeric output\n> similar to oracle.\n> \n> Oracle:\n> \n> SQL> select 2 / 1.0 from dual;\n> \n>      2/1.0\n> ----------\n>          2\n> \n> PG:\n> \n> postgres=# select  2 / 1.0;\n>       ?column?\n> --------------------\n>  2.0000000000000000\n> (1 row)\n> \n> If the user uses text format, I can just hack some numeric_out function, \n> but if they\n> use binary format,  looks I have to change the driver they used for it. \n> Am I\n> understand it correctly?\n\nI think what you should be looking at is why the numeric division \nfunction produces that scale and possibly make changes there. By the \ntime the type's output or send function is invoked, that's already \ndecided. 
The output/send functions are not the place to make scale or \nother semantic adjustments.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 25 Jul 2020 19:49:10 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Difference for Binary format vs Text format for client-server\n communication" }, { "msg_contents": "On Sun, Jul 26, 2020 at 1:49 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-07-16 18:52, Andy Fan wrote:\n> > The reason I ask this is because I have a task to make numeric output\n> > similar to oracle.\n> >\n> > Oracle:\n> >\n> > SQL> select 2 / 1.0 from dual;\n> >\n> > 2/1.0\n> > ----------\n> > 2\n> >\n> > PG:\n> >\n> > postgres=# select 2 / 1.0;\n> > ?column?\n> > --------------------\n> > 2.0000000000000000\n> > (1 row)\n> >\n> > If the user uses text format, I can just hack some numeric_out function,\n> > but if they\n> > use binary format, looks I have to change the driver they used for it.\n> > Am I\n> > understand it correctly?\n>\n> I think what you should be looking at is why the numeric division\n> function produces that scale and possibly make changes there.\n\n\nThanks, I think you are talking about the select_div_scale function, which\nis\ncalled before the real division task in div_var. so it will be hard to hack\nat that part. Beside that, oracle returns the zero-trim version no matter\nif division\nis involved(I forgot to mention at the first).\n\nAt last, I just hacked the numeric_out function, then it works like Oracle\nnow.\nHowever it just works in text format. I tried JDBC, and it uses text\nformat by\ndefault. The solution is not good enough but it is ok for my purpose\ncurrently.\n\nIIUC, if a driver uses text protocol for a data type, then it works like\nthis: 1). server\ngets a value in binary format. 2). server convert it to string and send it\nvia network,\n3). client gets the string. 4). client converts the string to a given data\ntype. looks it is much\nmore complex than binary protocol. then why text protocol is chosen by\ndefault.\n\n\n> By the\n> time the type's output or send function is invoked, that's already\n> decided. The output/send functions are not the place to make scale or\n> other semantic adjustments.\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n-- \nBest Regards\nAndy Fan\n\nOn Sun, Jul 26, 2020 at 1:49 AM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2020-07-16 18:52, Andy Fan wrote:\n> The reason I ask this is because I have a task to make numeric output\n> similar to oracle.\n> \n> Oracle:\n> \n> SQL> select 2 / 1.0 from dual;\n> \n>       2/1.0\n> ----------\n>           2\n> \n> PG:\n> \n> postgres=# select  2 / 1.0;\n>        ?column?\n> --------------------\n>   2.0000000000000000\n> (1 row)\n> \n> If the user uses text format, I can just hack some numeric_out function, \n> but if they\n> use binary format,  looks I have to change the driver they used for it.  \n> Am I\n> understand it correctly?\n\nI think what you should be looking at is why the numeric division \nfunction produces that scale and possibly make changes there. Thanks, I  think you are talking about the select_div_scale function, which iscalled before the real division task in div_var.  so it will be hard to hackat that part.  
Beside that,  oracle returns the zero-trim version no matter if divisionis involved(I forgot to mention at the first). At last, I just hacked the numeric_out  function, then it works like Oracle now. However it just works in text format. I tried JDBC,  and it uses text format bydefault.  The solution is not good enough but it is ok for my purpose currently. IIUC, if a driver uses text protocol for a data type,  then it works like this:  1). servergets a value in binary format.  2). server convert it to string and send it via network,  3). client gets the string. 4). client converts the string to a given data type.  looks it is muchmore complex than binary protocol.  then why text protocol is chosen by default.\n  By the \ntime the type's output or send function is invoked, that's already \ndecided.  The output/send functions are not the place to make scale or \nother semantic adjustments.\n\n-- \nPeter Eisentraut              http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n-- Best RegardsAndy Fan", "msg_date": "Sun, 26 Jul 2020 17:36:29 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Difference for Binary format vs Text format for client-server\n communication" } ]
[ { "msg_contents": "Dean Rasheed pointed out that in_range for float4/float8 seems to be\ndoing the wrong thing for infinite offsets, and after some testing\nI concur that it is. For example, a sort key of '-infinity' should\nbe considered to be in-range for a range specified as RANGE BETWEEN\n'inf' PRECEDING AND 'inf' PRECEDING; but with the code as it stands,\nit isn't. I propose the attached patch, which probably should be\nback-patched.\n\nWhen the current row's value is +infinity, actual computation of\nbase - offset would yield NaN, making it a bit unclear whether\nwe should consider -infinity to be in-range. It seems to me that\nwe should, as that gives more natural-looking results in the test\ncases, so that's how the patch does it.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 16 Jul 2020 14:58:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Wrong results from in_range() tests with infinite offset" }, { "msg_contents": "I wrote:\n> When the current row's value is +infinity, actual computation of\n> base - offset would yield NaN, making it a bit unclear whether\n> we should consider -infinity to be in-range. It seems to me that\n> we should, as that gives more natural-looking results in the test\n> cases, so that's how the patch does it.\n\nActually, after staring at those results awhile longer, I decided\nthey were wrong. The results shown here seem actually sane ---\nfor instance, -Infinity shouldn't \"infinitely precede\" itself,\nI think. (Maybe if you got solipsistic enough you could argue\nthat that is valid, but it seems pretty bogus.)\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 16 Jul 2020 17:50:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Wrong results from in_range() tests with infinite offset" }, { "msg_contents": "On Thu, 16 Jul 2020, 22:50 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > When the current row's value is +infinity, actual computation of\n> > base - offset would yield NaN, making it a bit unclear whether\n> > we should consider -infinity to be in-range. It seems to me that\n> > we should, as that gives more natural-looking results in the test\n> > cases, so that's how the patch does it.\n>\n> Actually, after staring at those results awhile longer, I decided\n> they were wrong. The results shown here seem actually sane ---\n> for instance, -Infinity shouldn't \"infinitely precede\" itself,\n> I think. (Maybe if you got solipsistic enough you could argue\n> that that is valid, but it seems pretty bogus.)\n>\n\nHmm, that code looks a bit fishy to me, but I really need to think about it\nsome more. I'll take another look tomorrow, and maybe it'll become clearer.\n\nRegards,\nDean\n\nOn Thu, 16 Jul 2020, 22:50 Tom Lane, <tgl@sss.pgh.pa.us> wrote:I wrote:\n> When the current row's value is +infinity, actual computation of\n> base - offset would yield NaN, making it a bit unclear whether\n> we should consider -infinity to be in-range.  It seems to me that\n> we should, as that gives more natural-looking results in the test\n> cases, so that's how the patch does it.\n\nActually, after staring at those results awhile longer, I decided\nthey were wrong.  The results shown here seem actually sane ---\nfor instance, -Infinity shouldn't \"infinitely precede\" itself,\nI think.  (Maybe if you got solipsistic enough you could argue\nthat that is valid, but it seems pretty bogus.)Hmm, that code looks a bit fishy to me, but I really need to think about it some more. 
I'll take another look tomorrow, and maybe it'll become clearer.Regards,Dean", "msg_date": "Fri, 17 Jul 2020 00:47:14 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wrong results from in_range() tests with infinite offset" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Thu, 16 Jul 2020, 22:50 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n>> Actually, after staring at those results awhile longer, I decided\n>> they were wrong. The results shown here seem actually sane ---\n>> for instance, -Infinity shouldn't \"infinitely precede\" itself,\n>> I think. (Maybe if you got solipsistic enough you could argue\n>> that that is valid, but it seems pretty bogus.)\n\n> Hmm, that code looks a bit fishy to me, but I really need to think about it\n> some more. I'll take another look tomorrow, and maybe it'll become clearer.\n\nIt's certainly verbose, so I'd like to find a more concise way to\nwrite the logic. But the v2 results seem right.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Jul 2020 20:59:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Wrong results from in_range() tests with infinite offset" }, { "msg_contents": "On Fri, 17 Jul 2020 at 01:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> > On Thu, 16 Jul 2020, 22:50 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n> >> Actually, after staring at those results awhile longer, I decided\n> >> they were wrong. The results shown here seem actually sane ---\n> >> for instance, -Infinity shouldn't \"infinitely precede\" itself,\n> >> I think. (Maybe if you got solipsistic enough you could argue\n> >> that that is valid, but it seems pretty bogus.)\n>\n> > Hmm, that code looks a bit fishy to me, but I really need to think about it\n> > some more. I'll take another look tomorrow, and maybe it'll become clearer.\n>\n> It's certainly verbose, so I'd like to find a more concise way to\n> write the logic. 
But the v2 results seem right.\n>\n\nI'm finding it hard to come up with a principled argument to say\nexactly what the right results should be.\n\nAs things stand (pre-patch), a window frame defined as \"BETWEEN 'inf'\nPRECEDING AND 'inf' PRECEDING\", produces the following:\n\n id | f_float4 | first_value | last_value\n----+-----------+-------------+------------\n 0 | -Infinity | |\n 1 | -3 | |\n 2 | -1 | |\n 3 | 0 | |\n 4 | 1.1 | |\n 5 | 1.12 | |\n 6 | 2 | |\n 7 | 100 | |\n 8 | Infinity | |\n 9 | NaN | 9 | 9\n(10 rows)\n\nwhich is clearly wrong, because -Inf obviously infinitely precedes all\nthe other (non-NaN) values.\n\nWith the first version of the patch, that result became\n\n id | f_float4 | first_value | last_value\n----+-----------+-------------+------------\n 0 | -Infinity | 0 | 0\n 1 | -3 | 0 | 0\n 2 | -1 | 0 | 0\n 3 | 0 | 0 | 0\n 4 | 1.1 | 0 | 0\n 5 | 1.12 | 0 | 0\n 6 | 2 | 0 | 0\n 7 | 100 | 0 | 0\n 8 | Infinity | 0 | 0\n 9 | NaN | 9 | 9\n(10 rows)\n\nwhich is definitely better, but the one obvious problem is last_value\nfor id=8, because all the values in earlier rows infinitely precede\n+Inf, so they should be included in the window frame for that row.\n\nWith the second version of the patch, the result is\n\n id | f_float4 | first_value | last_value\n----+-----------+-------------+------------\n 0 | -Infinity | |\n 1 | -3 | 0 | 0\n 2 | -1 | 0 | 0\n 3 | 0 | 0 | 0\n 4 | 1.1 | 0 | 0\n 5 | 1.12 | 0 | 0\n 6 | 2 | 0 | 0\n 7 | 100 | 0 | 0\n 8 | Infinity | 0 | 7\n 9 | NaN | 9 | 9\n(10 rows)\n\nThat fixes last_value for id=8, using the fact that all values less\nthan +Inf infinitely precede it, and also assuming that +Inf does not\ninfinitely precede itself, which seems reasonable.\n\nThe other change is in the first row, because it now assumes that -Inf\ndoesn't infinitely precede itself, which seems reasonable from a\nconsistency point of view.\n\nHowever, that is also a bit odd because it goes against the documented\ncontract of in_range(), which is supposed to do the tests\n\n val <= base +/- offset1\n val >= base +/- offset2\n\nwhich for \"BETWEEN 'inf' PRECEDING AND 'inf' PRECEDING\" become\n\n val = base - Inf\n\nwhich is -Inf, even if base = -Inf. So I'd say that the window\ninfinitely preceding -Inf contains -Inf, since -Inf - Inf = -Inf.\n\nBut if -Inf infinitely precedes -Inf, it probably also makes sense to\nsay that +Inf infinitely precedes +Inf for consistency, even though\nthat really isn't well-defined, since Inf - Inf = NaN. Doing that is\ncertainly a lot easier to code, because it just needs to return true\nif base +/- offset would be NaN, i.e.,\n\n /*\n * Deal with cases where both base and offset are infinite, and computing\n * base +/- offset would produce NaN. This corresponds to a window frame\n * whose boundary infinitely precedes +inf or infinitely follows -inf,\n * which is not well-defined. 
For consistency with other cases involving\n * infinities, such as the fact that +inf infinitely follows +inf, we\n * choose to assume that +inf infinitely precedes +inf and -inf infinitely\n * follows -inf, and therefore that all finite and infinite values are in\n * such a window frame.\n */\n if (isinf(base) && isinf(offset))\n {\n if ((base > 0 && sub) || (base < 0 && !sub))\n PG_RETURN_BOOL(true);\n }\n\nand the result is\n\n id | f_float8 | first_value | last_value\n----+-----------+-------------+------------\n 0 | -Infinity | 0 | 0\n 1 | -3 | 0 | 0\n 2 | -1 | 0 | 0\n 3 | 0 | 0 | 0\n 4 | 1.1 | 0 | 0\n 5 | 1.12 | 0 | 0\n 6 | 2 | 0 | 0\n 7 | 100 | 0 | 0\n 8 | Infinity | 0 | 8\n 9 | NaN | 9 | 9\n(10 rows)\n\nwhich looks about equally sensible. To me, the fact that the window\ninfinitely preceding -Inf includes -Inf makes more sense, but the\nmeaning of the window infinitely preceding +Inf is much less obvious,\nand not really well-defined.\n\nRegards,\nDean\n\n\n", "msg_date": "Sat, 18 Jul 2020 09:16:39 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wrong results from in_range() tests with infinite offset" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> if (isinf(base) && isinf(offset))\n> {\n> if ((base > 0 && sub) || (base < 0 && !sub))\n> PG_RETURN_BOOL(true);\n> }\n\nYeah, I'd experimented with more-or-less that logic before arriving at\nmy v2 patch. I didn't like the outcome that \"inf both infinitely precedes\nand infinitely follows itself\". Still, it is nicely simple.\n\nTo make sense of this behavior, you have to argue that +/-inf are not\nin any way concrete values, but represent some sort of infinite ranges;\nthen there could be some members of the class \"inf\" that infinitely\nprecede other members. I thought that was bending the mathematical\nconcept a bit too far. However, this isn't an area of math that I've\nstudied in any detail, so maybe it's a standard interpretation.\n\nStill, I think the results my v2 patch gets make more sense than these.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Jul 2020 10:06:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Wrong results from in_range() tests with infinite offset" }, { "msg_contents": "I wrote:\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n>> if (isinf(base) && isinf(offset))\n>> {\n>> if ((base > 0 && sub) || (base < 0 && !sub))\n>> PG_RETURN_BOOL(true);\n>> }\n\n> Yeah, I'd experimented with more-or-less that logic before arriving at\n> my v2 patch. I didn't like the outcome that \"inf both infinitely precedes\n> and infinitely follows itself\". Still, it is nicely simple.\n\nI spent some more time thinking about this, and came to a couple\nof conclusions.\n\nFirst, let's take it as given that we should only special-case\nsituations where the sum would be computed as NaN. That destroys my\nposition that, for instance, -inf shouldn't be included in the range\nthat ends 'inf preceding' itself, because the normal calculation goes\nthrough as -inf <= (-inf - inf) which yields TRUE without forming any\nNaN. Although that conclusion seems weird at first glance, there\nseems no way to poke a hole in it without rejecting the principle\nthat inf + inf = inf.\n\nSecond, if -inf is included in the range that ends 'inf preceding'\nitself, symmetry dictates that it is also included in the range that\nbegins 'inf following' itself. 
In that case we'd be trying to compute\n-inf >= (-inf + inf) which does involve a NaN, but this argument says\nwe should return TRUE.\n\nThe other three cases where we'd hit NaNs are likewise symmetric with\nnon-NaN cases that'd return TRUE. Hence, I'm forced to the conclusion\nthat you've got it right above. I might write the code a little\ndifferently, but const-TRUE-for-NaN-cases seems like the right behavior.\n\nSo I withdraw my objection to defining it this way. Unless somebody\nelse weighs in, I'll commit it like that in a day or two.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Jul 2020 17:28:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Wrong results from in_range() tests with infinite offset" }, { "msg_contents": "I wrote:\n> The other three cases where we'd hit NaNs are likewise symmetric with\n> non-NaN cases that'd return TRUE. Hence, I'm forced to the conclusion\n> that you've got it right above. I might write the code a little\n> differently, but const-TRUE-for-NaN-cases seems like the right behavior.\n> So I withdraw my objection to defining it this way. Unless somebody\n> else weighs in, I'll commit it like that in a day or two.\n\nPushed, but I chickened out of back-patching. The improvement in what\nhappens for finite comparison values seems somewhat counterbalanced by\nthe possibility that someone might not like the definition we arrived\nat for infinities. So, it's not quite an open-and-shut bug fix, so\nI just put it in HEAD (for now anyway).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Jul 2020 22:06:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Wrong results from in_range() tests with infinite offset" }, { "msg_contents": "On Tue, 21 Jul 2020 at 03:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Pushed, but I chickened out of back-patching. The improvement in what\n> happens for finite comparison values seems somewhat counterbalanced by\n> the possibility that someone might not like the definition we arrived\n> at for infinities. So, it's not quite an open-and-shut bug fix, so\n> I just put it in HEAD (for now anyway).\n>\n\nYeah, that seems fair enough, and it's quite an obscure corner-case\nthat has gone unnoticed for quite some time.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 21 Jul 2020 09:06:09 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wrong results from in_range() tests with infinite offset" } ]
[ { "msg_contents": "Dean Rasheed questioned this longstanding behavior:\n\nregression=# SELECT 'nan'::float8 / '0'::float8;\nERROR: division by zero\n\nAfter a bit of research I think he's right: per IEEE 754 this should\nyield NaN, not an error. Accordingly I propose the attached patch.\nThis is probably not something to back-patch, though.\n\nOne thing that's not very clear to me is which of these spellings\nis preferable:\n\n\tif (unlikely(val2 == 0.0) && !isnan(val1))\n\tif (unlikely(val2 == 0.0 && !isnan(val1)))\n\nI think we can reject this variant:\n\n\tif (unlikely(val2 == 0.0) && unlikely(!isnan(val1)))\n\nsince actually the second condition *is* pretty likely.\nBut I don't know which of the first two would give better\ncode. Andres, any thoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 16 Jul 2020 15:29:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "NaN divided by zero should yield NaN" }, { "msg_contents": "On Thu, 16 Jul 2020 at 20:29, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dean Rasheed questioned this longstanding behavior:\n>\n> regression=# SELECT 'nan'::float8 / '0'::float8;\n> ERROR: division by zero\n>\n> After a bit of research I think he's right: per IEEE 754 this should\n> yield NaN, not an error. Accordingly I propose the attached patch.\n> This is probably not something to back-patch, though.\n>\n\nAgreed.\n\n> One thing that's not very clear to me is which of these spellings\n> is preferable:\n>\n> if (unlikely(val2 == 0.0) && !isnan(val1))\n> if (unlikely(val2 == 0.0 && !isnan(val1)))\n>\n\nMy guess is that the first would be better, since it would tell the\ncompiler that it's unlikely to need to do the NaN test, so it would be\nkind of like doing\n\n if (unlikely(val2 == 0.0))\n if (!isnan(val1)))\n\nRegards,\nDean\n\n\n", "msg_date": "Fri, 17 Jul 2020 19:08:53 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: NaN divided by zero should yield NaN" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Thu, 16 Jul 2020 at 20:29, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> One thing that's not very clear to me is which of these spellings\n>> is preferable:\n>> \tif (unlikely(val2 == 0.0) && !isnan(val1))\n>> \tif (unlikely(val2 == 0.0 && !isnan(val1)))\n\n> My guess is that the first would be better, since it would tell the\n> compiler that it's unlikely to need to do the NaN test,\n\nYeah, that's the straightforward way to think about it, but I've\nfound that gcc is sometimes less than straightforward ;-). Still,\nthere's no obvious reason to do it the second way, so I pushed the\nfirst way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Jul 2020 19:46:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: NaN divided by zero should yield NaN" } ]
[ { "msg_contents": "Hackers,\n\nAs a reaction to this documentation comment [1] I went through the main\nparagraph of the Database Management Overview and came up with the reworded\nand expanded page. Proposed for HEAD only. Added to the commitfest\n2020-09.\n\n1.\nhttps://www.postgresql.org/message-id/flat/57083a441ddd2f3b9cdc0967c6689384cddeeedb.camel%40cybertec.at#f7198de1af14f7c5d84e7095b6b52bff\n\nDavid J.", "msg_date": "Thu, 16 Jul 2020 14:55:54 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Improve Managing Databases Overview Doc Page" }, { "msg_contents": "On Thu, Jul 16, 2020 at 02:55:54PM -0700, David G. Johnston wrote:\n> Hackers,\n> \n> As a reaction to this documentation comment [1] I went through the main\n> paragraph of the Database Management Overview and came up with the reworded and\n> expanded page.� Proposed for HEAD only.� Added to the commitfest 2020-09.\n> \n> 1.�https://www.postgresql.org/message-id/flat/\n> 57083a441ddd2f3b9cdc0967c6689384cddeeedb.camel%40cybertec.at#\n> f7198de1af14f7c5d84e7095b6b52bff\n\nFYI, patch applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 21 Aug 2020 21:00:33 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Improve Managing Databases Overview Doc Page" } ]
[ { "msg_contents": "Hi hackers,\n\nAttached is a patch for supporting queries in the WHEN expression of \nstatement triggers. It is restricted so that the expression can \nreference only the transition tables and the table to which the trigger \nis attached. This seemed to make the most sense in that it follows what \nyou can do in the per row triggers. I did have a look in the standards \ndocument about triggers, and couldn't see any restrictions mentioned, \nbut nevertheless thought it made most sense.\n\nOne possibility controversial aspect is that the patch doesn't use SPI \nto evaluate the expression; it constructs a Query instead and passes it \nto the executor. Don't know what people's thoughts are on doing that?\n\n-Joe", "msg_date": "Thu, 16 Jul 2020 23:22:13 +0100", "msg_from": "\"Joe Wildish\" <joe@lateraljoin.com>", "msg_from_op": true, "msg_subject": "[PATCH] Allow queries in WHEN expression of FOR EACH STATEMENT\n triggers" }, { "msg_contents": "> On 17 Jul 2020, at 00:22, Joe Wildish <joe@lateraljoin.com> wrote:\n\n> Attached is a patch for supporting queries in the WHEN expression of statement triggers.at?\n\nHi!,\n\nPlease create an entry for this patch in the 2020-09 commitfest to make sure\nit's properly tracked:\n\n\thttps://commitfest.postgresql.org/29/\n\ncheers ./daniel\n\n", "msg_date": "Fri, 17 Jul 2020 01:32:56 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow queries in WHEN expression of FOR EACH STATEMENT\n triggers" }, { "msg_contents": "Hi Joe,\n\nThis is my review of your patch\nOn Fri, Jul 17, 2020 at 1:22 AM Joe Wildish <joe@lateraljoin.com> wrote:\n\n> Hi hackers,\n>\n> Attached is a patch for supporting queries in the WHEN expression of\n> statement triggers.\n\n\n\n\n- Currently, <literal>WHEN</literal> expressions cannot contain\n\n- subqueries.\n\nsubqueries in row trigger's is not supported in your patch so the the\ndocumentation have to reflect it\n\n\n+ </literal>UPDATE</literal> triggers are able to refer to both\n</literal>OLD</literal>\n\n+ and <literal>NEW</literal>\n\nOpening and ending tag mismatch on UPDATE and OLD literal so documentation\nbuild fails and please update the documentation on server programming\nsection too\n\n\n+ /*\n\n+ * Plan the statement. 
No need to rewrite as it can only refer to the\n\n+ * transition tables OLD and NEW, and the relation which is being\n\n+ * triggered upon.\n\n+ */\n\n+ stmt = pg_plan_query(query, trigger->tgqual, 0, NULL);\n\n+ dest = CreateDestReceiver(DestTuplestore);\n\n+ store = tuplestore_begin_heap(false, false, work_mem);\n\n+ tupdesc = CreateTemplateTupleDesc(1);\n\n+ whenslot = MakeSingleTupleTableSlot(tupdesc, &TTSOpsMinimalTuple);\n\nInstead of planning every time the trigger fire I suggest to store plan or\nprepared statement node so planning time can be saved\n\n\nThere are server crash on the following sequence of command\n\nCREATE TABLE main_table (a int unique, b int);\n\n\nCREATE FUNCTION trigger_func() RETURNS trigger LANGUAGE plpgsql AS '\n\nBEGIN\n\nRAISE NOTICE ''trigger_func(%) called: action = %, when = %, level = %'',\nTG_ARGV[0], TG_OP, TG_WHEN, TG_LEVEL;\n\nRETURN NULL;\n\nEND;';\n\n\nINSERT INTO main_table DEFAULT VALUES;\n\n\nCREATE TRIGGER after_insert AFTER INSERT ON main_table\n\nREFERENCING NEW TABLE AS NEW FOR EACH STATEMENT\n\nWHEN (500 <= ANY(SELECT b FROM NEW union SELECT a FROM main_table))\n\nEXECUTE PROCEDURE trigger_func('after_insert');\n\n\nINSERT INTO main_table (a, b) VALUES\n\n(101, 498),\n\n(102, 499);\n\nserver crashed\n\n\nregards\n\nSurafel\n\n\n\nHi Joe,\n\nThis is my review of\nyour patch \n\nOn Fri, Jul 17, 2020 at 1:22 AM Joe Wildish <joe@lateraljoin.com> wrote:Hi hackers,\n\nAttached is a patch for supporting queries in the WHEN expression of \nstatement triggers.  \n\n- Currently,\n<literal>WHEN</literal> expressions cannot contain\n- subqueries.\nsubqueries in row\ntrigger's is not supported in your patch so the the documentation\nhave to reflect it\n+ \n</literal>UPDATE</literal> triggers are able to refer to\nboth </literal>OLD</literal>\n+ and\n<literal>NEW</literal>\nOpening and ending\ntag mismatch on UPDATE and OLD literal so documentation build fails\nand please update the documentation on server programming section too\n\n\n+\t\t/*\n+\t\t * Plan the\nstatement. 
No need to rewrite as it can only refer to the\n+\t\t * transition\ntables OLD and NEW, and the relation which is being\n+\t\t * triggered\nupon.\n+\t\t */\n+\t\tstmt =\npg_plan_query(query, trigger->tgqual, 0, NULL);\n+\t\tdest =\nCreateDestReceiver(DestTuplestore);\n+\t\tstore =\ntuplestore_begin_heap(false, false, work_mem);\n+\t\ttupdesc =\nCreateTemplateTupleDesc(1);\n+\t\twhenslot =\nMakeSingleTupleTableSlot(tupdesc, &TTSOpsMinimalTuple);\nInstead of planning\nevery time the trigger fire I suggest to store plan or prepared\nstatement node so planning time can be saved\n\n\nThere are server\ncrash on the following sequence of command\nCREATE TABLE\nmain_table (a int unique, b int);\nCREATE FUNCTION\ntrigger_func() RETURNS trigger LANGUAGE plpgsql AS '\nBEGIN\n\tRAISE NOTICE\n''trigger_func(%) called: action = %, when = %, level = %'',\nTG_ARGV[0], TG_OP, TG_WHEN, TG_LEVEL;\n\tRETURN NULL;\nEND;';\n\n\nINSERT INTO\nmain_table DEFAULT VALUES;\n\n\nCREATE TRIGGER\nafter_insert AFTER INSERT ON main_table\n REFERENCING NEW\nTABLE AS NEW FOR EACH STATEMENT\n WHEN (500 <=\nANY(SELECT b FROM NEW union SELECT a FROM main_table))\n EXECUTE PROCEDURE\ntrigger_func('after_insert');\n\n\nINSERT INTO\nmain_table (a, b) VALUES\n (101, 498),\n (102, 499);\nserver crashed \n\n\n\nregards \n\nSurafel", "msg_date": "Thu, 3 Sep 2020 21:22:31 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow queries in WHEN expression of FOR EACH STATEMENT\n triggers" }, { "msg_contents": "On Thu, Sep 03, 2020 at 09:22:31PM +0300, Surafel Temesgen wrote:\n> server crashed\n\nThat's a problem. As this feedback has not been answered after two\nweeks, I am marking the patch as returned with feedback.\n--\nMichael", "msg_date": "Wed, 30 Sep 2020 16:37:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow queries in WHEN expression of FOR EACH STATEMENT\n triggers" }, { "msg_contents": "Hi Surafel,\n\nOn 3 Sep 2020, at 19:22, Surafel Temesgen wrote:\n\n> This is my review of your patch\n\nThanks for the review.\n\n> subqueries in row trigger's is not supported in your patch so the the\n> documentation have to reflect it\n\nIt is still the case that the documentation says this. But, that may \nhave been unclear as the documentation wouldn't compile (as you noted), \nso it wasn't possible to read it in the rendered form.\n\n>\n> + </literal>UPDATE</literal> triggers are able to refer to both\n> </literal>OLD</literal>\n>\n> + and <literal>NEW</literal>\n>\n> Opening and ending tag mismatch on UPDATE and OLD literal so \n> documentation\n> build fails and please update the documentation on server programming\n> section too\n\nFixed.\n\nI've also amended the server programming section to accurately reflect \nhow WHEN conditions can be used.\n\n> Instead of planning every time the trigger fire I suggest to store \n> plan or\n> prepared statement node so planning time can be saved\n\nYes, that would make sense. I'll look in to what needs to be done.\n\nDo you know if there are other areas of the code that cache plans that \ncould act as a guide as to how best to achieve it?\n\n> There are server crash on the following sequence of command\n>\n> ...\n>\n> INSERT INTO main_table (a, b) VALUES\n>\n> (101, 498),\n>\n> (102, 499);\n>\n> server crashed\n\nThanks. It was an incorrect Assert about NULL returns. 
Fixed.\n\n-Joe", "msg_date": "Wed, 30 Dec 2020 21:01:26 +0000", "msg_from": "\"Joe Wildish\" <joe@lateraljoin.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow queries in WHEN expression of FOR EACH STATEMENT\n triggers" }, { "msg_contents": "Hi Hackers,\n\nAttached is a new version of this patch. I resurrected it after removing it from the commitfest last year; I'll add it back in to the next CF.\n\nThe main change is a switch to using SPI for expression evaluation. The plans are also cached along the same lines as the RI trigger plans.\n\nSome random thoughts on the allowable expressions:\n\na. I originally disallowed functions and table-valued functions from appearing in the expression as they could potentially do anything and everything. However, I noticed that we allow functions in FOR EACH ROW triggers so we are already in that position. Do we want to continue allowing that in FOR EACH STATEMENT triggers? If so, then the choice to restrict the expression to just OLD, NEW and the table being triggered against might be wrong.\n\nb. If a WHEN expression is defined as \"n = (SELECT ...)\", there is the possibility that a user gets the error \"more than one row returned by a subquery used as an expression\" when performing DML, which would be rather cryptic if they didn't know there was a trigger involved. To avoid this, we could disallow scalar expressions, with a hint to use the ANY/ALL quantifiers.\n\n-Joe", "msg_date": "Wed, 02 Jun 2021 13:19:17 +0100", "msg_from": "\"Joe Wildish\" <joe@lateraljoin.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_Allow_queries_in_WHEN_expression_of_FOR_EACH_STATE?=\n =?UTF-8?Q?MENT_triggers?=" }, { "msg_contents": "\"Joe Wildish\" <joe@lateraljoin.com> writes:\n> The main change is a switch to using SPI for expression evaluation. The plans are also cached along the same lines as the RI trigger plans.\n\nI really dislike this implementation technique. Aside from the likely\nperformance hit for existing triggers, I think it opens serious security\nholes, because we can't fully guarantee that deparse-and-reparse doesn't\nchange the semantics. For comparison, the RI trigger code goes to\nridiculous lengths to force exact parsing of the queries it makes,\nand succeeds only because it needs just a very stylized subset of SQL.\nWith a generic user-written expression, we'd be at risk for several\ninherently-ambiguous SQL constructs such as IS DISTINCT FROM (see\nrelevant reading at [1]).\n\nYou could argue that the same hazards apply if the user writes the same\nquery within the body of the trigger, and you'd have a point. But\nwe've made a policy decision that users are on the hook to write their\nfunctions securely. No such decision has ever been taken with respect\nto pre-parsed expression trees. In general, users may expect that\nonce those are parsed by the accepting DDL command, they'll hold still,\nnot get re-interpreted at runtime.\n\n> a. I originally disallowed functions and table-valued functions from appearing in the expression as they could potentially do anything and everything. However, I noticed that we allow functions in FOR EACH ROW triggers so we are already in that position. Do we want to continue allowing that in FOR EACH STATEMENT triggers? If so, then the choice to restrict the expression to just OLD, NEW and the table being triggered against might be wrong.\n\nMeh ... 
users have always been able to write CHECK constraints, WHEN\nclauses, etc, that have side-effects --- they just have to bury that\ninside a function. It's only their own good taste and the lack of\npredictability of when the side-effects will happen that stop them.\nI don't see much point in enforcing restrictions that are easily\nevaded by making a function.\n\n(Relevant to that, I wonder why this patch is only concerned with\nWHEN clauses and not all the other places where we forbid subqueries\nfor implementation simplicity.)\n\n> b. If a WHEN expression is defined as \"n = (SELECT ...)\", there is the possibility that a user gets the error \"more than one row returned by a subquery used as an expression\" when performing DML, which would be rather cryptic if they didn't know there was a trigger involved. To avoid this, we could disallow scalar expressions, with a hint to use the ANY/ALL quantifiers.\n\nHow is that any more cryptic than any other error that can occur\nin a WHEN expression?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/10492.1531515255%40sss.pgh.pa.us\n\n\n", "msg_date": "Wed, 22 Sep 2021 12:09:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re:\n =?UTF-8?Q?Re:_[PATCH]_Allow_queries_in_WHEN_expression_of_FOR_EACH_STATE?=\n =?UTF-8?Q?MENT_triggers?=" }, { "msg_contents": "Hi Tom,\n\nOn Wed, 22 Sep 2021, at 17:09, Tom Lane wrote:\n> > The main change is a switch to using SPI for expression evaluation. The plans are also cached along the same lines as the RI trigger plans.\n> \n> I really dislike this implementation technique. Aside from the likely\n> performance hit for existing triggers, I think it opens serious security\n> holes, because we can't fully guarantee that deparse-and-reparse doesn't\n> change the semantics. For comparison, the RI trigger code goes to\n> ridiculous lengths to force exact parsing of the queries it makes,\n> and succeeds only because it needs just a very stylized subset of SQL.\n> With a generic user-written expression, we'd be at risk for several\n> inherently-ambiguous SQL constructs such as IS DISTINCT FROM (see\n> relevant reading at [1]).\n\nWhere do you consider the performance hit to be? I just read the code again. The only time the new code paths are hit are when a FOR EACH STATEMENT trigger fires that has a WHEN condition. Given the existing restrictions for such a scenario, that can only mean a WHEN condition that is a simple function call; so, \"SELECT foo()\" vs \"foo()\"? Or have I misunderstood?\n\nRegarding the deparse-and-reparse --- if I understand correctly, the core problem is that we have no way of going from a node tree to a string, such that the string is guaranteed to have the same meaning as the node tree? (I did try just now to produce such a scenario with the patch but I couldn't get ruleutils to emit the wrong thing). Moreover, we couldn't store the string for use with SPI, as the string would be subject to trigger-time search path lookups. That pretty much rules out SPI for this then. Do you have a suggestion for an alternative? I guess it would be go to the planner/executor directly with the node tree?\n\n> In general, users may expect that\n> once those are parsed by the accepting DDL command, they'll hold still,\n> not get re-interpreted at runtime.\n\nI agree entirely. 
I wasn't aware of the deparsing/reparsing problem.\n\n> ...\n> (Relevant to that, I wonder why this patch is only concerned with\n> WHEN clauses and not all the other places where we forbid subqueries\n> for implementation simplicity.)\n\nI don't know how many other places WHEN clauses are used. Rules, perhaps? The short answer is this patch was written to solve a specific problem I had rather than it being a more general attempt to remove all \"subquery forbidden\" restrictions.\n\n> \n> > b. If a WHEN expression is defined as \"n = (SELECT ...)\", there is the possibility that a user gets the error \"more than one row returned by a subquery used as an expression\" when performing DML, which would be rather cryptic if they didn't know there was a trigger involved. To avoid this, we could disallow scalar expressions, with a hint to use the ANY/ALL quantifiers.\n> \n> How is that any more cryptic than any other error that can occur\n> in a WHEN expression?\n\nIt isn't. What *is* different about it, is that -- AFAIK -- it is the only cryptic message that can come about due to the syntactic structure of an expression. Yes, someone could have a function that does \"RAISE ERROR 'foo'\". There's not a lot that can be done about that. But scalar subqueries are detectable and they have an obvious rewrite using the quantifiers, hence the suggestion. However, it was just that; a suggestion.\n\n-Joe\n
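One possible shape for the ANY/ALL suggestion above, purely as a sketch (none of this is in the posted patch): walk the WHEN expression at CREATE TRIGGER time and reject scalar sub-selects outright, with a hint pointing at the quantified forms.

#include "postgres.h"
#include "nodes/nodeFuncs.h"
#include "nodes/primnodes.h"

/*
 * Hypothetical check: raise an error (with a hint) if the expression
 * contains a scalar sub-select, so the "more than one row returned by a
 * subquery" error can never surface later at DML time.
 */
static bool
reject_scalar_sublink_walker(Node *node, void *context)
{
    if (node == NULL)
        return false;

    if (IsA(node, SubLink) &&
        ((SubLink *) node)->subLinkType == EXPR_SUBLINK)
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("scalar subqueries are not allowed in statement-level WHEN conditions"),
                 errhint("Rewrite the condition using an ANY or ALL quantifier.")));

    return expression_tree_walker(node, reject_scalar_sublink_walker, context);
}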
", "msg_date": "Thu, 23 Sep 2021 10:33:32 +0100", "msg_from": "\"Joe Wildish\" <joe@lateraljoin.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_Allow_queries_in_WHEN_expression_of_FOR_EACH_STATE?=\n =?UTF-8?Q?MENT_triggers?=" }, { "msg_contents": "\"Joe Wildish\" <joe@lateraljoin.com> writes:\n> On Wed, 22 Sep 2021, at 17:09, Tom Lane wrote:\n> The main change is a switch to using SPI for expression evaluation. The plans are also cached along the same lines as the RI trigger plans.\n>> \n>> I really dislike this implementation technique. Aside from the likely\n>> performance hit for existing triggers, I think it opens serious security\n>> holes, because we can't fully guarantee that deparse-and-reparse doesn't\n>> change the semantics.\n\n> Where do you consider the performance hit to be?\n\nThe deparse/reparse cost might not be negligible, and putting SPI into\nthe equation where it was not before is certainly going to add overhead.\nNow maybe those things are negligible in context, but I wouldn't believe\nit without seeing some performance numbers.\n\n> Do you have a suggestion for an alternative? I guess it would be go to the planner/executor directly with the node tree?\n\nWhat I'd be thinking about is what it'd take to extend expression_planner\nand related infrastructure to allow expressions containing SubLinks.\nI fear there are a lot of moving parts there though, since the restriction\nhas been in place so long.\n\n>> (Relevant to that, I wonder why this patch is only concerned with\n>> WHEN clauses and not all the other places where we forbid subqueries\n>> for implementation simplicity.)\n\n> I don't know how many other places WHEN clauses are used. Rules, perhaps?\n\nI'm thinking of things like CHECK constraints. 
Grepping for calls to\nexpression_planner would give you a clearer idea of the scope.\n\n> The short answer is this patch was written to solve a specific problem I had rather than it being a more general attempt to remove all \"subquery forbidden\" restrictions.\n\nI won't carp too much if the initial patch only removes the restriction\nfor WHEN; but I'd like to see it lay the groundwork to remove the\nrestriction elsewhere as well.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Sep 2021 15:53:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re:\n =?UTF-8?Q?Re:_[PATCH]_Allow_queries_in_WHEN_expression_of_FOR_EACH_STATE?=\n =?UTF-8?Q?MENT_triggers?=" }, { "msg_contents": "On Thu, Sep 23, 2021 at 5:34 AM Joe Wildish <joe@lateraljoin.com> wrote:\n> Regarding the deparse-and-reparse --- if I understand correctly, the core problem is that we have no way of going from a node tree to a string, such that the string is guaranteed to have the same meaning as the node tree? (I did try just now to produce such a scenario with the patch but I couldn't get ruleutils to emit the wrong thing). Moreover, we couldn't store the string for use with SPI, as the string would be subject to trigger-time search path lookups. That pretty much rules out SPI for this then. Do you have a suggestion for an alternative? I guess it would be go to the planner/executor directly with the node tree?\n\nI think hoping that you can ever make deparse and reparse reliably\nproduce the same result is a hopeless endeavor. Tom mentioned hazards\nrelated to ambiguous constructs, but there's also often the risk of\nconcurrent DDL. Commit 5f173040e324f6c2eebb90d86cf1b0cdb5890f0a is a\ncautionary tale, demonstrating that you can't even count on\nschema_name.table_name to resolve to the same OID for the entire\nduration of a single DDL command. The same hazard exists for\nfunctions, operators, and anything else that gets looked up in a\nsystem catalog.\n\nI don't know what all of that means for your patch, but just wanted to\nget my $0.02 in on the general topic.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Sep 2021 16:02:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow queries in WHEN expression of FOR EACH STATEMENT\n triggers" } ]
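For reference, a minimal sketch of the "go to the planner/executor directly with the node tree" route discussed in this thread, along the lines of what trigger.c already does for the subquery-free WHEN clauses it accepts today. Extending expression_planner() and the related infrastructure to cope with SubLinks is the open question raised above; none of the code below comes from the patch, and the header locations assume a recent PostgreSQL version.

#include "postgres.h"
#include "executor/executor.h"
#include "nodes/makefuncs.h"
#include "optimizer/optimizer.h"

/*
 * Sketch: evaluate an already-parsed WHEN expression through the executor,
 * without deparsing it to text or going through SPI.  The caller is assumed
 * to have set up econtext (tuple slots, per-tuple memory) beforehand.
 */
static bool
when_qual_passes(Expr *when_expr, EState *estate, ExprContext *econtext)
{
    Expr       *planned;
    List       *qual;
    ExprState  *qualstate;

    /* constant folding and other planner expression preprocessing */
    planned = expression_planner(when_expr);

    /* ExecPrepareQual expects the implicit-AND list form */
    qual = make_ands_implicit(planned);
    qualstate = ExecPrepareQual(qual, estate);

    /* true when the qual passes; a NULL result counts as false */
    return ExecQual(qualstate, econtext);
}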
[ { "msg_contents": "The attached patch allows the vacuum to continue by emitting WARNING\nfor the corrupted tuple instead of immediately error out as discussed\nat [1].\n\nBasically, it provides a new GUC called vacuum_tolerate_damage, to\ncontrol whether to continue the vacuum or to stop on the occurrence of\na corrupted tuple. So if the vacuum_tolerate_damage is set then in\nall the cases in heap_prepare_freeze_tuple where the corrupted xid is\ndetected, it will emit a warning and return that nothing is changed in\nthe tuple and the 'tuple_totally_frozen' will also be set to false.\nSince we are returning false the caller will not try to freeze such\ntuple and the tuple_totally_frozen is also set to false so that the\npage will not be marked to all frozen even if all other tuples in the\npage are frozen.\n\nAlternatively, we can try to freeze other XIDs in the tuple which is\nnot corrupted but I don't think we will gain anything from this,\nbecause if one of the xmin or xmax is wrong then next time also if we\nrun the vacuum then we are going to get the same WARNING or the ERROR.\nIs there any other opinion on this?\n\n[1] http://postgr.es/m/CA+TgmoaZwZHtFFU6NUJgEAp6adDs-qWfNOXpZGQpZMmm0VTDfg@mail.gmail.com\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 17 Jul 2020 16:16:23 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "Hi Dilip!\n\n\n> 17 июля 2020 г., в 15:46, Dilip Kumar <dilipbalaut@gmail.com> написал(а):\n> \n> The attached patch allows the vacuum to continue by emitting WARNING\n> for the corrupted tuple instead of immediately error out as discussed\n> at [1].\n> \n> Basically, it provides a new GUC called vacuum_tolerate_damage, to\n> control whether to continue the vacuum or to stop on the occurrence of\n> a corrupted tuple. So if the vacuum_tolerate_damage is set then in\n> all the cases in heap_prepare_freeze_tuple where the corrupted xid is\n> detected, it will emit a warning and return that nothing is changed in\n> the tuple and the 'tuple_totally_frozen' will also be set to false.\n> Since we are returning false the caller will not try to freeze such\n> tuple and the tuple_totally_frozen is also set to false so that the\n> page will not be marked to all frozen even if all other tuples in the\n> page are frozen.\n> \n> Alternatively, we can try to freeze other XIDs in the tuple which is\n> not corrupted but I don't think we will gain anything from this,\n> because if one of the xmin or xmax is wrong then next time also if we\n> run the vacuum then we are going to get the same WARNING or the ERROR.\n> Is there any other opinion on this?\n\nFWIW AFAIK this ERROR was the reason why we had to use older versions of heap_prepare_freeze_tuple() in our recovery kit [0].\nSo +1 from me.\nBut I do not think that just ignoring corruption here is sufficient. Soon after this freeze problem user will, probably, have to deal with absent CLOG.\nI think this GUC is only a part of an incomplete solution.\nPersonally I'd be happy if this is backported - our recovery kit would be much smaller. But this does not seem like a valid reason.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n[0] https://github.com/dsarafan/pg_dirty_hands/blob/master/src/pg_dirty_hands.c#L443\n\n\n\n\n", "msg_date": "Sun, 19 Jul 2020 16:26:54 +0500", "msg_from": "\"Andrey M. 
Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Fri, Jul 17, 2020 at 4:16 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> The attached patch allows the vacuum to continue by emitting WARNING\n> for the corrupted tuple instead of immediately error out as discussed\n> at [1].\n>\n> Basically, it provides a new GUC called vacuum_tolerate_damage, to\n> control whether to continue the vacuum or to stop on the occurrence of\n> a corrupted tuple. So if the vacuum_tolerate_damage is set then in\n> all the cases in heap_prepare_freeze_tuple where the corrupted xid is\n> detected, it will emit a warning and return that nothing is changed in\n> the tuple and the 'tuple_totally_frozen' will also be set to false.\n> Since we are returning false the caller will not try to freeze such\n> tuple and the tuple_totally_frozen is also set to false so that the\n> page will not be marked to all frozen even if all other tuples in the\n> page are frozen.\n>\n> Alternatively, we can try to freeze other XIDs in the tuple which is\n> not corrupted but I don't think we will gain anything from this,\n> because if one of the xmin or xmax is wrong then next time also if we\n> run the vacuum then we are going to get the same WARNING or the ERROR.\n> Is there any other opinion on this?\n\nRobert has mentioned at [1] that we probably should refuse to update\n'relfrozenxid/relminmxid' when we encounter such tuple and emit\nWARNING instead of an error. I think we shall do that in some cases\nbut IMHO it's not a very good idea in all the cases. Basically, if\nthe xmin precedes the relfrozenxid then probably we should allow to\nupdate the relfrozenxid whereas if the xmin precedes cutoff xid and\nstill uncommitted then probably we might stop relfrozenxid from being\nupdated so that we can stop CLOG from getting truncated. I will make\nthese changes if we agree with the idea? Or we should keep it simple\nand never allow to update 'relfrozenxid/relminmxid' in such cases?\n\n[1] http://postgr.es/m/CA+TgmoaZwZHtFFU6NUJgEAp6adDs-qWfNOXpZGQpZMmm0VTDfg@mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Jul 2020 14:55:10 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Sun, Jul 19, 2020 at 4:56 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> Hi Dilip!\n>\n>\n> > 17 июля 2020 г., в 15:46, Dilip Kumar <dilipbalaut@gmail.com> написал(а):\n> >\n> > The attached patch allows the vacuum to continue by emitting WARNING\n> > for the corrupted tuple instead of immediately error out as discussed\n> > at [1].\n> >\n> > Basically, it provides a new GUC called vacuum_tolerate_damage, to\n> > control whether to continue the vacuum or to stop on the occurrence of\n> > a corrupted tuple. 
So if the vacuum_tolerate_damage is set then in\n> > all the cases in heap_prepare_freeze_tuple where the corrupted xid is\n> > detected, it will emit a warning and return that nothing is changed in\n> > the tuple and the 'tuple_totally_frozen' will also be set to false.\n> > Since we are returning false the caller will not try to freeze such\n> > tuple and the tuple_totally_frozen is also set to false so that the\n> > page will not be marked to all frozen even if all other tuples in the\n> > page are frozen.\n> >\n> > Alternatively, we can try to freeze other XIDs in the tuple which is\n> > not corrupted but I don't think we will gain anything from this,\n> > because if one of the xmin or xmax is wrong then next time also if we\n> > run the vacuum then we are going to get the same WARNING or the ERROR.\n> > Is there any other opinion on this?\n>\n> FWIW AFAIK this ERROR was the reason why we had to use older versions of heap_prepare_freeze_tuple() in our recovery kit [0].\n> So +1 from me.\n\nThanks for showing interest in this patch.\n\n> But I do not think that just ignoring corruption here is sufficient. Soon after this freeze problem user will, probably, have to deal with absent CLOG.\n> I think this GUC is only a part of an incomplete solution.\n> Personally I'd be happy if this is backported - our recovery kit would be much smaller. But this does not seem like a valid reason.\n\nI agree that this is just solving one part of the problem and in some\ncases, it may not work if the CLOG itself is corrupted i.e does not\nexist for the xid which are not yet frozen.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Jul 2020 16:51:57 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On 2020-Jul-20, Dilip Kumar wrote:\n\n> On Fri, Jul 17, 2020 at 4:16 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> > So if the vacuum_tolerate_damage is set then in\n> > all the cases in heap_prepare_freeze_tuple where the corrupted xid is\n> > detected, it will emit a warning and return that nothing is changed in\n> > the tuple and the 'tuple_totally_frozen' will also be set to false.\n> > Since we are returning false the caller will not try to freeze such\n> > tuple and the tuple_totally_frozen is also set to false so that the\n> > page will not be marked to all frozen even if all other tuples in the\n> > page are frozen.\n\n> Robert has mentioned at [1] that we probably should refuse to update\n> 'relfrozenxid/relminmxid' when we encounter such tuple and emit\n> WARNING instead of an error.\n\nIsn't this already happening per your description above?\n\n> I think we shall do that in some cases\n> but IMHO it's not a very good idea in all the cases. Basically, if\n> the xmin precedes the relfrozenxid then probably we should allow to\n> update the relfrozenxid whereas if the xmin precedes cutoff xid and\n> still uncommitted then probably we might stop relfrozenxid from being\n> updated so that we can stop CLOG from getting truncated.\n\nI'm not sure I understand 100% what you're talking about here (the first\nhalf seems dangerous unless you misspoke), but in any case it seems a\npointless optimization. 
I mean, if the heap is corrupted, you can hope\nto complete the vacuum (which will hopefully return which *other* tuples\nare similarly corrupt) but trying to advance relfrozenxid is a lost\ncause.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 20 Jul 2020 12:44:39 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "\n\n> 20 июля 2020 г., в 21:44, Alvaro Herrera <alvherre@2ndquadrant.com> написал(а):\n> \n>> I think we shall do that in some cases\n>> but IMHO it's not a very good idea in all the cases. Basically, if\n>> the xmin precedes the relfrozenxid then probably we should allow to\n>> update the relfrozenxid whereas if the xmin precedes cutoff xid and\n>> still uncommitted then probably we might stop relfrozenxid from being\n>> updated so that we can stop CLOG from getting truncated.\n> \n> I'm not sure I understand 100% what you're talking about here (the first\n> half seems dangerous unless you misspoke), but in any case it seems a\n> pointless optimization. I mean, if the heap is corrupted, you can hope\n> to complete the vacuum (which will hopefully return which *other* tuples\n> are similarly corrupt) but trying to advance relfrozenxid is a lost\n> cause.\n\nI think the point here is to actually move relfrozenxid back. But the mince can't be turned back. If CLOG is rotated - the table is corrupted beyond easy repair.\n\nI'm not sure it's Dilip's case, but I'll try to describe what I was encountering.\n\nWe were observing this kind of corruption in three cases:\n1. With a bug in patched Linux kernel page cache we could loose FS page write\n2. With a bug in WAL-G block-level incremental backup - we could loose update of the page.\n3. With a firmware bug in SSD drives from one vendor - one write to block storage device was lost\nOne page in a database is of some non-latest version (but with correct checksum, it's just an old version). And in our case usually a VACUUMing of a page was lost (with freezes of all tuples). Some tuples are not marked as frozen, while VM has frozen bit for page. Everything works just fine until someone updates a tuple on the same page: VM bit is reset and eventually user will try to consult CLOG, which is already truncated.\n\nThis is why we may need to defer CLOG truncation or even move relfrozenxid back.\n\nFWIW we coped with this by actively monitoring this kind of corruption with this amcheck patch [0]. One can observe this lost page updates cheaply in indexes and act on first sight of corruption: identify source of the buggy behaviour.\n\nDilip, does this sound like a corruption case you are working on?\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/24/2254/\n\n", "msg_date": "Mon, 20 Jul 2020 23:37:24 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On 2020-Jul-20, Andrey M. Borodin wrote:\n\n> I think the point here is to actually move relfrozenxid back. But the\n> mince can't be turned back. If CLOG is rotated - the table is\n> corrupted beyond easy repair.\n\nOh, I see. Hmm. 
Well, if you discover relfrozenxid that's newer and\nthe pg_clog files are still there, then yeah it might make sense to move\nrelfrozenxid back. But it seems difficult to do correctly ... you have\nto move datfrozenxid back too ... frankly, I'd rather not go there.\n\n> I'm not sure it's Dilip's case, but I'll try to describe what I was encountering.\n> \n> We were observing this kind of corruption in three cases:\n> 1. With a bug in patched Linux kernel page cache we could loose FS page write\n\nI think I've seen this too. (Or possibly your #3, which from Postgres\nPOV is the same thing -- a write was claimed done but actually not\ndone.)\n\n> FWIW we coped with this by actively monitoring this kind of corruption\n> with this amcheck patch [0]. One can observe this lost page updates\n> cheaply in indexes and act on first sight of corruption: identify\n> source of the buggy behaviour.\n\nRight.\n\nI wish we had some way to better protect against this kind of problems,\nbut I don't have any ideas. Some things can be protected against with\nchecksums, but if you just lose a write, there's nothing to indicate\nthat. We don't have a per-page write counter, or a central repository\nof per-page LSNs or checksums, and it seems very expensive to maintain\nsuch things.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 20 Jul 2020 15:36:06 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "Hi,\n\nOn 2020-07-17 16:16:23 +0530, Dilip Kumar wrote:\n> The attached patch allows the vacuum to continue by emitting WARNING\n> for the corrupted tuple instead of immediately error out as discussed\n> at [1].\n> \n> Basically, it provides a new GUC called vacuum_tolerate_damage, to\n> control whether to continue the vacuum or to stop on the occurrence of\n> a corrupted tuple. So if the vacuum_tolerate_damage is set then in\n> all the cases in heap_prepare_freeze_tuple where the corrupted xid is\n> detected, it will emit a warning and return that nothing is changed in\n> the tuple and the 'tuple_totally_frozen' will also be set to false.\n> Since we are returning false the caller will not try to freeze such\n> tuple and the tuple_totally_frozen is also set to false so that the\n> page will not be marked to all frozen even if all other tuples in the\n> page are frozen.\n\nI'm extremely doubtful this is a good idea. In all likelihood this will\njust exascerbate corruption.\n\nYou cannot just stop freezing tuples, that'll lead to relfrozenxid\ngetting *further* out of sync with the actual table contents. And you\ncannot just freeze such tuples, because that has a good chance of making\ndeleted tuples suddenly visible, leading to unique constraint violations\netc. Which will then subsequently lead to clog lookup errors and such.\n\nAt the very least you'd need to signal up that relfrozenxid/relminmxid\ncannot be increased. Without that I think it's entirely unacceptable to\ndo this.\n\n\nIf we really were to do something like this the option would need to be\ncalled vacuum_allow_making_corruption_worse or such. 
Its need to be\n*exceedingly* clear that it will likely lead to making everything much\nworse.\n\n\n> @@ -6123,6 +6124,8 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple,\n> \tfrz->t_infomask = tuple->t_infomask;\n> \tfrz->xmax = HeapTupleHeaderGetRawXmax(tuple);\n\nI don't think it can be right to just update heap_prepare_freeze_tuple()\nbut not FreezeMultiXactId().\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 20 Jul 2020 13:30:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "\n\n> 21 июля 2020 г., в 00:36, Alvaro Herrera <alvherre@2ndquadrant.com> написал(а):\n> \n> \n>> FWIW we coped with this by actively monitoring this kind of corruption\n>> with this amcheck patch [0]. One can observe this lost page updates\n>> cheaply in indexes and act on first sight of corruption: identify\n>> source of the buggy behaviour.\n> \n> Right.\n> \n> I wish we had some way to better protect against this kind of problems,\n> but I don't have any ideas. Some things can be protected against with\n> checksums, but if you just lose a write, there's nothing to indicate\n> that. We don't have a per-page write counter, or a central repository\n> of per-page LSNs or checksums, and it seems very expensive to maintain\n> such things.\n\nIf we had data checksums in another fork we could flush them on checkpoint.\nThis checksums could protect from lost page update.\nAnd it would be much easier to maintain these checksums for SLRUs.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 21 Jul 2020 07:53:59 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Mon, Jul 20, 2020 at 10:14 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Jul-20, Dilip Kumar wrote:\n>\n> > On Fri, Jul 17, 2020 at 4:16 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > > So if the vacuum_tolerate_damage is set then in\n> > > all the cases in heap_prepare_freeze_tuple where the corrupted xid is\n> > > detected, it will emit a warning and return that nothing is changed in\n> > > the tuple and the 'tuple_totally_frozen' will also be set to false.\n> > > Since we are returning false the caller will not try to freeze such\n> > > tuple and the tuple_totally_frozen is also set to false so that the\n> > > page will not be marked to all frozen even if all other tuples in the\n> > > page are frozen.\n>\n> > Robert has mentioned at [1] that we probably should refuse to update\n> > 'relfrozenxid/relminmxid' when we encounter such tuple and emit\n> > WARNING instead of an error.\n>\n> Isn't this already happening per your description above?\n\nAs per the above description, we are avoiding to set the page as all\nfrozen. But the vacrelstats->scanned_pages count has already been\nincreased for this page. Now, right after the lazy_scan_heap, we\nwill update the pg_class tuple with the new FreezeLimit and\nMultiXactCutoff.\n\n>\n> > I think we shall do that in some cases\n> > but IMHO it's not a very good idea in all the cases. 
Basically, if\n> > the xmin precedes the relfrozenxid then probably we should allow to\n> > update the relfrozenxid whereas if the xmin precedes cutoff xid and\n> > still uncommitted then probably we might stop relfrozenxid from being\n> > updated so that we can stop CLOG from getting truncated.\n>\n> I'm not sure I understand 100% what you're talking about here (the first\n> half seems dangerous unless you misspoke), but in any case it seems a\n> pointless optimization. I mean, if the heap is corrupted, you can hope\n> to complete the vacuum (which will hopefully return which *other* tuples\n> are similarly corrupt) but trying to advance relfrozenxid is a lost\n> cause.\n\nI agree with your point. I think we just need to avoid advancing the\nrelfrozenxid in all such cases.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Jul 2020 08:56:33 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Tue, Jul 21, 2020 at 2:00 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-07-17 16:16:23 +0530, Dilip Kumar wrote:\n> > The attached patch allows the vacuum to continue by emitting WARNING\n> > for the corrupted tuple instead of immediately error out as discussed\n> > at [1].\n> >\n> > Basically, it provides a new GUC called vacuum_tolerate_damage, to\n> > control whether to continue the vacuum or to stop on the occurrence of\n> > a corrupted tuple. So if the vacuum_tolerate_damage is set then in\n> > all the cases in heap_prepare_freeze_tuple where the corrupted xid is\n> > detected, it will emit a warning and return that nothing is changed in\n> > the tuple and the 'tuple_totally_frozen' will also be set to false.\n> > Since we are returning false the caller will not try to freeze such\n> > tuple and the tuple_totally_frozen is also set to false so that the\n> > page will not be marked to all frozen even if all other tuples in the\n> > page are frozen.\n>\n> I'm extremely doubtful this is a good idea. In all likelihood this will\n> just exascerbate corruption.\n>\n> You cannot just stop freezing tuples, that'll lead to relfrozenxid\n> getting *further* out of sync with the actual table contents. And you\n> cannot just freeze such tuples, because that has a good chance of making\n> deleted tuples suddenly visible, leading to unique constraint violations\n> etc. Which will then subsequently lead to clog lookup errors and such.\n\nI agree with the point. But, if we keep giving the ERROR in such\ncases then also the situation is not any better. Basically, we are\nnot freezing such tuple as well as we can not advance the\nrelfrozenxid. So if we follow the same rule that we don't freeze\nthose tuples and also don't advance the relfrozenxid. The only\ndifference is, allow the vacuum to continue with other tuples.\n\n> At the very least you'd need to signal up that relfrozenxid/relminmxid\n> cannot be increased. Without that I think it's entirely unacceptable to\n> do this.\n\nI agree with that point. I was just confused that shall we disallow\nto advance the relfrozenxid in all such cases or in some cases where\nthe xid already precedes the relfrozenxid, we can allow it to advance\nas it can not become any worse. But, as Alvaro pointed out that there\nis no point in optimizing such cases. 
I will update the patch to\nstop advancing the relfrozenxid if we find any corrupted xid during\ntuple freeze.\n\n> If we really were to do something like this the option would need to be\n> called vacuum_allow_making_corruption_worse or such. Its need to be\n> *exceedingly* clear that it will likely lead to making everything much\n> worse.\n>\nMaybe we can clearly describe this in the document.\n\n> > @@ -6123,6 +6124,8 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple,\n> > frz->t_infomask = tuple->t_infomask;\n> > frz->xmax = HeapTupleHeaderGetRawXmax(tuple);\n>\n> I don't think it can be right to just update heap_prepare_freeze_tuple()\n> but not FreezeMultiXactId().\n\noh, I missed this part. I will fix it.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Jul 2020 11:00:10 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Tue, Jul 21, 2020 at 11:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Jul 21, 2020 at 2:00 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2020-07-17 16:16:23 +0530, Dilip Kumar wrote:\n> > > The attached patch allows the vacuum to continue by emitting WARNING\n> > > for the corrupted tuple instead of immediately error out as discussed\n> > > at [1].\n> > >\n> > > Basically, it provides a new GUC called vacuum_tolerate_damage, to\n> > > control whether to continue the vacuum or to stop on the occurrence of\n> > > a corrupted tuple. So if the vacuum_tolerate_damage is set then in\n> > > all the cases in heap_prepare_freeze_tuple where the corrupted xid is\n> > > detected, it will emit a warning and return that nothing is changed in\n> > > the tuple and the 'tuple_totally_frozen' will also be set to false.\n> > > Since we are returning false the caller will not try to freeze such\n> > > tuple and the tuple_totally_frozen is also set to false so that the\n> > > page will not be marked to all frozen even if all other tuples in the\n> > > page are frozen.\n> >\n> > I'm extremely doubtful this is a good idea. In all likelihood this will\n> > just exascerbate corruption.\n> >\n> > You cannot just stop freezing tuples, that'll lead to relfrozenxid\n> > getting *further* out of sync with the actual table contents. And you\n> > cannot just freeze such tuples, because that has a good chance of making\n> > deleted tuples suddenly visible, leading to unique constraint violations\n> > etc. Which will then subsequently lead to clog lookup errors and such.\n>\n> I agree with the point. But, if we keep giving the ERROR in such\n> cases then also the situation is not any better. Basically, we are\n> not freezing such tuple as well as we can not advance the\n> relfrozenxid. So if we follow the same rule that we don't freeze\n> those tuples and also don't advance the relfrozenxid. The only\n> difference is, allow the vacuum to continue with other tuples.\n>\n> > At the very least you'd need to signal up that relfrozenxid/relminmxid\n> > cannot be increased. Without that I think it's entirely unacceptable to\n> > do this.\n>\n> I agree with that point. I was just confused that shall we disallow\n> to advance the relfrozenxid in all such cases or in some cases where\n> the xid already precedes the relfrozenxid, we can allow it to advance\n> as it can not become any worse. But, as Alvaro pointed out that there\n> is no point in optimizing such cases. 
I will update the patch to\n> stop advancing the relfrozenxid if we find any corrupted xid during\n> tuple freeze.\n>\n> > If we really were to do something like this the option would need to be\n> > called vacuum_allow_making_corruption_worse or such. Its need to be\n> > *exceedingly* clear that it will likely lead to making everything much\n> > worse.\n> >\n> Maybe we can clearly describe this in the document.\n>\n> > > @@ -6123,6 +6124,8 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple,\n> > > frz->t_infomask = tuple->t_infomask;\n> > > frz->xmax = HeapTupleHeaderGetRawXmax(tuple);\n> >\n> > I don't think it can be right to just update heap_prepare_freeze_tuple()\n> > but not FreezeMultiXactId().\n>\n> oh, I missed this part. I will fix it.\n\nPlease find the updated patch. In this version, we don't allow the\nrelfrozenxid and relminmxid to advance if the corruption is detected\nand also added the handling in FreezeMultiXactId.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 21 Jul 2020 16:08:36 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Tue, Jul 21, 2020 at 4:08 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Jul 21, 2020 at 11:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Jul 21, 2020 at 2:00 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2020-07-17 16:16:23 +0530, Dilip Kumar wrote:\n> > > > The attached patch allows the vacuum to continue by emitting WARNING\n> > > > for the corrupted tuple instead of immediately error out as discussed\n> > > > at [1].\n> > > >\n> > > > Basically, it provides a new GUC called vacuum_tolerate_damage, to\n> > > > control whether to continue the vacuum or to stop on the occurrence of\n> > > > a corrupted tuple. So if the vacuum_tolerate_damage is set then in\n> > > > all the cases in heap_prepare_freeze_tuple where the corrupted xid is\n> > > > detected, it will emit a warning and return that nothing is changed in\n> > > > the tuple and the 'tuple_totally_frozen' will also be set to false.\n> > > > Since we are returning false the caller will not try to freeze such\n> > > > tuple and the tuple_totally_frozen is also set to false so that the\n> > > > page will not be marked to all frozen even if all other tuples in the\n> > > > page are frozen.\n> > >\n> > > I'm extremely doubtful this is a good idea. In all likelihood this will\n> > > just exascerbate corruption.\n> > >\n> > > You cannot just stop freezing tuples, that'll lead to relfrozenxid\n> > > getting *further* out of sync with the actual table contents. And you\n> > > cannot just freeze such tuples, because that has a good chance of making\n> > > deleted tuples suddenly visible, leading to unique constraint violations\n> > > etc. Which will then subsequently lead to clog lookup errors and such.\n> >\n> > I agree with the point. But, if we keep giving the ERROR in such\n> > cases then also the situation is not any better. Basically, we are\n> > not freezing such tuple as well as we can not advance the\n> > relfrozenxid. So if we follow the same rule that we don't freeze\n> > those tuples and also don't advance the relfrozenxid. The only\n> > difference is, allow the vacuum to continue with other tuples.\n> >\n> > > At the very least you'd need to signal up that relfrozenxid/relminmxid\n> > > cannot be increased. 
Without that I think it's entirely unacceptable to\n> > > do this.\n> >\n> > I agree with that point. I was just confused that shall we disallow\n> > to advance the relfrozenxid in all such cases or in some cases where\n> > the xid already precedes the relfrozenxid, we can allow it to advance\n> > as it can not become any worse. But, as Alvaro pointed out that there\n> > is no point in optimizing such cases. I will update the patch to\n> > stop advancing the relfrozenxid if we find any corrupted xid during\n> > tuple freeze.\n> >\n> > > If we really were to do something like this the option would need to be\n> > > called vacuum_allow_making_corruption_worse or such. Its need to be\n> > > *exceedingly* clear that it will likely lead to making everything much\n> > > worse.\n> > >\n> > Maybe we can clearly describe this in the document.\n> >\n> > > > @@ -6123,6 +6124,8 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple,\n> > > > frz->t_infomask = tuple->t_infomask;\n> > > > frz->xmax = HeapTupleHeaderGetRawXmax(tuple);\n> > >\n> > > I don't think it can be right to just update heap_prepare_freeze_tuple()\n> > > but not FreezeMultiXactId().\n> >\n> > oh, I missed this part. I will fix it.\n>\n> Please find the updated patch. In this version, we don't allow the\n> relfrozenxid and relminmxid to advance if the corruption is detected\n> and also added the handling in FreezeMultiXactId.\n\nIn the previous version, the feature was enabled for cluster/vacuum\nfull command as well. in the attached patch I have enabled it only\nif we are running vacuum command. It will not be enabled during a\ntable rewrite. If we think that it should be enabled for the 'vacuum\nfull' then we might need to pass a flag from the cluster_rel, all the\nway down to the heap_freeze_tuple.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 21 Jul 2020 18:51:17 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Mon, Jul 20, 2020 at 4:30 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm extremely doubtful this is a good idea. In all likelihood this will\n> just exascerbate corruption.\n>\n> You cannot just stop freezing tuples, that'll lead to relfrozenxid\n> getting *further* out of sync with the actual table contents. And you\n> cannot just freeze such tuples, because that has a good chance of making\n> deleted tuples suddenly visible, leading to unique constraint violations\n> etc. Which will then subsequently lead to clog lookup errors and such.\n\nI think that the behavior ought to be:\n\n- If we encounter any damaged tuples (e.g. tuple xid < relfrozenxid),\nwe give up on advancing relfrozenxid and relminmxid. This vacuum won't\nchange them at all.\n\n- We do nothing to the damaged tuples themselves.\n\n- We can still prune pages, and we can still freeze tuples that do not\nappear to be damaged.\n\nThis amounts to an assumption that relfrozenxid is probably sane, and\nthat there are individual tuples that are messed up. It's probably not\nthe right thing if relfrozenxid got overwritten with a nonsense value\nwithout changing the table contents. But, I think it's difficult to\ncater to all contingencies. 
In my experience, the normal problem here\nis that there are a few tuples or pages in the table that somehow\nescaped vacuuming for long enough that they contain references to XIDs\nfrom before the last time relfrozenxid was advanced - so continuing to\ndo what we can to the rest of the table is the right thing to do.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 3 Aug 2020 10:20:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Tue, Jul 21, 2020 at 9:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> In the previous version, the feature was enabled for cluster/vacuum\n> full command as well. in the attached patch I have enabled it only\n> if we are running vacuum command. It will not be enabled during a\n> table rewrite. If we think that it should be enabled for the 'vacuum\n> full' then we might need to pass a flag from the cluster_rel, all the\n> way down to the heap_freeze_tuple.\n\nI think this is a somewhat clunky way of accomplishing this. The\ncaller passes down a flag to heap_prepare_freeze_tuple() which decides\nwhether or not an error is forced, and then that function and\nFreezeMultiXactId use vacuum_damage_elevel() to combine the results of\nthat flag with the value of the vacuum_tolerate_damage GUC. But that\nmeans that a decision that could be made in one place is instead made\nin many places. If we just had heap_freeze_tuple() and\nFreezeMultiXactId() take an argument int vacuum_damage_elevel, then\nheap_freeze_tuple() could pass ERROR and lazy_scan_heap() could\narrange to pass WARNING or ERROR based on the value of\nvacuum_tolerate_damage. I think that would likely end up cleaner. What\ndo you think?\n\nI also suggest renaming is_corrupted_xid to found_corruption. With the\ncurrent name, it's not very clear which XID we're saying is corrupted;\nin fact, the problem might be a MultiXactId rather than an XID, and\nthe real issue might be with the table's pg_class entry or something.\n\nThe new arguments to heap_prepare_freeze_tuple() need to be documented\nin its header comment.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 28 Aug 2020 12:19:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Mon, Jul 20, 2020 at 4:30 PM Andres Freund <andres@anarazel.de> wrote:\n> If we really were to do something like this the option would need to be\n> called vacuum_allow_making_corruption_worse or such. Its need to be\n> *exceedingly* clear that it will likely lead to making everything much\n> worse.\n\nI don't really understand this objection. How does letting VACUUM\ncontinue after problems have been detected make anything worse? I\nagree that if it does, it shouldn't touch relfrozenxid or relminmxid,\nbut the patch has been adjusted to work that way. Assuming you don't\ntouch relfrozenxid or relminmxid, what harm befalls if you continue\nfreezing undamaged tuples and continue removing dead tuples after\nfinding a bad tuple? You may have already done an arbitrary amount of\nthat before encountering the damage, and doing it afterward is no\ndifferent. 
Doing the index vacuuming step is different, but I don't\nsee how that would exacerbate corruption either.\n\nThe point is that when you make VACUUM fail, you not only don't\nadvance relfrozenxid/relminmxid, but also don't remove dead tuples. In\nthe long run, either thing will kill you, but it is not difficult to\nhave a situation where failing to remove dead tuples kills you a lot\nfaster. The table can just bloat until performance tanks, and then the\napplication goes down, even if you still had 100+ million XIDs before\nyou needed a wraparound vacuum.\n\nHonestly, I wonder why continuing (but without advancing relfrozenxid\nor relminmxid) shouldn't be the default behavior. I mean, if it\nactually corrupts your data, then it clearly shouldn't be, and\nprobably shouldn't even be an optional behavior, but I still don't see\nwhy it would do that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 28 Aug 2020 12:37:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "Hi,\n\nOn 2020-08-28 12:37:17 -0400, Robert Haas wrote:\n> On Mon, Jul 20, 2020 at 4:30 PM Andres Freund <andres@anarazel.de> wrote:\n> > If we really were to do something like this the option would need to be\n> > called vacuum_allow_making_corruption_worse or such. Its need to be\n> > *exceedingly* clear that it will likely lead to making everything much\n> > worse.\n> \n> I don't really understand this objection. How does letting VACUUM\n> continue after problems have been detected make anything worse?\n\nIt can break HOT chains, plain ctid chains etc, for example. Which, if\nearlier / follower tuples are removed can't be detected anymore at a\nlater time.\n\n\n> The point is that when you make VACUUM fail, you not only don't\n> advance relfrozenxid/relminmxid, but also don't remove dead tuples. In\n> the long run, either thing will kill you, but it is not difficult to\n> have a situation where failing to remove dead tuples kills you a lot\n> faster. The table can just bloat until performance tanks, and then the\n> application goes down, even if you still had 100+ million XIDs before\n> you needed a wraparound vacuum.\n> \n> Honestly, I wonder why continuing (but without advancing relfrozenxid\n> or relminmxid) shouldn't be the default behavior. I mean, if it\n> actually corrupts your data, then it clearly shouldn't be, and\n> probably shouldn't even be an optional behavior, but I still don't see\n> why it would do that.\n\nI think it's an EXTREMELY bad idea to enable anything like this by\ndefault. It'll make bugs entirely undiagnosable, because we'll remove a\nlot of the evidence of what the problem is. And we've had many long\nstanding bugs in this area, several only found because we actually\nstarted to bleat about them. And quite evidently, we have more bugs to\nfix in the area.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 28 Aug 2020 10:29:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Fri, Aug 28, 2020 at 1:29 PM Andres Freund <andres@anarazel.de> wrote:\n> It can break HOT chains, plain ctid chains etc, for example. 
Which, if\n> earlier / follower tuples are removed can't be detected anymore at a\n> later time.\n\nI think I need a more specific example here to understand the problem.\nIf the xmax of one tuple matches the xmin of the next, then either\nboth values precede relfrozenxid or both follow it. In the former\ncase, neither tuple should be frozen and the chain should not get\nbroken; in the latter case, everything's normal anyway. If the xmax\nand xmin don't match, then the chain was already broken. Perhaps we\nare removing important evidence, though it seems like that might've\nhappened anyway prior to reaching the damaged page, but we're not\nmaking whatever corruption may exist any worse. At least, not as far\nas I can see.\n\n> And we've had many long\n> standing bugs in this area, several only found because we actually\n> started to bleat about them. And quite evidently, we have more bugs to\n> fix in the area.\n\nI agree with all of this, but I do not think that it establishes that\nwe should abandon the entire VACUUM. \"Bleating\" about something\nusually means logging it, and I think you understand that I am not now\nnor have I ever complained about the logging we are doing here. I also\nthink you understand why I don't like the current behavior, and that\nEDB has actual customers who have actually been damaged by it. All the\nsame, I don't expect to win an argument about changing the default,\nbut I hope to win one about at least providing an option. And if we're\nnot even going to do that much, then I hope to come out of this\ndiscussion with a clear understanding of exactly why that's a bad\nidea. I don't think \"we need the data for forensics\" is a sufficient\njustification for \"if you end up with one corrupted XID in a\nbillion-row table, your entire table will bloat out the wazoo, and\nthere is no option to get any other behavior.\"\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 28 Aug 2020 16:15:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Fri, Aug 28, 2020 at 9:49 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jul 21, 2020 at 9:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > In the previous version, the feature was enabled for cluster/vacuum\n> > full command as well. in the attached patch I have enabled it only\n> > if we are running vacuum command. It will not be enabled during a\n> > table rewrite. If we think that it should be enabled for the 'vacuum\n> > full' then we might need to pass a flag from the cluster_rel, all the\n> > way down to the heap_freeze_tuple.\n>\n> I think this is a somewhat clunky way of accomplishing this. The\n> caller passes down a flag to heap_prepare_freeze_tuple() which decides\n> whether or not an error is forced, and then that function and\n> FreezeMultiXactId use vacuum_damage_elevel() to combine the results of\n> that flag with the value of the vacuum_tolerate_damage GUC. But that\n> means that a decision that could be made in one place is instead made\n> in many places. If we just had heap_freeze_tuple() and\n> FreezeMultiXactId() take an argument int vacuum_damage_elevel, then\n> heap_freeze_tuple() could pass ERROR and lazy_scan_heap() could\n> arrange to pass WARNING or ERROR based on the value of\n> vacuum_tolerate_damage. I think that would likely end up cleaner. 
What\n> do you think?\n\nI agree this way it is much more cleaner. I have changed in the attached patch.\n\n> I also suggest renaming is_corrupted_xid to found_corruption. With the\n> current name, it's not very clear which XID we're saying is corrupted;\n> in fact, the problem might be a MultiXactId rather than an XID, and\n> the real issue might be with the table's pg_class entry or something.\n\nOkay, changed to found_corruption.\n\n> The new arguments to heap_prepare_freeze_tuple() need to be documented\n> in its header comment.\n\nDone.\n\nI have also done a few more cosmetic changes to the patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 29 Aug 2020 13:38:59 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Sat, Aug 29, 2020 at 1:46 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Aug 28, 2020 at 1:29 PM Andres Freund <andres@anarazel.de> wrote:\n> > It can break HOT chains, plain ctid chains etc, for example. Which, if\n> > earlier / follower tuples are removed can't be detected anymore at a\n> > later time.\n>\n> I think I need a more specific example here to understand the problem.\n> If the xmax of one tuple matches the xmin of the next, then either\n> both values precede relfrozenxid or both follow it. In the former\n> case, neither tuple should be frozen and the chain should not get\n> broken; in the latter case, everything's normal anyway. If the xmax\n> and xmin don't match, then the chain was already broken. Perhaps we\n> are removing important evidence, though it seems like that might've\n> happened anyway prior to reaching the damaged page, but we're not\n> making whatever corruption may exist any worse. At least, not as far\n> as I can see.\n\nOne example is, suppose during vacuum, there are 2 tuples in the hot\nchain, and the xmin of the first tuple is corrupted (i.e. smaller\nthan relfrozenxid). And the xmax of this tuple (which is same as the\nxmin of the second tuple) is smaller than the cutoff_xid while trying\nto freeze the tuple. As a result, it will freeze the second tuple but\nthe first tuple will be left untouched.\n\nNow, if we come for the heap_hot_search_buffer, then the xmax of the\nfirst tuple will not match the xmin of the second tuple as we have\nfrozen the second tuple. But, I feel this is easily fixable right? I\nmean instead of not doing anything to the corrupted tuple we can\npartially freeze it? I mean we can just leave the corrupted xid alone\nbut mark the other xid as frozen if that is smaller then cutoff_xid.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 29 Aug 2020 14:06:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Sat, Aug 29, 2020 at 4:36 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> One example is, suppose during vacuum, there are 2 tuples in the hot\n> chain, and the xmin of the first tuple is corrupted (i.e. smaller\n> than relfrozenxid). And the xmax of this tuple (which is same as the\n> xmin of the second tuple) is smaller than the cutoff_xid while trying\n> to freeze the tuple. 
As a result, it will freeze the second tuple but\n> the first tuple will be left untouched.\n>\n> Now, if we come for the heap_hot_search_buffer, then the xmax of the\n> first tuple will not match the xmin of the second tuple as we have\n> frozen the second tuple. But, I feel this is easily fixable right? I\n> mean instead of not doing anything to the corrupted tuple we can\n> partially freeze it? I mean we can just leave the corrupted xid alone\n> but mark the other xid as frozen if that is smaller then cutoff_xid.\n\nThat seems reasonable to me. Andres, what do you think?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 14 Sep 2020 13:26:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "Hi,\n\nOn 2020-09-14 13:26:27 -0400, Robert Haas wrote:\n> On Sat, Aug 29, 2020 at 4:36 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > One example is, suppose during vacuum, there are 2 tuples in the hot\n> > chain, and the xmin of the first tuple is corrupted (i.e. smaller\n> > than relfrozenxid). And the xmax of this tuple (which is same as the\n> > xmin of the second tuple) is smaller than the cutoff_xid while trying\n> > to freeze the tuple. As a result, it will freeze the second tuple but\n> > the first tuple will be left untouched.\n> >\n> > Now, if we come for the heap_hot_search_buffer, then the xmax of the\n> > first tuple will not match the xmin of the second tuple as we have\n> > frozen the second tuple. But, I feel this is easily fixable right? I\n> > mean instead of not doing anything to the corrupted tuple we can\n> > partially freeze it? I mean we can just leave the corrupted xid alone\n> > but mark the other xid as frozen if that is smaller then cutoff_xid.\n> \n> That seems reasonable to me. Andres, what do you think?\n\nIt seems pretty dangerous to me. What exactly are you going to put into\nxmin/xmax here? And how would anything you put into the first tuple not\nbreak index lookups? There's no such thing as a frozen xmax (so far), so\nwhat are you going to put in there? A random different xid?\nFrozenTransactionId? HEAP_XMAX_INVALID?\n\nThis whole approach just seems likely to exascerbate corruption while\nalso making it impossible to debug. That's ok enough if it's an explicit\nuser action, but doing it based on a config variable setting seems\nabsurdly dangerous to me.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Sep 2020 11:39:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On 2020-Sep-14, Andres Freund wrote:\n\n> It seems pretty dangerous to me. What exactly are you going to put into\n> xmin/xmax here? And how would anything you put into the first tuple not\n> break index lookups? There's no such thing as a frozen xmax (so far), so\n> what are you going to put in there? A random different xid?\n> FrozenTransactionId? HEAP_XMAX_INVALID?\n> \n> This whole approach just seems likely to exascerbate corruption while\n> also making it impossible to debug. That's ok enough if it's an explicit\n> user action, but doing it based on a config variable setting seems\n> absurdly dangerous to me.\n\nFWIW I agree with Andres' stance on this. 
The current system is *very*\ncomplicated and bugs are obscure already.  If we hide them, what we'll\nbe getting is a system where data can become corrupted for no apparent\nreason.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 14 Sep 2020 16:00:18 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Mon, Sep 14, 2020 at 3:00 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> FWIW I agree with Andres' stance on this. The current system is *very*\n> complicated and bugs are obscure already. If we hide them, what we'll\n> be getting is a system where data can become corrupted for no apparent\n> reason.\n\nI think I might have to give up on this proposal given the level of\nopposition to it, but the nature of the opposition doesn't make any\nsense to me on a technical level. Suppose a tuple with tid A has been\nupdated, producing a new version at tid B. The argument that is now\nbeing offered is that if A has been found to be corrupt then we'd\nbetter stop vacuuming the table altogether lest we reach B and vacuum\nit too, further corrupting the table and destroying forensic evidence.\nBut even ignoring the fact that many users want to get the database\nrunning again more than they want to do forensics, it's entirely\npossible that B < A, in which case the damage has already been done.\nTherefore, I can't see any argument that this patch creates any\nscenario that can't happen already. It seems entirely reasonable to me\nto say, as a review comment, hey, you haven't sufficiently considered\nthis particular scenario, that still needs work. But the argument here\nis much more about whether this is a reasonable thing to do in general\nand under any circumstances, and it feels to me like you guys are\nsaying \"no\" without offering any really convincing evidence that there\nare unfixable problems here. IOW, I agree that having a GUC\ncorrupt_my_tables_more=true is not a reasonable thing, but I disagree\nthat the proposal on the table is tantamount to that.\n\nThe big picture here is that people have terabyte-scale tables, 1 or 2\ntuples get corrupted, and right now the only real fix is to dump and\nrestore the whole table, which leads to prolonged downtime. The\npg_surgery stuff should help with that, and the work to make VACUUM\nreport the exact TID will also help, and if we can get the heapcheck\nstuff Mark Dilger is working on committed, that will provide an\nalternative and probably better way of finding this kind of\ncorruption, which is all to the good. However, I disagree with the\nidea that a typical user who has a 2TB with one corrupted tuple on\npage 0 probably wants VACUUM to fail over and over again, letting the\ntable bloat like crazy, instead of bleating loudly but still vacuuming\nthe other 0.999999% of the table. I mean, somebody probably wants\nthat, and that's fine. But I have a hard time imagining it as a\ntypical view. 
Am I just lacking in imagination?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 14 Sep 2020 15:50:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "Hi,\n\nOn 2020-09-14 15:50:49 -0400, Robert Haas wrote:\n> On Mon, Sep 14, 2020 at 3:00 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > FWIW I agree with Andres' stance on this. The current system is *very*\n> > complicated and bugs are obscure already. If we hide them, what we'll\n> > be getting is a system where data can become corrupted for no apparent\n> > reason.\n> \n> I think I might have to give up on this proposal given the level of\n> opposition to it, but the nature of the opposition doesn't make any\n> sense to me on a technical level. Suppose a tuple with tid A has been\n> updated, producing a new version at tid B. The argument that is now\n> being offered is that if A has been found to be corrupt then we'd\n> better stop vacuuming the table altogether lest we reach B and vacuum\n> it too, further corrupting the table and destroying forensic evidence.\n> But even ignoring the fact that many users want to get the database\n> running again more than they want to do forensics, it's entirely\n> possible that B < A, in which case the damage has already been done.\n\nMy understanding of the case we're discussing is that it's corruption\n(e.g. relfrozenxid being different than table contents) affecting a HOT\nchain. I.e. by definition all within a single page. We won't have\nmodified part of it independent of B < A, because freezing is\nall-or-nothing. Just breaking the HOT chain into two or something like\nthat will just make things worse, because indexes won't find tuples, and\nbecause reindexing might then get confused e.g. by HOT chains without a\nvalid start, or by having two visible tuples for the same PK.\n\n\n> But even ignoring the fact that many users want to get the database\n> running again more than they want to do forensics\n\nThe user isn't always right. And I am not objecting against providing a\ntool to get things running. I'm objecting to VACUUM doing so, especially\nwhen it's a system wide config option triggering that behaviour.\n\n\n> Therefore, I can't see any argument that this patch creates any\n> scenario that can't happen already. It seems entirely reasonable to me\n> to say, as a review comment, hey, you haven't sufficiently considered\n> this particular scenario, that still needs work. But the argument here\n> is much more about whether this is a reasonable thing to do in general\n> and under any circumstances, and it feels to me like you guys are\n> saying \"no\" without offering any really convincing evidence that there\n> are unfixable problems here.\n\nI don't think that's quite the calculation. You're suggesting to make\nalready really complicated and failure prone code even more complicated\nby adding heuristic error recovery to it. That has substantial cost,\neven if we were to get it perfectly right (which I don't believe we\nwill).\n\n\n> The big picture here is that people have terabyte-scale tables, 1 or 2\n> tuples get corrupted, and right now the only real fix is to dump and\n> restore the whole table, which leads to prolonged downtime. 
The\n> pg_surgery stuff should help with that, and the work to make VACUUM\n> report the exact TID will also help, and if we can get the heapcheck\n> stuff Mark Dilger is working on committed, that will provide an\n> alternative and probably better way of finding this kind of\n> corruption, which is all to the good.\n\nAgreed.\n\n\n> However, I disagree with the idea that a typical user who has a 2TB\n> with one corrupted tuple on page 0 probably wants VACUUM to fail over\n> and over again, letting the table bloat like crazy, instead of\n> bleating loudly but still vacuuming the other 0.999999% of the\n> table. I mean, somebody probably wants that, and that's fine. But I\n> have a hard time imagining it as a typical view. Am I just lacking in\n> imagination?\n\nI know that that kind of user exists, but yea, I disagree extremely\nstrongly that that's a reasonable thing that the majority of users\nwant. And I don't think that that's something we should encourage. Those\ncases indicate that either postgres has a bug, or their storage / memory\n/ procedures have an issue. Reacting by making it harder to diagnose is\njust a bad idea.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Sep 2020 13:13:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Mon, Sep 14, 2020 at 4:13 PM Andres Freund <andres@anarazel.de> wrote:\n> My understanding of the case we're discussing is that it's corruption\n> (e.g. relfrozenxid being different than table contents) affecting a HOT\n> chain. I.e. by definition all within a single page. We won't have\n> modified part of it independent of B < A, because freezing is\n> all-or-nothing. Just breaking the HOT chain into two or something like\n> that will just make things worse, because indexes won't find tuples, and\n> because reindexing might then get confused e.g. by HOT chains without a\n> valid start, or by having two visible tuples for the same PK.\n\nIf we adopt the proposal made by Dilip, we will not do that. We must\nhave a.xmax = b.xmin, and that value is either less than relfrozenxid\nor it is not. If we skip an entire tuple because one XID is bad, then\nwe could break the HOT chain when a.xmin is bad and the remaining\nvalues are OK. But if we decide separately for xmin and xmax then we\nshould be alright. Alternately, if we're only concerned about HOT\nchains, we could skip the entire page if any tuple on the page shows\nevidence of damage.\n\n> I don't think that's quite the calculation. You're suggesting to make\n> already really complicated and failure prone code even more complicated\n> by adding heuristic error recovery to it. That has substantial cost,\n> even if we were to get it perfectly right (which I don't believe we\n> will).\n\nThat's a legitimate concern, but I think it would make more sense to\nfirst make the design as good as we can and then decide whether it's\nadequate than to decide ab initio that there's no way to make it good\nenough.\n\n> > However, I disagree with the idea that a typical user who has a 2TB\n> > with one corrupted tuple on page 0 probably wants VACUUM to fail over\n> > and over again, letting the table bloat like crazy, instead of\n> > bleating loudly but still vacuuming the other 0.999999% of the\n> > table. I mean, somebody probably wants that, and that's fine. But I\n> > have a hard time imagining it as a typical view. 
Am I just lacking in\n> > imagination?\n>\n> I know that that kind of user exists, but yea, I disagree extremely\n> strongly that that's a reasonable thing that the majority of users\n> want. And I don't think that that's something we should encourage. Those\n> cases indicate that either postgres has a bug, or their storage / memory\n> / procedures have an issue. Reacting by making it harder to diagnose is\n> just a bad idea.\n\nWell, the people I tend to deal with are not going to let me conduct a\nlengthy investigation almost no matter what, and the more severe the\noperational consequences of the problem are, the less likely it is\nthat I'm going to have time to figure anything out. Being able to\ncreate some kind of breathing room is pretty valuable.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 14 Sep 2020 17:00:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "Hi,\n\nOn 2020-09-14 17:00:48 -0400, Robert Haas wrote:\n> On Mon, Sep 14, 2020 at 4:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > My understanding of the case we're discussing is that it's corruption\n> > (e.g. relfrozenxid being different than table contents) affecting a HOT\n> > chain. I.e. by definition all within a single page. We won't have\n> > modified part of it independent of B < A, because freezing is\n> > all-or-nothing. Just breaking the HOT chain into two or something like\n> > that will just make things worse, because indexes won't find tuples, and\n> > because reindexing might then get confused e.g. by HOT chains without a\n> > valid start, or by having two visible tuples for the same PK.\n> \n> If we adopt the proposal made by Dilip, we will not do that. We must\n> have a.xmax = b.xmin, and that value is either less than relfrozenxid\n> or it is not. If we skip an entire tuple because one XID is bad, then\n> we could break the HOT chain when a.xmin is bad and the remaining\n> values are OK. But if we decide separately for xmin and xmax then we\n> should be alright.\n\nI thought I precisely addressed this case:\n\n> What exactly are you going to put into xmin/xmax here? And how would\n> anything you put into the first tuple not break index lookups? There's\n> no such thing as a frozen xmax (so far), so what are you going to put\n> in there? A random different xid? FrozenTransactionId?\n> HEAP_XMAX_INVALID?\n\nWhat am I missing?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Sep 2020 14:05:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Tue, Sep 15, 2020 at 2:35 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-09-14 17:00:48 -0400, Robert Haas wrote:\n> > On Mon, Sep 14, 2020 at 4:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > > My understanding of the case we're discussing is that it's corruption\n> > > (e.g. relfrozenxid being different than table contents) affecting a HOT\n> > > chain. I.e. by definition all within a single page. We won't have\n> > > modified part of it independent of B < A, because freezing is\n> > > all-or-nothing. 
Just breaking the HOT chain into two or something like\n> > > that will just make things worse, because indexes won't find tuples, and\n> > > because reindexing might then get confused e.g. by HOT chains without a\n> > > valid start, or by having two visible tuples for the same PK.\n> >\n> > If we adopt the proposal made by Dilip, we will not do that. We must\n> > have a.xmax = b.xmin, and that value is either less than relfrozenxid\n> > or it is not. If we skip an entire tuple because one XID is bad, then\n> > we could break the HOT chain when a.xmin is bad and the remaining\n> > values are OK. But if we decide separately for xmin and xmax then we\n> > should be alright.\n>\n> I thought I precisely addressed this case:\n>\n> > What exactly are you going to put into xmin/xmax here? And how would\n> > anything you put into the first tuple not break index lookups? There's\n> > no such thing as a frozen xmax (so far), so what are you going to put\n> > in there? A random different xid? FrozenTransactionId?\n> > HEAP_XMAX_INVALID?\n>\n> What am I missing?\n\nWhat problem do you see if we set xmax to the InvalidTransactionId and\nHEAP_XMAX_INVALID flag in the infomask ? I mean now also if the xmax\nis older than the cutoff xid then we do the same thing i.e.\nif (freeze_xmax)\n{\n..\nfrz->xmax = InvalidTransactionId;\n..\nfrz->t_infomask &= ~HEAP_XMAX_BITS;\nfrz->t_infomask |= HEAP_XMAX_INVALID;\nfrz->t_infomask2 &= ~HEAP_HOT_UPDATED;\nfrz->t_infomask2 &= ~HEAP_KEYS_UPDATED;\nchanged = true;\n}\n\nSo if we do that it will not be part of the hot chain anymore. I\nmight be missing something but could not see how it can be more broken\nthan what it is without our change. I agree that in case of corrupted\nxmin it can now mark tuple with HEAP_XMAX_INVALID without freezing the\nxmin but that is anyway a valid status for a tuple.\n\nHowever, if we think it still can cause some issues then I feel that\nwe can skip the whole page as Robert suggested.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Sep 2020 10:54:29 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On 2020-09-15 10:54:29 +0530, Dilip Kumar wrote:\n> What problem do you see if we set xmax to the InvalidTransactionId and\n> HEAP_XMAX_INVALID flag in the infomask ?\n\n1) It'll make a dead tuple appear live. You cannot do this for tuples\n with an xid below the horizon.\n2) it'll break HOT chain following / indexes.\n\n\n", "msg_date": "Mon, 14 Sep 2020 22:44:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Tue, Sep 15, 2020 at 11:14 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2020-09-15 10:54:29 +0530, Dilip Kumar wrote:\n> > What problem do you see if we set xmax to the InvalidTransactionId and\n> > HEAP_XMAX_INVALID flag in the infomask ?\n>\n> 1) It'll make a dead tuple appear live. You cannot do this for tuples\n> with an xid below the horizon.\n\nHow is it possible? Because tuple which has a committed xmax and the\nxmax is older than the oldestXmin, should not come for freezing unless\nit is lock_only xid (because those tuples are already gone). So if\nthe xmax is smaller than the cutoff xid than either it is lock_only or\nit is aborted. 
If the XMAX is lock only then I don't see any issue\nOTOH if it is aborted xid and if it is already smaller than the\ncut-off xid then it is anyway live tuple.\n\n>2) it'll break HOT chain following / indexes.\n\nIf my above theory in point 1 is correct then I don't see this issue as well.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Sep 2020 12:52:25 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "Hi,\n\nOn 2020-09-15 12:52:25 +0530, Dilip Kumar wrote:\n> On Tue, Sep 15, 2020 at 11:14 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2020-09-15 10:54:29 +0530, Dilip Kumar wrote:\n> > > What problem do you see if we set xmax to the InvalidTransactionId and\n> > > HEAP_XMAX_INVALID flag in the infomask ?\n> >\n> > 1) It'll make a dead tuple appear live. You cannot do this for tuples\n> > with an xid below the horizon.\n> \n> How is it possible? Because tuple which has a committed xmax and the\n> xmax is older than the oldestXmin, should not come for freezing unless\n> it is lock_only xid (because those tuples are already gone).\n\nThere've been several cases of this in the past. A fairly easy way is a\ncorrupted relfrozenxid (of which there are many examples).\n\nYou simply cannot just assume that everything is OK and argue that\nthat's why it's ok to fix data corruption in some approximate manner. By\ndefinition everything *is not ok* if you ever come here.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Sep 2020 11:04:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" }, { "msg_contents": "On Tue, Sep 15, 2020 at 2:04 PM Andres Freund <andres@anarazel.de> wrote:\n> > How is it possible? Because tuple which has a committed xmax and the\n> > xmax is older than the oldestXmin, should not come for freezing unless\n> > it is lock_only xid (because those tuples are already gone).\n>\n> There've been several cases of this in the past. A fairly easy way is a\n> corrupted relfrozenxid (of which there are many examples).\n\nHmm, so is the case you're worried about here the case where the\nfreezing threshold is greater than the pruning threshold? i.e. The\nrelfrozenxid has been updated to a value greater than the xmin we\nderive from the procarray?\n\nIf that's not the case, then I don't see what problem there can be\nhere. To reach heap_prepare_freeze_tuple the tuple has to survive\npruning. If xmin < freezing-threshold and freezing-threshold <\npruning-threshold and the tuple survived pruning, then xmin must be a\ncommitted transaction visible to everyone so setting xmin to\nFrozenTransactionId is fine. If xmax < freezing-threshold and\nfreezing-threshold < pruning-threshold and the tuple survived pruning,\nxmax must be visible to everyone and can't be running so it must have\naborted, so setting xmax to InvalidTransactionId is fine.\n\nOn the other hand if, somehow, freezing-threshold > pruning-threshold,\nthen freezing seems categorically unsafe. Doing so would change\nvisibility decisions of transactions that are still running, or that\nwere running at the time when we computed the pruning threshold. But\nthe sanity checks in heap_prepare_freeze_tuple() seem like they would\ncatch many such cases, but I'm not sure if they're all water-tight. 
It\nmight be better to skip calling heap_prepare_freeze_tuple() altogether\nif the freezing threshold does not precede the pruning threshold.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 15 Sep 2020 17:28:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to\n WARNING" } ]
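
A minimal sketch of the plumbing discussed in the thread above, assuming a boolean GUC named vacuum_tolerate_damage and an extra elevel argument added to heap_prepare_freeze_tuple(); both names come from the proposal under discussion, not from committed PostgreSQL code, and the fragment is an excerpt rather than a complete compilation unit.

    /* In lazy_scan_heap(): choose the severity once, from the GUC. */
    int     elevel = vacuum_tolerate_damage ? WARNING : ERROR;

    /* ... later, for each tuple considered for freezing ... */
    if (heap_prepare_freeze_tuple(tuple.t_data,
                                  relfrozenxid, relminmxid,
                                  FreezeLimit, MultiXactCutoff,
                                  &frozen[nfrozen], &tuple_totally_frozen,
                                  elevel))   /* hypothetical extra argument */
        frozen[nfrozen++].offset = offnum;

    /*
     * Inside heap_prepare_freeze_tuple(), a pre-relfrozenxid XID would then
     * be reported with ereport(elevel, ...) instead of a hard-coded ERROR:
     * with WARNING the damaged tuple is left unfrozen, and the caller has to
     * remember not to advance relfrozenxid/relminmxid for the table.
     */

Whether leaving such a tuple unfrozen is actually safe is exactly what the rest of the thread argues about.
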
[ { "msg_contents": "This started out with just fixing\n\n\"One option do deal\" to \" One option to deal\"\n\nBut after reading the rest I'd propose the following patch.\n\nDave Cramer", "msg_date": "Fri, 17 Jul 2020 11:09:58 -0400", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Patch for reorderbuffer.c documentation." }, { "msg_contents": "On Fri, Jul 17, 2020 at 8:10 AM Dave Cramer <davecramer@gmail.com> wrote:\n\n> This started out with just fixing\n>\n> \"One option do deal\" to \" One option to deal\"\n>\n> But after reading the rest I'd propose the following patch.\n>\n\nSuggest replacing \"though\" with \"however\" instead of trying to figure out\nwhat amount of commas is readable (the original seemed better IMO).\n\n\"However, the transaction records are fairly small and\"\n\nThe rest is straight-forward.\n\nDavid J.\n\nOn Fri, Jul 17, 2020 at 8:10 AM Dave Cramer <davecramer@gmail.com> wrote:This started out with just fixing \"One option do deal\" to \" One option to deal\"But after reading the rest I'd propose the following patch.Suggest replacing \"though\" with \"however\" instead of trying to figure out what amount of commas is readable (the original seemed better IMO).\"However, the transaction records are fairly small and\"The rest is straight-forward.David J.", "msg_date": "Fri, 17 Jul 2020 08:16:50 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch for reorderbuffer.c documentation." }, { "msg_contents": "On Fri, 17 Jul 2020 at 11:17, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Fri, Jul 17, 2020 at 8:10 AM Dave Cramer <davecramer@gmail.com> wrote:\n>\n>> This started out with just fixing\n>>\n>> \"One option do deal\" to \" One option to deal\"\n>>\n>> But after reading the rest I'd propose the following patch.\n>>\n>\n> Suggest replacing \"though\" with \"however\" instead of trying to figure out\n> what amount of commas is readable (the original seemed better IMO).\n>\n> \"However, the transaction records are fairly small and\"\n>\n\nWorks for me.\n\nThanks,\nDave\n\nOn Fri, 17 Jul 2020 at 11:17, David G. Johnston <david.g.johnston@gmail.com> wrote:On Fri, Jul 17, 2020 at 8:10 AM Dave Cramer <davecramer@gmail.com> wrote:This started out with just fixing \"One option do deal\" to \" One option to deal\"But after reading the rest I'd propose the following patch.Suggest replacing \"though\" with \"however\" instead of trying to figure out what amount of commas is readable (the original seemed better IMO).\"However, the transaction records are fairly small and\"Works for me.Thanks,Dave", "msg_date": "Fri, 17 Jul 2020 11:28:50 -0400", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Patch for reorderbuffer.c documentation." }, { "msg_contents": "On Fri, Jul 17, 2020 at 8:59 PM Dave Cramer <davecramer@gmail.com> wrote:\n>\n> On Fri, 17 Jul 2020 at 11:17, David G. 
Johnston <david.g.johnston@gmail.com> wrote:\n>>\n>> On Fri, Jul 17, 2020 at 8:10 AM Dave Cramer <davecramer@gmail.com> wrote:\n>>>\n>>> This started out with just fixing\n>>>\n>>> \"One option do deal\" to \" One option to deal\"\n>>>\n>>> But after reading the rest I'd propose the following patch.\n>>\n>>\n>> Suggest replacing \"though\" with \"however\" instead of trying to figure out what amount of commas is readable (the original seemed better IMO).\n>>\n>> \"However, the transaction records are fairly small and\"\n>\n>\n> Works for me.\n>\n\nThanks, pushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 18 Jul 2020 14:45:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch for reorderbuffer.c documentation." } ]
[ { "msg_contents": "Hey,\n\nI installed PostgreSQL source for the first time a few weeks ago. I am now\njust getting to my first pull-and-reinstall. I run make again at the top\nof the repo and I get:\n\ngit @ 7fe3083f4\n\ngcc -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wformat-security\n-fno-strict-aliasing -fwrapv -fexcess-precision=standard\n-Wno-format-truncation -O2 [...] -L../../src/port -L../../src/common\n-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\n -Wl,-E -lpthread -lrt -ldl -lm -o postgres\ncatalog/catalog.o: In function `GetNewRelFileNode':\ncatalog.c:(.text+0x3f3): undefined reference to `ParallelMasterBackendId'\ncatalog/storage.o: In function `RelationCreateStorage':\nstorage.c:(.text+0x283): undefined reference to `ParallelMasterBackendId'\nutils/adt/dbsize.o: In function `pg_relation_filepath':\ndbsize.c:(.text+0x166e): undefined reference to `ParallelMasterBackendId'\ncollect2: error: ld returned 1 exit status\nMakefile:66: recipe for target 'postgres' failed\nmake[2]: *** [postgres] Error 1\nmake[2]: Leaving directory '/home/postgres/postgresql/src/backend'\nMakefile:42: recipe for target 'all-backend-recurse' failed\nmake[1]: *** [all-backend-recurse] Error 2\nmake[1]: Leaving directory '/home/postgres/postgresql/src'\nGNUmakefile:11: recipe for target 'all-src-recurse' failed\nmake: *** [all-src-recurse] Error 2\n\nI then ran ./configure again and got the same result. Ubuntu 18.04.\n\nSimply checking out and re-making 3a990a12635 (plus my two patches) works\njust fine.\n\nPlease advise, fixing stuff in the C parts of the codebase is not a skill\nI've picked up yet - been focused on docs and tests.\n\nThanks!\n\nDavid J.\n\nHey,I installed PostgreSQL source for the first time a few weeks ago.  I am now just getting to my first pull-and-reinstall.  I run make again at the top of the repo and I get:git @ 7fe3083f4gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -O2 [...] -L../../src/port -L../../src/common   -Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags  -Wl,-E -lpthread -lrt -ldl -lm -o postgrescatalog/catalog.o: In function `GetNewRelFileNode':catalog.c:(.text+0x3f3): undefined reference to `ParallelMasterBackendId'catalog/storage.o: In function `RelationCreateStorage':storage.c:(.text+0x283): undefined reference to `ParallelMasterBackendId'utils/adt/dbsize.o: In function `pg_relation_filepath':dbsize.c:(.text+0x166e): undefined reference to `ParallelMasterBackendId'collect2: error: ld returned 1 exit statusMakefile:66: recipe for target 'postgres' failedmake[2]: *** [postgres] Error 1make[2]: Leaving directory '/home/postgres/postgresql/src/backend'Makefile:42: recipe for target 'all-backend-recurse' failedmake[1]: *** [all-backend-recurse] Error 2make[1]: Leaving directory '/home/postgres/postgresql/src'GNUmakefile:11: recipe for target 'all-src-recurse' failedmake: *** [all-src-recurse] Error 2I then ran ./configure again and got the same result.  
Ubuntu 18.04.Simply checking out and re-making 3a990a12635 (plus my two patches) works just fine.Please advise, fixing stuff in the C parts of the codebase is not a skill I've picked up yet - been focused on docs and tests.Thanks!David J.", "msg_date": "Fri, 17 Jul 2020 08:58:54 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Error during make, second install" }, { "msg_contents": "On Fri, Jul 17, 2020 at 8:58 AM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> Hey,\n>\n> I installed PostgreSQL source for the first time a few weeks ago. I am\n> now just getting to my first pull-and-reinstall. I run make again at the\n> top of the repo and I get:\n> [...]\n>\n> I then ran ./configure again and got the same result. Ubuntu 18.04.\n>\n\nSorry for the noise - though maybe some insight is still warranted - but\nrunning make clean first seems to have cleared up my problem.\n\nDavid J.\n\nOn Fri, Jul 17, 2020 at 8:58 AM David G. Johnston <david.g.johnston@gmail.com> wrote:Hey,I installed PostgreSQL source for the first time a few weeks ago.  I am now just getting to my first pull-and-reinstall.  I run make again at the top of the repo and I get:[...]I then ran ./configure again and got the same result.  Ubuntu 18.04.Sorry for the noise - though maybe some insight is still warranted - but running make clean first seems to have cleared up my problem.David J.", "msg_date": "Fri, 17 Jul 2020 09:05:18 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Error during make, second install" }, { "msg_contents": "On 2020-Jul-17, David G. Johnston wrote:\n\n> On Fri, Jul 17, 2020 at 8:58 AM David G. Johnston <\n> david.g.johnston@gmail.com> wrote:\n\n> Sorry for the noise - though maybe some insight is still warranted - but\n> running make clean first seems to have cleared up my problem.\n\nDo you run \"configure --enable-depend\"? If not, then make clean is\nmandatory before pulling changes.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 17 Jul 2020 13:16:50 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Error during make, second install" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> Sorry for the noise - though maybe some insight is still warranted - but\n> running make clean first seems to have cleared up my problem.\n\nYeah. Just doing \"git pull\" and \"make\" will often fail, because by\ndefault there's nothing guaranteeing that all dependent files are remade.\nThere are two safe workflows that I know of:\n\n1. Run \"make distclean\" when pulling an update. It works a bit cleaner\nif you do this before not after \"git pull\". If there was no update\nof the configure script, you can get away with just \"make clean\", but\nyou generally don't know that before pulling ...\n\n2. Always configure with --enable-depend.\n\nI prefer #1, as I find it more reliable. If you use ccache the\nbuild-speed advantage of #2 is pretty minimal.\n\nIn either case, when in doubt, try \"git clean -dfx\" and rebuild\nfrom scratch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Jul 2020 13:28:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error during make, second install" } ]
[ { "msg_contents": "headerscheck and cpluspluscheck are both unhappy about this:\n\n./src/include/replication/worker_internal.h:49:2: error: unknown type name 'slock_t'\n slock_t relmutex;\n ^~~~~~~\n\nNow, worker_internal.h itself hasn't changed in some time.\nI conclude that somebody rearranged one of the header files\nit depends on. Anyone have an idea what the relevant change\nwas? Should we just include spin.h here, or is there a\nbetter fix?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Jul 2020 16:09:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Busted includes somewhere near worker_internal.h" }, { "msg_contents": "Hi,\n\nOn 2020-07-17 16:09:14 -0400, Tom Lane wrote:\n> headerscheck and cpluspluscheck are both unhappy about this:\n> \n> ./src/include/replication/worker_internal.h:49:2: error: unknown type name 'slock_t'\n> slock_t relmutex;\n> ^~~~~~~\n> \n> Now, worker_internal.h itself hasn't changed in some time.\n> I conclude that somebody rearranged one of the header files\n> it depends on. Anyone have an idea what the relevant change\n> was? Should we just include spin.h here, or is there a\n> better fix?\n\nI'm probably to blame for that - I've removed the s_lock.h (it wasn't\nspin.h for some reason) include from lwlock.h:\n\ncommit f219167910ad33dfd8f1b0bba15323d71a91c4e9\nAuthor: Andres Freund <andres@anarazel.de>\nDate: 2020-06-18 19:40:09 -0700\n\n Clean up includes of s_lock.h.\n \n Users of spinlocks should use spin.h, not s_lock.h. And lwlock.h\n hasn't utilized spinlocks for quite a while.\n \n Discussion: https://postgr.es/m/20200618183041.upyrd25eosecyf3x@alap3.anarazel.de\n\nI think including spin.h is the right fix, given that it needs to know\nthe size of s_lock.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 17 Jul 2020 13:24:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Busted includes somewhere near worker_internal.h" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-07-17 16:09:14 -0400, Tom Lane wrote:\n>> headerscheck and cpluspluscheck are both unhappy about this:\n>> ./src/include/replication/worker_internal.h:49:2: error: unknown type name 'slock_t'\n>> \tslock_t relmutex;\n>> \t^~~~~~~\n\n> I think including spin.h is the right fix, given that it needs to know\n> the size of s_lock.\n\nDone that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Jul 2020 14:59:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Busted includes somewhere near worker_internal.h" } ]
[ { "msg_contents": "Hi,\n\nSo, I am not a Windows native, and here I am mentoring a GSoC student\nsetting up CI on multiple environments, including Windows.\n\nIn my own development and testing, by habit I do everything from\nunprivileged accounts, just spinning up an instance in a temp location,\nrunning some tests, and shutting it down. So I rarely run into the\n\"I refuse to run from a privileged account\" check in postgres.\nSo rarely that it tends to slip my mind.\n\nThe popular hosted CI services, by contrast, tend to run one's test\nscripts from a privileged account, so the scripts can just blithely\ninstall packages, frob configurations, and so on, and it all just works,\nExcept for just spinning up a one-off postgres instance; that runs afoul\nof the privilege check.\n\nOne workaround, of course, is to just use the postgres instance officially\nsupplied by the CI service, already started and listening on the standard\nport. Then, in fact, you /need/ to be running with privilege, so you can\ninstall into the standard locations, frob configs, restart it, etc.\n\nAnother is for the testing script to use its admin powers to create a\nnew user without admin powers, and switch to that identity for the rest\nof the show.\n\nIf I understand correctly what I'm seeing in the pg_ctl source, that would\nbe the sole other option on a non-Windows system; 'pg_ctl start' as root on\nnon-Windows will simply refuse, the same way direct invocation of postgres\nwould.\n\nOn the other hand, it seems that pg_ctl start on Windows has another\ntrick up its sleeve, and in about 180 lines of fussing with arcane Windows\nAPIs, it can arrange to run under the current identity but with its\nadministrator-ness removed and privileges capped. Which seems cool.\n\nBut there's a NOTE! in the comment for CreateRestrictedProcess: \"Job object\nwill only work when running as a service, because it's automatically\ndestroyed when pg_ctl exits.\"\n\nI haven't been able to find any documentation of what that really means\nin practical terms, or quite figure it out from the code. Does that mean\n'pg_ctl start' won't really work after all from a privileged account, or\nwill seem to work but something will go wrong after the server is ready\nand pg_ctl exits? Does it mean the tersely-documented 'register' operation\nmust be used, and that's the only way to start from a privileged account?\n\nI don't have an especially easy way to experiment on Windows; I can push\nexperiments to the CI service and wait a bit to see what they do, but\nI figured I'd ask here first.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 17 Jul 2020 20:03:23 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "pg_ctl behavior on Windows" }, { "msg_contents": "On Sat, Jul 18, 2020 at 5:33 AM Chapman Flack <chap@anastigmatix.net> wrote:\n>\n> But there's a NOTE! in the comment for CreateRestrictedProcess: \"Job object\n> will only work when running as a service, because it's automatically\n> destroyed when pg_ctl exits.\"\n>\n> I haven't been able to find any documentation of what that really means\n> in practical terms, or quite figure it out from the code. Does that mean\n> 'pg_ctl start' won't really work after all from a privileged account, or\n> will seem to work but something will go wrong after the server is ready\n> and pg_ctl exits? Does it mean the tersely-documented 'register' operation\n> must be used, and that's the only way to start from a privileged account?\n>\n\nI don't think so. 
I think you can use 'pg_ctl start' to achieve that.\nI think the JOBS stuff is primarily required when we use 'register'\noperation (aka runs server via service). For example, if you see one\nof the Job options \"JOB_OBJECT_LIMIT_DIE_ON_UNHANDLED_EXCEPTION\", it\nsuppresses dialog box for a certain type of errors and causes a\ntermination of the process with the exception code as the exit status\n(See [1]) which I think is essential for a service.\n\n> I don't have an especially easy way to experiment on Windows; I can push\n> experiments to the CI service and wait a bit to see what they do, but\n> I figured I'd ask here first.\n>\n\nI have tried and 'pg_ctl stuff seems to be working for a privileged\naccount. See below:\npostgres.exe -D ..\\..\\Data\nExecution of PostgreSQL by a user with administrative permissions is\nnot permitted.\nThe server must be started under an unprivileged user ID to prevent\npossible system security compromises. See the documentation for\nmore information on how to properly start the server.\n\npg_ctl.exe start -D ..\\..\\Data\nwaiting for server to start....2020-07-18 14:53:46.120 IST [8468] LOG:\n starting PostgreSQL 14devel, compiled by Visual C++ build 1915,\n64-bit\n2020-07-18 14:53:46.136 IST [8468] LOG: listening on IPv6 address\n\"::1\", port 5432\n2020-07-18 14:53:46.136 IST [8468] LOG: listening on IPv4 address\n\"127.0.0.1\", port 5432\n2020-07-18 14:53:46.214 IST [7512] LOG: database system was shut down\nat 2020-07-18 14:53:22 IST\n2020-07-18 14:53:46.245 IST [8468] LOG: database system is ready to\naccept connections\n done\nserver started\n\nI have run above two commands from an account with administrative\nprivilege and 'pg_ctl start' is working. I have further tried a few\noperations after connecting with the client and everything is working\nfine.\n\n[1] - https://docs.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-jobobject_basic_limit_information\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 18 Jul 2020 15:16:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_ctl behavior on Windows" }, { "msg_contents": "On 07/18/20 05:46, Amit Kapila wrote:\n> I don't think so. I think you can use 'pg_ctl start' to achieve that.\n> I think the JOBS stuff is primarily required when we use 'register'\n> operation (aka runs server via service). For example, if you see one\n> of the Job options \"JOB_OBJECT_LIMIT_DIE_ON_UNHANDLED_EXCEPTION\", it\n> suppresses dialog box for a certain type of errors and causes a\n> termination of the process with the exception code as the exit status\n\nThanks very much, that helps a lot. I still wonder, though, about some\nof the other limits also placed on that job object, such as\nJOB_OBJECT_SECURITY_NO_ADMIN | JOB_OBJECT_SECURITY_ONLY_TOKEN\n\nThose seem closely related to the purpose of CreateRestrictedProcess.\nDoes the NOTE! mean that, when not running as a service, the job object\ndisappears as soon as pg_ctl exits, and does the job object's disappearance\nsimply mean those limits are no longer enforced for the remaining life\nof the process? Or do they remain in effect for the process even after\nthe job object is reclaimed (and if so, what does the NOTE! 
really mean)?\n\nI could add that for my current purpose, running a few tests on a CI\nvirtual host where everything is admin anyway and it all gets wiped\nafter the test, I don't really care whether those restrictions are\nenforced, and in fact the whole \"I refuse to start as admin\" check seems\na punctilious headache. But for other uses, what that NOTE! means about\nthose restrictions might matter more, or even be worth mentioning\nin the docs.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sat, 18 Jul 2020 09:00:57 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: pg_ctl behavior on Windows" }, { "msg_contents": "\nOn 7/18/20 5:46 AM, Amit Kapila wrote:\n>\n>> I don't have an especially easy way to experiment on Windows; I can push\n>> experiments to the CI service and wait a bit to see what they do, but\n>> I figured I'd ask here first.\n>>\n> I have tried and 'pg_ctl stuff seems to be working for a privileged\n> account. See below:\n> postgres.exe -D ..\\..\\Data\n> Execution of PostgreSQL by a user with administrative permissions is\n> not permitted.\n> The server must be started under an unprivileged user ID to prevent\n> possible system security compromises. See the documentation for\n> more information on how to properly start the server.\n>\n> pg_ctl.exe start -D ..\\..\\Data\n> waiting for server to start....2020-07-18 14:53:46.120 IST [8468] LOG:\n> starting PostgreSQL 14devel, compiled by Visual C++ build 1915,\n> 64-bit\n> 2020-07-18 14:53:46.136 IST [8468] LOG: listening on IPv6 address\n> \"::1\", port 5432\n> 2020-07-18 14:53:46.136 IST [8468] LOG: listening on IPv4 address\n> \"127.0.0.1\", port 5432\n> 2020-07-18 14:53:46.214 IST [7512] LOG: database system was shut down\n> at 2020-07-18 14:53:22 IST\n> 2020-07-18 14:53:46.245 IST [8468] LOG: database system is ready to\n> accept connections\n> done\n> server started\n>\n> I have run above two commands from an account with administrative\n> privilege and 'pg_ctl start' is working. I have further tried a few\n> operations after connecting with the client and everything is working\n> fine.\n>\n\n\nThis has been true for a long time, and since commit ce5d3424d6 we've\nbeen able to run all the regression and TAP tests safely under an\nadministrative account.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 18 Jul 2020 09:18:11 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_ctl behavior on Windows" }, { "msg_contents": "On Sat, Jul 18, 2020 at 6:31 PM Chapman Flack <chap@anastigmatix.net> wrote:\n>\n> On 07/18/20 05:46, Amit Kapila wrote:\n> > I don't think so. I think you can use 'pg_ctl start' to achieve that.\n> > I think the JOBS stuff is primarily required when we use 'register'\n> > operation (aka runs server via service). For example, if you see one\n> > of the Job options \"JOB_OBJECT_LIMIT_DIE_ON_UNHANDLED_EXCEPTION\", it\n> > suppresses dialog box for a certain type of errors and causes a\n> > termination of the process with the exception code as the exit status\n>\n> Thanks very much, that helps a lot. I still wonder, though, about some\n> of the other limits also placed on that job object, such as\n> JOB_OBJECT_SECURITY_NO_ADMIN | JOB_OBJECT_SECURITY_ONLY_TOKEN\n>\n> Those seem closely related to the purpose of CreateRestrictedProcess.\n> Does the NOTE! 
mean that, when not running as a service, the job object\n> disappears as soon as pg_ctl exits,\n>\n\n From the comments in that part of code, it seems like Job object will\nbe closed as soon as pg_ctl exits. However, as per my understanding\nof specs [1], it will be closed once the process with which it is\nassociated is gone which in this case should be the new process\ncreated with \"CreateProcessAsUser\". This has been added by the below\ncommit, so Magnus might remember something about this.\n\ncommit a25cd81007e827684343a53a80e8bc90f585ca8e\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Fri Feb 10 22:00:59 2006 +0000\n\n Enable pg_ctl to give up admin privileges when starting the server under\n Windows (if newer than NT4, else works same as before).\n\n Magnus\n\n\n[1] - https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-createjobobjecta\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Jul 2020 10:12:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_ctl behavior on Windows" } ]
[ { "msg_contents": "Hi,\n\nOne of the comments needs correction \"sorting all tuples in the the\ndataset\" should have been \"sorting all tuples in the dataset\".\nThe Attached patch has the changes for the same.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 18 Jul 2020 16:48:43 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Patch for nodeIncrementalSort comment correction." }, { "msg_contents": "On Saturday, July 18, 2020, vignesh C <vignesh21@gmail.com> wrote:\n\n> Hi,\n>\n> One of the comments needs correction \"sorting all tuples in the the\n> dataset\" should have been \"sorting all tuples in the dataset\".\n> The Attached patch has the changes for the same.\n>\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\nThanks for fixing this. Looks correct to me.\n\nJames\n\nOn Saturday, July 18, 2020, vignesh C <vignesh21@gmail.com> wrote:Hi,\n\nOne of the comments needs correction \"sorting all tuples in the the\ndataset\" should have been \"sorting all tuples in the dataset\".\nThe Attached patch has the changes for the same.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\nThanks for fixing this. Looks correct to me. James", "msg_date": "Sun, 19 Jul 2020 09:55:30 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch for nodeIncrementalSort comment correction." }, { "msg_contents": "On Sun, Jul 19, 2020 at 7:25 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Saturday, July 18, 2020, vignesh C <vignesh21@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> One of the comments needs correction \"sorting all tuples in the the\n>> dataset\" should have been \"sorting all tuples in the dataset\".\n>> The Attached patch has the changes for the same.\n>>\n>\n>\n> Thanks for fixing this. Looks correct to me.\n>\n\nYeah, looks like a typo will push in some time.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Jul 2020 07:16:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch for nodeIncrementalSort comment correction." } ]
[ { "msg_contents": "Hi Tom,\n\nCan you take a look?\n\nPer Coverity.\n\nThere is something wrong with the definition of QUEUE_PAGESIZE on async.c\n\n1. #define QUEUE_PAGESIZE BLCKSZ\n2. BLCKSZ is 8192\n3..sizeof(AsyncQueueControl) is 8080, according to Coverity (Windows 64\nbits)\n4. (Line 1508) qe.length = QUEUE_PAGESIZE - offset;\n5. offset is zero\n6. qe.length is 8192\n\n/* Now copy qe into the shared buffer page */\nmemcpy(NotifyCtl->shared->page_buffer[slotno] + offset,\n &qe,\n qe.length);\n\nCID 1428952 (#1 of 1): Out-of-bounds access (OVERRUN) at line 1515, with\nmemcpy call.\n9. overrun-buffer-arg: Overrunning struct type AsyncQueueEntry of 8080\nbytes by passing it to a function which accesses it at byte offset 8191\nusing argument qe.length (which evaluates to 8192).\n\nQuestion:\n1. NotifyCtl->shared->page_buffer[slotno] is really struct type\nAsyncQueueEntry?\n\nregards,\nRanier Vilela\n\nHi Tom,Can you take a look?Per Coverity.There is something wrong with the definition of QUEUE_PAGESIZE on async.c1. \n#define QUEUE_PAGESIZE BLCKSZ\n\n2. BLCKSZ is  81923..sizeof(AsyncQueueControl) is 8080, according to Coverity (Windows 64 bits)4. (Line 1508) \t\t    qe.length = QUEUE_PAGESIZE - offset;5. offset is zero6. qe.length is 8192\t\t/* Now copy qe into the shared buffer page */\t\tmemcpy(NotifyCtl->shared->page_buffer[slotno] + offset,\t\t\t   &qe,\t\t\t   qe.length);\nCID 1428952 (#1 of 1): Out-of-bounds access (OVERRUN)  at line 1515, with memcpy call.\n9. overrun-buffer-arg: Overrunning\n struct type AsyncQueueEntry of 8080 bytes by passing it to a function \nwhich accesses it at byte offset 8191 using argument qe.length (which evaluates to 8192).\nQuestion:1. \nNotifyCtl->shared->page_buffer[slotno] is really \n struct type AsyncQueueEntry?regards,Ranier Vilela", "msg_date": "Sat, 18 Jul 2020 11:34:54 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "CID 1428952 (#1 of 1): Out-of-bounds access (OVERRUN)\n (src/backend/commands/async.c)" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Can you take a look?\n> Per Coverity.\n> There is something wrong with the definition of QUEUE_PAGESIZE on async.c\n\nNo, there's just something wrong with Coverity's analysis.\nI've grown a bit disillusioned with that tool; of late it's\nbeen giving many more false positives than useful reports.\n\n> 3..sizeof(AsyncQueueControl) is 8080, according to Coverity (Windows 64\n> bits)\n\nITYM AsyncQueueEntry?\n\n> 4. (Line 1508) qe.length = QUEUE_PAGESIZE - offset;\n> 5. offset is zero\n> 6. qe.length is 8192\n> /* Now copy qe into the shared buffer page */\n> memcpy(NotifyCtl->shared->page_buffer[slotno] + offset,\n> &qe,\n> qe.length);\n> CID 1428952 (#1 of 1): Out-of-bounds access (OVERRUN) at line 1515, with\n> memcpy call.\n> 9. overrun-buffer-arg: Overrunning struct type AsyncQueueEntry of 8080\n> bytes by passing it to a function which accesses it at byte offset 8191\n> using argument qe.length (which evaluates to 8192).\n\nI suppose what Coverity is on about is the possibility that we might\nincrease qe.length to more than sizeof(AsyncQueueEntry). However,\ngiven the logic:\n\n if (offset + qe.length <= QUEUE_PAGESIZE)\n ...\n else\n qe.length = QUEUE_PAGESIZE - offset;\n\nthat assignment must be *reducing* qe.length, so there can be no overrun\nunless asyncQueueNotificationToEntry() had prepared an oversize value to\nbegin with. 
Which is impossible given the assertions in that function,\nbut maybe Coverity can't work that out? (But then why isn't it\ncomplaining about asyncQueueNotificationToEntry itself?)\n\nI'd be willing to add a relevant assertion to\nasyncQueueNotificationToEntry, along the lines of\n\n\t/* The terminators are already included in AsyncQueueEntryEmptySize */\n\tentryLength = AsyncQueueEntryEmptySize + payloadlen + channellen;\n\tentryLength = QUEUEALIGN(entryLength);\n+\tAssert(entryLength <= sizeof(AsyncQueueEntry));\n\tqe->length = entryLength;\n\tqe->dboid = MyDatabaseId;\n\tqe->xid = GetCurrentTransactionId();\n\nif it'd shut up Coverity on this point; but I have no easy way\nto find that out.\n\n> Question:\n> 1. NotifyCtl->shared->page_buffer[slotno] is really struct type\n> AsyncQueueEntry?\n\nNo, it's a page. But it contains AsyncQueueEntry(s).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Jul 2020 13:21:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CID 1428952 (#1 of 1): Out-of-bounds access (OVERRUN)\n (src/backend/commands/async.c)" }, { "msg_contents": "Em sáb., 18 de jul. de 2020 às 14:21, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > Can you take a look?\n> > Per Coverity.\n> > There is something wrong with the definition of QUEUE_PAGESIZE on async.c\n>\n> No, there's just something wrong with Coverity's analysis.\n> I've grown a bit disillusioned with that tool; of late it's\n> been giving many more false positives than useful reports.\n>\nFor other projects, it has helped me, but for Postgres it has really been a\nchallenge.\n\n>\n> > 3..sizeof(AsyncQueueControl) is 8080, according to Coverity (Windows 64\n> > bits)\n>\n> ITYM AsyncQueueEntry?\n>\nYes, my bad. Its AsyncQueueEntry size on Windows 64 bits.\n\n>\n> > 4. (Line 1508) qe.length = QUEUE_PAGESIZE - offset;\n> > 5. offset is zero\n> > 6. qe.length is 8192\n> > /* Now copy qe into the shared buffer page */\n> > memcpy(NotifyCtl->shared->page_buffer[slotno] + offset,\n> > &qe,\n> > qe.length);\n> > CID 1428952 (#1 of 1): Out-of-bounds access (OVERRUN) at line 1515, with\n> > memcpy call.\n> > 9. overrun-buffer-arg: Overrunning struct type AsyncQueueEntry of 8080\n> > bytes by passing it to a function which accesses it at byte offset 8191\n> > using argument qe.length (which evaluates to 8192).\n>\n> I suppose what Coverity is on about is the possibility that we might\n> increase qe.length to more than sizeof(AsyncQueueEntry). However,\n> given the logic:\n>\n> if (offset + qe.length <= QUEUE_PAGESIZE)\n> ...\n> else\n> qe.length = QUEUE_PAGESIZE - offset;\n>\nHere, the offset is zero. Maybe qe.length > QUEUE_PAGESIZE?\n\"7. Condition offset + qe.length <= 8192, taking false branch.\"\n\n\n>\n> that assignment must be *reducing* qe.length, so there can be no overrun\n> unless asyncQueueNotificationToEntry() had prepared an oversize value to\n> begin with. 
Which is impossible given the assertions in that function,\n> but maybe Coverity can't work that out?\n\nCoverity analysed the DEBUG version, what includes assertions.\n\n\n> (But then why isn't it\n> complaining about asyncQueueNotificationToEntry itself?)\n>\n I still couldn't say.\n\n>\n> I'd be willing to add a relevant assertion to\n> asyncQueueNotificationToEntry, along the lines of\n>\n> /* The terminators are already included in\n> AsyncQueueEntryEmptySize */\n> entryLength = AsyncQueueEntryEmptySize + payloadlen + channellen;\n> entryLength = QUEUEALIGN(entryLength);\n> + Assert(entryLength <= sizeof(AsyncQueueEntry));\n> qe->length = entryLength;\n> qe->dboid = MyDatabaseId;\n> qe->xid = GetCurrentTransactionId();\n>\n> if it'd shut up Coverity on this point; but I have no easy way\n> to find that out.\n>\nI'm not sure that assertion interferes with the analysis.\n\n\n>\n> > Question:\n> > 1. NotifyCtl->shared->page_buffer[slotno] is really struct type\n> > AsyncQueueEntry?\n>\n> No, it's a page. But it contains AsyncQueueEntry(s).\n>\nI understand.\n\nIt could be, differences in the sizes of the types. Since on Linux, there\nmay be no alerts.\nBut as it was compiled on Windows, the AsyncQueueEntry structure, have a\nsmaller size than in Linux, being smaller than BLCKSZ?\n\nregards,\nRanier Vilela\n\nEm sáb., 18 de jul. de 2020 às 14:21, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> Can you take a look?\n> Per Coverity.\n> There is something wrong with the definition of QUEUE_PAGESIZE on async.c\n\nNo, there's just something wrong with Coverity's analysis.\nI've grown a bit disillusioned with that tool; of late it's\nbeen giving many more false positives than useful reports.For other projects, it has helped me, but for Postgres it has really been a challenge.\n\n> 3..sizeof(AsyncQueueControl) is 8080, according to Coverity (Windows 64\n> bits)\n\nITYM AsyncQueueEntry?Yes, my bad. Its \nAsyncQueueEntry size on Windows 64 bits.\n\n> 4. (Line 1508)    qe.length = QUEUE_PAGESIZE - offset;\n> 5. offset is zero\n> 6. qe.length is 8192\n> /* Now copy qe into the shared buffer page */\n> memcpy(NotifyCtl->shared->page_buffer[slotno] + offset,\n>   &qe,\n>   qe.length);\n> CID 1428952 (#1 of 1): Out-of-bounds access (OVERRUN)  at line 1515, with\n> memcpy call.\n> 9. overrun-buffer-arg: Overrunning struct type AsyncQueueEntry of 8080\n> bytes by passing it to a function which accesses it at byte offset 8191\n> using argument qe.length (which evaluates to 8192).\n\nI suppose what Coverity is on about is the possibility that we might\nincrease qe.length to more than sizeof(AsyncQueueEntry).  However,\ngiven the logic:\n\n        if (offset + qe.length <= QUEUE_PAGESIZE)\n            ...\n        else\n            qe.length = QUEUE_PAGESIZE - offset;Here, the offset is zero. Maybe qe.length > \nQUEUE_PAGESIZE?\n\"7. Condition offset + qe.length <= 8192, taking false branch.\"\n \n\nthat assignment must be *reducing* qe.length, so there can be no overrun\nunless asyncQueueNotificationToEntry() had prepared an oversize value to\nbegin with.  Which is impossible given the assertions in that function,\nbut maybe Coverity can't work that out? Coverity analysed the DEBUG version, what includes assertions.   (But then why isn't it\ncomplaining about asyncQueueNotificationToEntry itself?) 
I still couldn't say.\n\nI'd be willing to add a relevant assertion to\nasyncQueueNotificationToEntry, along the lines of\n\n        /* The terminators are already included in AsyncQueueEntryEmptySize */\n        entryLength = AsyncQueueEntryEmptySize + payloadlen + channellen;\n        entryLength = QUEUEALIGN(entryLength);\n+       Assert(entryLength <= sizeof(AsyncQueueEntry));\n        qe->length = entryLength;\n        qe->dboid = MyDatabaseId;\n        qe->xid = GetCurrentTransactionId();\n\nif it'd shut up Coverity on this point; but I have no easy way\nto find that out.I'm not sure that assertion interferes with the analysis. \n\n> Question:\n> 1. NotifyCtl->shared->page_buffer[slotno] is really struct type\n> AsyncQueueEntry?\n\nNo, it's a page.  But it contains AsyncQueueEntry(s).I understand.It could be, differences in the sizes of the types. Since on Linux, there may be no alerts.But as it was compiled on Windows, the AsyncQueueEntry structure, have a smaller size than in Linux, being smaller than BLCKSZ? regards,Ranier Vilela", "msg_date": "Sat, 18 Jul 2020 14:53:38 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CID 1428952 (#1 of 1): Out-of-bounds access (OVERRUN)\n (src/backend/commands/async.c)" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Em sáb., 18 de jul. de 2020 às 14:21, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>> No, there's just something wrong with Coverity's analysis.\n>> I've grown a bit disillusioned with that tool; of late it's\n>> been giving many more false positives than useful reports.\n\n> It could be, differences in the sizes of the types. Since on Linux, there\n> may be no alerts.\n\nNo, all the types involved here should be pretty platform-independent.\nIIRC, the PG security team already saw this same warning from Coverity,\nand we dismissed it as a false positive.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Jul 2020 14:19:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CID 1428952 (#1 of 1): Out-of-bounds access (OVERRUN)\n (src/backend/commands/async.c)" }, { "msg_contents": "Em sáb., 18 de jul. de 2020 às 15:19, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > Em sáb., 18 de jul. de 2020 às 14:21, Tom Lane <tgl@sss.pgh.pa.us>\n> escreveu:\n> >> No, there's just something wrong with Coverity's analysis.\n> >> I've grown a bit disillusioned with that tool; of late it's\n> >> been giving many more false positives than useful reports.\n>\n> > It could be, differences in the sizes of the types. Since on Linux, there\n> > may be no alerts.\n>\n> No, all the types involved here should be pretty platform-independent.\n> IIRC, the PG security team already saw this same warning from Coverity,\n> and we dismissed it as a false positive.\n>\nUnderstood, again, thanks for your time.\n\nregards,\nRanier Vilela\n\nEm sáb., 18 de jul. de 2020 às 15:19, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> Em sáb., 18 de jul. de 2020 às 14:21, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>> No, there's just something wrong with Coverity's analysis.\n>> I've grown a bit disillusioned with that tool; of late it's\n>> been giving many more false positives than useful reports.\n\n> It could be, differences in the sizes of the types. 
Since on Linux, there\n> may be no alerts.\n\nNo, all the types involved here should be pretty platform-independent.\nIIRC, the PG security team already saw this same warning from Coverity,\nand we dismissed it as a false positive.Understood, again, thanks for your time. regards,Ranier Vilela", "msg_date": "Sat, 18 Jul 2020 16:27:36 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CID 1428952 (#1 of 1): Out-of-bounds access (OVERRUN)\n (src/backend/commands/async.c)" } ]
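The heart of the disagreement in the thread above is whether the clamp "qe.length = QUEUE_PAGESIZE - offset" can ever grow qe.length. A minimal standalone sketch follows (this is not the PostgreSQL async.c source; the 8080-byte entry size and 8192-byte page size are taken from the Coverity report quoted above) showing why the else branch can only shrink the length, so neither the source struct nor the destination page can be overrun:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define QUEUE_PAGESIZE 8192

/* stand-in for AsyncQueueEntry: 4 + 8076 = 8080 bytes, as in the report */
typedef struct
{
    int32_t     length;
    char        data[8076];
} FakeQueueEntry;

static void
add_entry(char *page, int offset, FakeQueueEntry qe)
{
    /* the real code guarantees these preconditions when building the entry */
    assert(qe.length > 0 && qe.length <= (int) sizeof(FakeQueueEntry));
    assert(offset >= 0 && offset < QUEUE_PAGESIZE);

    if (offset + qe.length <= QUEUE_PAGESIZE)
    {
        /* fits as-is; nothing to adjust */
    }
    else
    {
        /*
         * Reached only when qe.length > QUEUE_PAGESIZE - offset, so this
         * assignment strictly reduces qe.length; it can never exceed the
         * 8080-byte struct, contrary to what Coverity assumes.
         */
        qe.length = QUEUE_PAGESIZE - offset;
        assert(qe.length < (int) sizeof(FakeQueueEntry));
    }

    /* the source read stays within qe, the destination write within the page */
    memcpy(page + offset, &qe, qe.length);
}

int
main(void)
{
    static char page[QUEUE_PAGESIZE];
    FakeQueueEntry qe;

    memset(&qe, 0, sizeof(qe));
    qe.length = sizeof(qe);

    add_entry(page, 0, qe);     /* offset 0: the clamp branch is unreachable */
    add_entry(page, 200, qe);   /* near-full page: clamp shrinks length to 7992 */
    puts("no overrun");
    return 0;
}

Under those preconditions the clamp value QUEUE_PAGESIZE - offset is always smaller than the incoming length, which is exactly the argument made in the thread.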
[ { "msg_contents": "In all branches back to v10, initdb marks pg_subscription.subslotname\nas NOT NULL:\n\n# \\d pg_subscription\n Table \"pg_catalog.pg_subscription\"\n Column | Type | Collation | Nullable | Default \n-----------------+---------+-----------+----------+---------\n oid | oid | | not null | \n subdbid | oid | | not null | \n subname | name | | not null | \n subowner | oid | | not null | \n subenabled | boolean | | not null | \n subbinary | boolean | | not null | \n subconninfo | text | C | not null | \n subslotname | name | | not null | \n subsynccommit | text | C | not null | \n subpublications | text[] | C | not null | \n\nNonetheless, CREATE/ALTER SUBSCRIPTION blithely set it to null\nwhen slot_name = NONE is specified.\n\nThis apparently causes few ill effects, unless somebody decides\nto JIT-compile deconstruction of pg_subscription tuples. Which\nis why all of Andres' JIT-enabled buildfarm animals are unhappy\nwith 9de77b545 --- quite unintentionally, that commit added a\ntest case that exposed the problem.\n\nWhat would we like to do about this? Removing the NOT NULL\nmarking wouldn't be too hard in HEAD, but telling users to\nfix it manually in the back branches seems like a mess.\n\nOn the whole it seems like changing the code to use some other\nrepresentation of slot_name = NONE, like say an empty string,\nwould be less of a mess.\n\nIt's also a bit annoying that we have no mechanized checks that\nwould catch this inconsistency. If JIT is going to be absolutely\ndependent on NOT NULL markings being accurate, we can't really\nhave such a laissez-faire attitude to C code getting it wrong.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Jul 2020 14:15:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pg_subscription.subslotname is wrongly marked NOT NULL" }, { "msg_contents": "I wrote:\n> In all branches back to v10, initdb marks pg_subscription.subslotname\n> as NOT NULL: ...\n> Nonetheless, CREATE/ALTER SUBSCRIPTION blithely set it to null\n> when slot_name = NONE is specified.\n\n> What would we like to do about this? Removing the NOT NULL\n> marking wouldn't be too hard in HEAD, but telling users to\n> fix it manually in the back branches seems like a mess.\n\nAfter further thought, it seems like changing the definition that\nsubslotname is null for \"NONE\" is unworkable, because client-side\ncode might be depending on that. (pg_dump certainly is; we could\nchange that, but other code might have the same expectation.)\n\nWhat I propose we do is\n\n(1) Fix the NOT NULL marking in HEAD and v13. We could perhaps\nalter it in older branches as well, but we cannot force initdb\nso such a change would only affect newly-initdb'd installations.\n\n(2) In pre-v13 branches, hack the JIT tuple deconstruction code\nto be specifically aware that it can't trust attnotnull for\npg_subscription.subslotname. Yeah, it's ugly, but at least it's\nnot ugly going forwards.\n\nI haven't looked to see where or how we might do (2), but I assume\nit's possible.\n\n> It's also a bit annoying that we have no mechanized checks that\n> would catch this inconsistency. If JIT is going to be absolutely\n> dependent on NOT NULL markings being accurate, we can't really\n> have such a laissez-faire attitude to C code getting it wrong.\n\nIt seems like at least in assert-enabled builds, we'd better have\na cross-check for that. 
I'm not sure where's the best place.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Jul 2020 22:09:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_subscription.subslotname is wrongly marked NOT NULL" }, { "msg_contents": "I wrote:\n> (2) In pre-v13 branches, hack the JIT tuple deconstruction code\n> to be specifically aware that it can't trust attnotnull for\n> pg_subscription.subslotname. Yeah, it's ugly, but at least it's\n> not ugly going forwards.\n\nConcretely, as attached for v12.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 19 Jul 2020 16:42:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_subscription.subslotname is wrongly marked NOT NULL" }, { "msg_contents": "I wrote:\n>> It's also a bit annoying that we have no mechanized checks that\n>> would catch this inconsistency. If JIT is going to be absolutely\n>> dependent on NOT NULL markings being accurate, we can't really\n>> have such a laissez-faire attitude to C code getting it wrong.\n\n> It seems like at least in assert-enabled builds, we'd better have\n> a cross-check for that. I'm not sure where's the best place.\n\nI concluded that we should put this into CatalogTupleInsert and\nCatalogTupleUpdate. The bootstrap data path already has a check\n(see InsertOneNull()), and so does the executor, so we only need\nto worry about tuples that're built manually by catalog manipulation\ncode. I think all of that goes through these functions. Hence,\nas attached.\n\n... and apparently, I should have done this task first, because\ndamn if it didn't immediately expose another bug of the same ilk.\npg_subscription_rel.srsublsn also needs to be marked nullable.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 19 Jul 2020 18:04:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_subscription.subslotname is wrongly marked NOT NULL" }, { "msg_contents": "I wrote:\n> pg_subscription_rel.srsublsn also needs to be marked nullable.\n\nNot only is it wrongly marked attnotnull, but two of the three places\nthat read it are doing so unsafely (ie, as though it *were*\nnon-nullable). So I think we'd better fix it as attached.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 19 Jul 2020 19:48:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_subscription.subslotname is wrongly marked NOT NULL" }, { "msg_contents": "Mopping this up ... the attached patch against v12 shows the portions\nof 72eab84a5 and 0fa0b487b that I'm thinking of putting into v10-v12.\n\nThe doc changes, which just clarify that subslotname and srsublsn can\nbe null, should be uncontroversial. The changes in pg_subscription.c\nprevent it from accessing data that might not be there. 99.999% of\nthe time, that doesn't matter; we'd copy garbage into\nSubscriptionRelState.lsn, but the callers shouldn't look at that field\nin states where it's not valid. However, it's possible that the code\ncould access data off the end of the heap page, and at least in theory\nthat could lead to a SIGSEGV.\n\nWhat I'm not quite sure about is whether to add the BKI_FORCE_NULL\nannotations to the headers or not. There are some pros and cons:\n\n* While Catalog.pm has had support for BKI_FORCE_NULL for quite some\ntime, we never used it in anger before yesterday. 
It's easy to\ncheck that it works, but I wonder whether anybody has third-party\nanalysis tools that look at the catalog headers and would get broken\nbecause they didn't cover this.\n\n* If we change these markings, then our own testing in the buildfarm\netc. will not reflect the state of affairs seen in many/most actual\nv10-v12 installations. The scope of code where it'd matter seems\npretty tiny, so I don't think there's a big risk, but there's more\nthan zero risk. (In any case, I would not push this part until all\nthe buildfarm JIT critters have reported happiness with 798b4faef,\nas that's the one specific spot where it surely does matter.)\n\n* On the other side of the ledger, if we don't fix these markings\nwe cannot back-patch the additional assertions I proposed at [1].\n\nI'm kind of leaning to committing this as shown and back-patching\nthe patch at [1], but certainly a case could be made in the other\ndirection. Thoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/298837.1595196283%40sss.pgh.pa.us", "msg_date": "Mon, 20 Jul 2020 16:53:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_subscription.subslotname is wrongly marked NOT NULL" }, { "msg_contents": "I wrote:\n> * On the other side of the ledger, if we don't fix these markings\n> we cannot back-patch the additional assertions I proposed at [1].\n\n> I'm kind of leaning to committing this as shown and back-patching\n> the patch at [1], but certainly a case could be made in the other\n> direction. Thoughts?\n\nAfter further thought about that I realized that the assertion patch\ncould be kluged in the same way as we did in llvmjit_deform.c, and\nthat that would really be the only safe way to do it pre-v13.\nOtherwise the assertions would trip in pre-existing databases,\nwhich would not be nice.\n\nSo what I've done is to back-patch the assertions that way, and\n*not* apply BKI_FORCE_NULL in the back branches. The possible\ndownsides of doing that seem to outweigh the upside of making\nthe catalog state cleaner in new installations.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Jul 2020 12:42:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_subscription.subslotname is wrongly marked NOT NULL" } ]
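A rough sketch of the kind of mechanized cross-check proposed above, to be invoked where manually built catalog tuples funnel through (CatalogTupleInsert() and CatalogTupleUpdate()) before the tuple is written. The helper name is invented for illustration and this is not the committed patch; it only relies on the existing attnotnull flag and heap_attisnull():

#include "postgres.h"

#include "access/htup_details.h"
#include "access/tupdesc.h"
#include "utils/rel.h"

/* assert-only check: no column declared NOT NULL may actually be null */
static void
AssertCatalogTupleNotNulls(Relation heapRel, HeapTuple tup)
{
#ifdef USE_ASSERT_CHECKING
    TupleDesc   tupdesc = RelationGetDescr(heapRel);
    int         attnum;

    for (attnum = 1; attnum <= tupdesc->natts; attnum++)
    {
        Form_pg_attribute attr = TupleDescAttr(tupdesc, attnum - 1);

        if (attr->attnotnull)
            Assert(!heap_attisnull(tup, attnum, tupdesc));
    }
#endif
}

Code built manually via heap_form_tuple() that disagrees with the catalog's attnotnull markings would then trip the assertion in assert-enabled builds, which is how the srsublsn inconsistency mentioned above was caught.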
[ { "msg_contents": "Part of the blame for the pg_subscription.subslotname fiasco can be laid\nat the feet of initdb's default rule for marking columns NOT NULL; that\nrule is fairly arbitrary and does not guarantee to make safe choices.\nI propose that we change it so that it *is* safe, ie it will only mark\nfields NOT NULL if they'd certainly be safe to access as C struct fields.\n\nKeeping the end results the same requires a few more manual applications\nof BKI_FORCE_NOT_NULL than we had before. But I think that that's fine,\nbecause it reduces the amount of poorly-documented magic in this area.\nI note in particular that bki.sgml was entirely failing to tell the full\ntruth.\n\n(Note: this would allow reverting the manual BKI_FORCE_NULL label that\nI just added to pg_subscription.subslotname, but I feel no great desire\nto do that.)\n\nI propose this only for HEAD, not the back branches.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 19 Jul 2020 14:03:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Fix initdb's unsafe not-null-marking rule" } ]
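For readers unfamiliar with the annotations being discussed: under the proposed rule, a column that cannot safely be read as a C struct field but must still always be non-null needs an explicit marking in its catalog header, while a nullable column can carry the opposite marking. A hypothetical header fragment is sketched below (pg_foo and its columns are invented for illustration; the macros are the existing genbki.h ones):

CATALOG(pg_foo,9999,FooRelationId)
{
    Oid         oid;        /* fixed-width leading fields remain safe to
                             * access as struct fields, so the default rule
                             * can still mark them NOT NULL */
    NameData    fooname;

#ifdef CATALOG_VARLEN       /* variable-length fields start here */
    text        fooconfig BKI_FORCE_NOT_NULL;   /* now needs explicit marking */
    NameData    fooslot BKI_FORCE_NULL;         /* legitimately nullable */
#endif
} FormData_pg_foo;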
[ { "msg_contents": "Hi,\n\nIt appears that when logical decoding sends out the data from the output\nplugin, it is not guaranteed that the decoded transaction's effects are\nvisible on the source server. Is this the way it's supposed to work?\n\nIf so, would doing something like this in the output plugin be reasonable?\n\n TransactionId xid = transaction->xid;\n if (transaction->is_known_as_subxact)\n xid = transaction->toplevel_xid;\n\n if (TransactionIdIsInProgress(xid))\n XactLockTableWait(xid, NULL, NULL, XLTW_None);\n\n\n-marko\n\nHi,It appears that when logical decoding sends out the data from the output plugin, it is not guaranteed that the decoded transaction's effects are visible on the source server.  Is this the way it's supposed to work?If so, would doing something like this in the output plugin be reasonable?    TransactionId xid = transaction->xid;    if (transaction->is_known_as_subxact)        xid = transaction->toplevel_xid;    if (TransactionIdIsInProgress(xid))        XactLockTableWait(xid, NULL, NULL, XLTW_None);-marko", "msg_date": "Mon, 20 Jul 2020 17:27:30 +0300", "msg_from": "Marko Tiikkaja <marko@joh.to>", "msg_from_op": true, "msg_subject": "Local visibility with logical decoding" }, { "msg_contents": "Marko Tiikkaja <marko@joh.to> wrote:\n\n> It appears that when logical decoding sends out the data from the output\n> plugin, it is not guaranteed that the decoded transaction's effects are\n> visible on the source server. Is this the way it's supposed to work?\n\nCan you please share the test that indicates this behavior? As far as I\nunderstand, the transaction must have been committed before the output plugin\nstarts to receive the changes.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 20 Jul 2020 18:38:00 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Local visibility with logical decoding" }, { "msg_contents": "On Mon, Jul 20, 2020 at 7:36 PM Antonin Houska <ah@cybertec.at> wrote:\n\n> Marko Tiikkaja <marko@joh.to> wrote:\n>\n> > It appears that when logical decoding sends out the data from the output\n> > plugin, it is not guaranteed that the decoded transaction's effects are\n> > visible on the source server. Is this the way it's supposed to work?\n>\n> Can you please share the test that indicates this behavior? As far as I\n> understand, the transaction must have been committed before the output\n> plugin\n> starts to receive the changes.\n>\n\nI don't have a reliable test program, but you can reproduce quite easily\nwith test_decoding if you put a breakpoint before the SyncRepWaitForLSN()\ncall in src/backend/access/transam/xact.c. pg_logicalrecv will see the\nchanges while the session is sitting on the breakpoint, and not finishing\nits commit.\n\n\n-marko\n\nOn Mon, Jul 20, 2020 at 7:36 PM Antonin Houska <ah@cybertec.at> wrote:Marko Tiikkaja <marko@joh.to> wrote:\n\n> It appears that when logical decoding sends out the data from the output\n> plugin, it is not guaranteed that the decoded transaction's effects are\n> visible on the source server.  Is this the way it's supposed to work?\n\nCan you please share the test that indicates this behavior? As far as I\nunderstand, the transaction must have been committed before the output plugin\nstarts to receive the changes.I don't have a reliable test program, but you can reproduce quite easily with test_decoding if you put a breakpoint before the SyncRepWaitForLSN() call in src/backend/access/transam/xact.c.  
pg_logicalrecv will see the changes while the session is sitting on the breakpoint, and not finishing its commit.-marko", "msg_date": "Mon, 20 Jul 2020 20:10:55 +0300", "msg_from": "Marko Tiikkaja <marko@joh.to>", "msg_from_op": true, "msg_subject": "Re: Local visibility with logical decoding" }, { "msg_contents": "Hi,\n\nOn 2020-07-20 17:27:30 +0300, Marko Tiikkaja wrote:\n> It appears that when logical decoding sends out the data from the output\n> plugin, it is not guaranteed that the decoded transaction's effects are\n> visible on the source server. Is this the way it's supposed to work?\n\nAt the moment the visibility behaviour is basically the same as crash\nrecovery / standbys. And they just look at the WAL...\n\n\n> If so, would doing something like this in the output plugin be reasonable?\n> \n> TransactionId xid = transaction->xid;\n> if (transaction->is_known_as_subxact)\n> xid = transaction->toplevel_xid;\n> \n> if (TransactionIdIsInProgress(xid))\n> XactLockTableWait(xid, NULL, NULL, XLTW_None);\n\nI'd not be surprised if this had a potential to cause deadlocks.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 20 Jul 2020 13:21:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Local visibility with logical decoding" } ]
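Putting the snippet from the start of this thread into a compilable form, for anyone wanting to experiment with it from an output plugin callback. This is only a sketch of the idea discussed above, not an endorsed solution: as pointed out, blocking inside the walsender on a transaction that may itself be waiting (for example on synchronous replication) can deadlock, and that risk is not addressed here. Field names follow the snippet above and differ slightly across PostgreSQL versions.

#include "postgres.h"

#include "replication/reorderbuffer.h"
#include "storage/lmgr.h"
#include "storage/procarray.h"

/*
 * Wait until the decoded transaction's effects are visible locally.
 * Intended to be called from an output plugin callback; see the deadlock
 * caveat raised in the thread above.
 */
static void
wait_for_local_visibility(ReorderBufferTXN *txn)
{
    TransactionId xid = txn->xid;

    if (txn->is_known_as_subxact)
        xid = txn->toplevel_xid;

    if (TransactionIdIsInProgress(xid))
        XactLockTableWait(xid, NULL, NULL, XLTW_None);
}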
[ { "msg_contents": "Hey all,\n\n*tl;dr: we're looking for an easy way to ask if a tuple is frozen from\nwithin a SQL query*\n\nWe're trying to build a validation process around our CCD, in an attempt to\nvalidate that all data inside of Postgres has made it into our secondary\nstore.\n\nOur plan is to build a small incremental process around daily snapshots of\nthe database, scanning each table with something like:\n\n-- $1: xid of transaction that occurred just before the previous day\n-- TODO: Handle wraparound, defend against vacuum min frozen age, etc\nselect id from table where xmin > $1 and not frozen(tid);\n\nWe're hoping this can reliably detect new and modified tuples, and do it\nquickly, by sequentially scanning the table.\n\nSo we hit the question: how can we identify if a tuple is frozen? I know\nthe tuple has both committed and aborted hint bits set, but accessing those\nbits seems to require superuser functions and are unlikely to be that fast.\n\nAre there system columns (similar to xmin, tid, cid) that we don't know\nabout?\n\nGiven this context, are we trying to do something you would think is a bad\nidea?\n\nThanks,\nLawrence\n\nHey all,tl;dr: we're looking for an easy way to ask if a tuple is frozen from within a SQL queryWe're trying to build a validation process around our CCD, in an attempt to validate that all data inside of Postgres has made it into our secondary store.Our plan is to build a small incremental process around daily snapshots of the database, scanning each table with something like:-- $1: xid of transaction that occurred just before the previous day-- TODO: Handle wraparound, defend against vacuum min frozen age, etcselect id from table where xmin > $1 and not frozen(tid);We're hoping this can reliably detect new and modified tuples, and do it quickly, by sequentially scanning the table.So we hit the question: how can we identify if a tuple is frozen? I know the tuple has both committed and aborted hint bits set, but accessing those bits seems to require superuser functions and are unlikely to be that fast.Are there system columns (similar to xmin, tid, cid) that we don't know about?Given this context, are we trying to do something you would think is a bad idea?Thanks,Lawrence", "msg_date": "Mon, 20 Jul 2020 16:21:55 +0100", "msg_from": "Lawrence Jones <lawrence@gocardless.com>", "msg_from_op": true, "msg_subject": "Postgres-native method to identify if a tuple is frozen" }, { "msg_contents": "On Mon, Jul 20, 2020 at 9:07 PM Lawrence Jones <lawrence@gocardless.com> wrote:\n>\n>\n> So we hit the question: how can we identify if a tuple is frozen? 
I know the tuple has both committed and aborted hint bits set, but accessing those bits seems to require superuser functions and are unlikely to be that fast.\n>\n> Are there system columns (similar to xmin, tid, cid) that we don't know about?\n>\n\nI think the way to get that information is to use pageinspect\nextension and use some query like below but you are right that you\nneed superuser privilege for that:\n\nSELECT t_ctid, raw_flags, combined_flags\n FROM heap_page_items(get_raw_page('pg_class', 0)),\n LATERAL heap_tuple_infomask_flags(t_infomask, t_infomask2)\n WHERE t_infomask IS NOT NULL OR t_infomask2 IS NOT NULL;\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Jul 2020 17:52:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres-native method to identify if a tuple is frozen" }, { "msg_contents": "Thanks for the help. I'd seen the heap_page_items functions, but wanted to\navoid the superuser requirement and wondered if this was going to be a\nperformant method of finding the freeze column (we're scanning some\nbillions of rows).\n\nFwiw, we think we'll probably go with a tiny extension that exposes the\nfrozen state exactly. For reference, this is the basic sketch:\n\nDatum\nfrozen(PG_FUNCTION_ARGS)\n{\nOid reloid = PG_GETARG_OID(0);\nItemPointer tid = PG_GETARG_ITEMPOINTER(1);\nRelation rel;\nHeapTupleData tuple;\nBuffer buf;\nint result;\n// Open table and snapshot- ensuring we later close them\nrel = heap_open(reloid, AccessShareLock);\n// Initialise the tuple data with a tid that matches our input\nItemPointerCopy(tid, &(tuple.t_self));\n#if PG_MAJOR < 12\nif (!heap_fetch(rel, SnapshotAny, &tuple, &buf, true, NULL))\n#else\nif (!heap_fetch(rel, SnapshotAny, &tuple, &buf))\n#endif\n{\nresult = 3;\n}\nelse\n{\nresult = HeapTupleHeaderXminFrozen(tuple.t_data);\n}\n// Close any opened resources here\nheap_close(rel, AccessShareLock);\nReleaseBuffer(buf);\nPG_RETURN_INT32(result);\n}\n\nOn Tue, 21 Jul 2020 at 13:22, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Jul 20, 2020 at 9:07 PM Lawrence Jones <lawrence@gocardless.com>\n> wrote:\n> >\n> >\n> > So we hit the question: how can we identify if a tuple is frozen? I know\n> the tuple has both committed and aborted hint bits set, but accessing those\n> bits seems to require superuser functions and are unlikely to be that fast.\n> >\n> > Are there system columns (similar to xmin, tid, cid) that we don't know\n> about?\n> >\n>\n> I think the way to get that information is to use pageinspect\n> extension and use some query like below but you are right that you\n> need superuser privilege for that:\n>\n> SELECT t_ctid, raw_flags, combined_flags\n> FROM heap_page_items(get_raw_page('pg_class', 0)),\n> LATERAL heap_tuple_infomask_flags(t_infomask, t_infomask2)\n> WHERE t_infomask IS NOT NULL OR t_infomask2 IS NOT NULL;\n>\n> --\n> With Regards,\n> Amit Kapila.\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\nThanks for the help. I'd seen the heap_page_items functions, but wanted to avoid the superuser requirement and wondered if this was going to be a performant method of finding the freeze column (we're scanning some billions of rows).Fwiw, we think we'll probably go with a tiny extension that exposes the frozen state exactly. 
For reference, this is the basic sketch:Datumfrozen(PG_FUNCTION_ARGS){\tOid\treloid = PG_GETARG_OID(0); ItemPointer tid = PG_GETARG_ITEMPOINTER(1);\n Relation rel; HeapTupleData tuple;\tBuffer\t\t buf; int\t\t\t result;\n // Open table and snapshot- ensuring we later close them rel = heap_open(reloid, AccessShareLock);\n // Initialise the tuple data with a tid that matches our input ItemPointerCopy(tid, &(tuple.t_self));#if PG_MAJOR < 12 if (!heap_fetch(rel, SnapshotAny, &tuple, &buf, true, NULL))#else if (!heap_fetch(rel, SnapshotAny, &tuple, &buf))#endif { result = 3; } else { result = HeapTupleHeaderXminFrozen(tuple.t_data);\t}\n // Close any opened resources here heap_close(rel, AccessShareLock); ReleaseBuffer(buf);\n PG_RETURN_INT32(result);}\n\nOn Tue, 21 Jul 2020 at 13:22, Amit Kapila <amit.kapila16@gmail.com> wrote:On Mon, Jul 20, 2020 at 9:07 PM Lawrence Jones <lawrence@gocardless.com> wrote:\n>\n>\n> So we hit the question: how can we identify if a tuple is frozen? I know the tuple has both committed and aborted hint bits set, but accessing those bits seems to require superuser functions and are unlikely to be that fast.\n>\n> Are there system columns (similar to xmin, tid, cid) that we don't know about?\n>\n\nI think the way to get that information is to use pageinspect\nextension and use some query like below but you are right that you\nneed superuser privilege for that:\n\nSELECT t_ctid, raw_flags, combined_flags\n         FROM heap_page_items(get_raw_page('pg_class', 0)),\n           LATERAL heap_tuple_infomask_flags(t_infomask, t_infomask2)\n         WHERE t_infomask IS NOT NULL OR t_infomask2 IS NOT NULL;\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 27 Jul 2020 08:08:24 +0100", "msg_from": "Lawrence Jones <lawrence@gocardless.com>", "msg_from_op": true, "msg_subject": "Re: Postgres-native method to identify if a tuple is frozen" } ]
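To build the sketch above as a loadable extension it still needs the usual module boilerplate and headers, roughly as below. Everything here (the include list, the SQL declaration, the function naming) is an assumption for illustration rather than part of the code posted above; note also that the posted sketch uses the pre-v12 heap_open()/heap_close() names, which later became table_open()/table_close().

#include "postgres.h"

#include "access/heapam.h"          /* heap_fetch() */
#include "access/htup_details.h"    /* HeapTupleHeaderXminFrozen() */
#include "fmgr.h"
#include "storage/bufmgr.h"         /* ReleaseBuffer() */
#include "utils/snapmgr.h"          /* SnapshotAny */

PG_MODULE_MAGIC;

/* the C function from the sketch above */
PG_FUNCTION_INFO_V1(frozen);

/*
 * Matching SQL-level declaration (shown as a comment, since this file is C):
 *
 *   CREATE FUNCTION frozen(rel regclass, tid tid) RETURNS integer
 *     AS 'MODULE_PATHNAME', 'frozen'
 *     LANGUAGE C STRICT;
 */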
[ { "msg_contents": "Hi,\n\nI am currently exploring the pg_start_backup() and pg_stop_backup() functions.\n\nIn the documentation (https://www.postgresql.org/docs/9.0/functions-admin.html), it is stated that after calling pg_stop_backup() Postgres switches to the new WAL segment file. But it doesn’t say the same for pg_start_backup().\n\nHowever, I found the following comment regarding pg_start_backup() in the source code:\n\nExcerpt from Postgres source code https://doxygen.postgresql.org/xlog_8c_source.html#l10595\n\n* Force an XLOG file switch before the checkpoint, to ensure that the\n* WAL segment the checkpoint is written to doesn't contain pages with\n* old timeline IDs. That would otherwise happen if you called\n* pg_start_backup() right after restoring from a PITR archive: the\n* first WAL segment containing the startup checkpoint has pages in\n* the beginning with the old timeline ID. That can cause trouble at\n* recovery: we won't have a history file covering the old timeline if\n* pg_wal directory was not included in the base backup and the WAL\n* archive was cleared too before starting the backup.\n\nSo does it mean that Postgres always switches to the new WAL segment file on pg_start_backup() call too?\n\nIf so, as I understood, the newly created WAL segment file should start from the checkpoint and should not contain any WAL records regarding the events that happened before pg_start_backup() call?\n\nThanks,\nDaniil Zakhlystov\nHi,I am currently exploring the pg_start_backup() and pg_stop_backup() functions.In the documentation (https://www.postgresql.org/docs/9.0/functions-admin.html), it is stated that after calling pg_stop_backup() Postgres switches to the new WAL segment file. But it doesn’t say the same for pg_start_backup().However, I found the following comment regarding pg_start_backup() in the source code:Excerpt from Postgres source code https://doxygen.postgresql.org/xlog_8c_source.html#l10595* Force an XLOG file switch before the checkpoint, to ensure that the* WAL segment the checkpoint is written to doesn't contain pages with* old timeline IDs.  That would otherwise happen if you called* pg_start_backup() right after restoring from a PITR archive: the* first WAL segment containing the startup checkpoint has pages in* the beginning with the old timeline ID.  That can cause trouble at* recovery: we won't have a history file covering the old timeline if* pg_wal directory was not included in the base backup and the WAL* archive was cleared too before starting the backup.So does it mean that Postgres always switches to the new WAL segment file on pg_start_backup() call too?If so, as I understood, the newly created WAL segment file should start from the checkpoint and should not contain any WAL records regarding the events that happened before pg_start_backup() call?Thanks,Daniil Zakhlystov", "msg_date": "Tue, 21 Jul 2020 07:51:56 +0000", "msg_from": "\"@usernamedt\" <usernamedt@protonmail.ch>", "msg_from_op": true, "msg_subject": "WAL segment switch on pg_start_backup()" } ]
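One way to answer the question empirically is to compare the current WAL file name just before and just after the call. A hedged libpq sketch follows; it assumes connection settings come from the usual PG* environment variables and uses the (deprecated) exclusive form of pg_start_backup() purely to keep the example short:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <libpq-fe.h>

/* run a single-value query and return a malloc'd copy of the result */
static char *
scalar(PGconn *conn, const char *sql)
{
    PGresult   *res = PQexec(conn, sql);
    char       *val;

    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "%s: %s", sql, PQerrorMessage(conn));
        exit(1);
    }
    val = strdup(PQgetvalue(res, 0, 0));
    PQclear(res);
    return val;
}

int
main(void)
{
    PGconn     *conn = PQconnectdb("");
    char       *before;
    char       *after;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    before = scalar(conn, "SELECT pg_walfile_name(pg_current_wal_lsn())");
    free(scalar(conn, "SELECT pg_start_backup('segment-switch-test')"));
    after = scalar(conn, "SELECT pg_walfile_name(pg_current_wal_lsn())");

    /* if pg_start_backup() forced a segment switch, these should differ */
    printf("before pg_start_backup: %s\nafter  pg_start_backup: %s\n",
           before, after);

    free(scalar(conn, "SELECT pg_stop_backup()"));
    PQfinish(conn);
    return 0;
}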
[ { "msg_contents": "While poking around our crypto code, I noticed that a comment in sha2.h was\nreferencing sha-1 which is an algorithm not supported by the code. The\nattached fixes the comment aligning it with other comments in the file.\n\ncheers ./daniel", "msg_date": "Tue, 21 Jul 2020 13:57:11 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Comment referencing incorrect algorithm" }, { "msg_contents": "On Tue, Jul 21, 2020 at 01:57:11PM +0200, Daniel Gustafsson wrote:\n> While poking around our crypto code, I noticed that a comment in sha2.h was\n> referencing sha-1 which is an algorithm not supported by the code. The\n> attached fixes the comment aligning it with other comments in the file.\n\nThanks, fixed. The style of the surroundings is to not use an hyphen,\nso fine by me to stick with that.\n--\nMichael", "msg_date": "Wed, 22 Jul 2020 10:19:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Comment referencing incorrect algorithm" } ]
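For context, the algorithms the file does implement are SHA-224/256/384/512, exposed through a small init/update/final API. A hedged usage sketch is below (frontend-style include shown; equivalent functions exist for backend code, and the exact header layout is assumed from the v13-era tree):

#include "postgres_fe.h"

#include "common/sha2.h"

/* compute the SHA-256 digest of a NUL-terminated string */
static void
sha256_of(const char *msg, uint8 digest[PG_SHA256_DIGEST_LENGTH])
{
    pg_sha256_ctx ctx;

    pg_sha256_init(&ctx);
    pg_sha256_update(&ctx, (const uint8 *) msg, strlen(msg));
    pg_sha256_final(&ctx, digest);
}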
[ { "msg_contents": "After forking we call RAND_cleanup in fork_process.c to force a re-seed to\nensure that two backends cannot share sequence. OpenSSL 1.1.0 deprecated\nRAND_cleanup, and contrary to how they usually leave deprecated APIs working\nuntil removed, they decided to silently make this call a noop like below:\n\n# define RAND_cleanup() while(0) continue\n\nThis leaves our defence against pool sharing seemingly useless, and also\nagainst the recommendations of OpenSSL for versions > 1.1.0 and < 1.1.1 where\nthe RNG was rewritten:\n\n https://wiki.openssl.org/index.php/Random_fork-safety\n\nThe silver lining here is that while OpenSSL nooped RAND_cleanup, they also\nchanged what is mixed into seeding so we are still not sharing a sequence. To\nfix this, changing the RAND_cleanup call to RAND_poll should be enough to\nensure re-seeding after forking across all supported OpenSSL versions. Patch\n0001 implements this along with a comment referencing when it can be removed\n(which most likely won't be for quite some time).\n\nAnother thing that stood out when reviewing this code is that we optimize for\nRAND_poll failing in pg_strong_random, when we already have RAND_status\nchecking for a sufficiently seeded RNG for us. ISTM that we can simplify the\ncode by letting RAND_status do the work as per 0002, and also (while unlikely)\nsurvive any transient failures in RAND_poll by allowing all the retries we've\ndefined for the loop.\n\nAlso, as a disclaimer, this was brought up with the PostgreSQL security team\nfirst whom have given permission to discuss this in public.\n\nThoughts on these?\n\ncheers ./daniel\n\n\n\n\n\n\n--\nVMware", "msg_date": "Tue, 21 Jul 2020 14:13:32 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "OpenSSL randomness seeding" }, { "msg_contents": "On 7/21/20 8:13 AM, Daniel Gustafsson wrote:\n> After forking we call RAND_cleanup in fork_process.c to force a re-seed to\n> ensure that two backends cannot share sequence. OpenSSL 1.1.0 deprecated\n> RAND_cleanup, and contrary to how they usually leave deprecated APIs working\n> until removed, they decided to silently make this call a noop like below:\n> \n> # define RAND_cleanup() while(0) continue\n> \n> This leaves our defence against pool sharing seemingly useless, and also\n> against the recommendations of OpenSSL for versions > 1.1.0 and < 1.1.1 where\n> the RNG was rewritten:\n> \n> https://wiki.openssl.org/index.php/Random_fork-safety\n> \n> The silver lining here is that while OpenSSL nooped RAND_cleanup, they also\n> changed what is mixed into seeding so we are still not sharing a sequence. To\n> fix this, changing the RAND_cleanup call to RAND_poll should be enough to\n> ensure re-seeding after forking across all supported OpenSSL versions. Patch\n> 0001 implements this along with a comment referencing when it can be removed\n> (which most likely won't be for quite some time).\n\nThis looks reasonable to me based on your explanation and the OpenSSL wiki.\n\n> Another thing that stood out when reviewing this code is that we optimize for\n> RAND_poll failing in pg_strong_random, when we already have RAND_status\n> checking for a sufficiently seeded RNG for us. ISTM that we can simplify the\n> code by letting RAND_status do the work as per 0002, and also (while unlikely)\n> survive any transient failures in RAND_poll by allowing all the retries we've\n> defined for the loop.\n\nI wonder how effective the retries are going to be if they happen \nimmediately. 
However, most of the code paths I followed ended in a hard \nerror when pg_strong_random() failed so it may not hurt to try. I just \nworry that some caller is depending on a faster failure here.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 21 Jul 2020 11:31:03 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: OpenSSL randomness seeding" }, { "msg_contents": "> On 21 Jul 2020, at 17:31, David Steele <david@pgmasters.net> wrote:\n> On 7/21/20 8:13 AM, Daniel Gustafsson wrote:\n\n>> Another thing that stood out when reviewing this code is that we optimize for\n>> RAND_poll failing in pg_strong_random, when we already have RAND_status\n>> checking for a sufficiently seeded RNG for us. ISTM that we can simplify the\n>> code by letting RAND_status do the work as per 0002, and also (while unlikely)\n>> survive any transient failures in RAND_poll by allowing all the retries we've\n>> defined for the loop.\n> \n> I wonder how effective the retries are going to be if they happen immediately. However, most of the code paths I followed ended in a hard error when pg_strong_random() failed so it may not hurt to try. I just worry that some caller is depending on a faster failure here.\n\nThere is that, but I'm not convinced that relying on specific timing for\nanything RNG or similarly cryptographic-related is especially sane.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 21 Jul 2020 21:44:58 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL randomness seeding" }, { "msg_contents": "On 7/21/20 3:44 PM, Daniel Gustafsson wrote:\n>> On 21 Jul 2020, at 17:31, David Steele <david@pgmasters.net> wrote:\n>> On 7/21/20 8:13 AM, Daniel Gustafsson wrote:\n> \n>>> Another thing that stood out when reviewing this code is that we optimize for\n>>> RAND_poll failing in pg_strong_random, when we already have RAND_status\n>>> checking for a sufficiently seeded RNG for us. ISTM that we can simplify the\n>>> code by letting RAND_status do the work as per 0002, and also (while unlikely)\n>>> survive any transient failures in RAND_poll by allowing all the retries we've\n>>> defined for the loop.\n>>\n>> I wonder how effective the retries are going to be if they happen immediately. However, most of the code paths I followed ended in a hard error when pg_strong_random() failed so it may not hurt to try. I just worry that some caller is depending on a faster failure here.\n> \n> There is that, but I'm not convinced that relying on specific timing for\n> anything RNG or similarly cryptographic-related is especially sane.\n\nI wasn't thinking specific timing -- just that the caller might be \nexpecting it to give up quickly if it doesn't work. 
That's what the code \nis trying to do and I wonder if there is a reason for it.\n\nBut you are probably correct and I'm just overthinking it.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 21 Jul 2020 16:00:42 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: OpenSSL randomness seeding" }, { "msg_contents": "> On 21 Jul 2020, at 22:00, David Steele <david@pgmasters.net> wrote:\n> \n> On 7/21/20 3:44 PM, Daniel Gustafsson wrote:\n>>> On 21 Jul 2020, at 17:31, David Steele <david@pgmasters.net> wrote:\n>>> On 7/21/20 8:13 AM, Daniel Gustafsson wrote:\n>>>> Another thing that stood out when reviewing this code is that we optimize for\n>>>> RAND_poll failing in pg_strong_random, when we already have RAND_status\n>>>> checking for a sufficiently seeded RNG for us. ISTM that we can simplify the\n>>>> code by letting RAND_status do the work as per 0002, and also (while unlikely)\n>>>> survive any transient failures in RAND_poll by allowing all the retries we've\n>>>> defined for the loop.\n>>> \n>>> I wonder how effective the retries are going to be if they happen immediately. However, most of the code paths I followed ended in a hard error when pg_strong_random() failed so it may not hurt to try. I just worry that some caller is depending on a faster failure here.\n>> There is that, but I'm not convinced that relying on specific timing for\n>> anything RNG or similarly cryptographic-related is especially sane.\n> \n> I wasn't thinking specific timing -- just that the caller might be expecting it to give up quickly if it doesn't work. That's what the code is trying to do and I wonder if there is a reason for it.\n\nI think the original intention was to handle older OpenSSL versions where\nmultiple successful RAND_poll calls were required for RAND_status to succeed,\nthe check working as an optimization since a failing RAND_poll would render all\nefforts useless anyway. I'm not sure this is true for the OpenSSL versions we\nsupport in HEAD, and/or for modern platforms, but without proof of it not being\nuseful I would opt for keeping it.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 21 Jul 2020 22:36:53 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL randomness seeding" }, { "msg_contents": "On Tue, Jul 21, 2020 at 10:36:53PM +0200, Daniel Gustafsson wrote:\n> I think the original intention was to handle older OpenSSL versions where\n> multiple successful RAND_poll calls were required for RAND_status to succeed,\n> the check working as an optimization since a failing RAND_poll would render all\n> efforts useless anyway. I'm not sure this is true for the OpenSSL versions we\n> support in HEAD, and/or for modern platforms, but without proof of it not being\n> useful I would opt for keeping it.\n\nYeah, the retry loop refers to this part of the past discussion on the\nmatter:\nhttps://www.postgresql.org/message-id/CAEZATCWYs6rAp36VKm4W7Sb3EF_7tNcRuhcnJC1P8=8W9nBm9w@mail.gmail.com\n\nDuring the rewrite of the RNG engines, there was also a retry logic\nintroduced in 75e2c87, then removed in c16de9d for 1.1.1. In short,\nwe may be able to live without that once we cut more support for\nOpenSSL versions (minimum version support of 1.1.1 is a couple of\nyears ahead at least for us), but I see no reasons to not leave that\nin place either. And this visibly solved one problem for us. 
I don't\nsee either a reason to not simplify the loop to fall back to\nRAND_status() for the validation.\n\nIn short, the proposed patch set looks like a good idea to me to stick\nwith the recommendations of upstream's wiki to use RAND_poll() after a\nfork, but only do that on HEAD (OpenSSL 1.1.0 mixes the current\ntimestamp and the PID in the random seed of the default engine, 1.0.2\nonly the PID but RAND_cleanup is a no-op only after 1.1.0).\n--\nMichael", "msg_date": "Wed, 22 Jul 2020 10:45:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL randomness seeding" }, { "msg_contents": "On Tue, Jul 21, 2020 at 02:13:32PM +0200, Daniel Gustafsson wrote:\n> The silver lining here is that while OpenSSL nooped RAND_cleanup, they also\n> changed what is mixed into seeding so we are still not sharing a sequence. To\n> fix this, changing the RAND_cleanup call to RAND_poll should be enough to\n> ensure re-seeding after forking across all supported OpenSSL versions. Patch\n> 0001 implements this along with a comment referencing when it can be removed\n> (which most likely won't be for quite some time).\n> \n> Another thing that stood out when reviewing this code is that we optimize for\n> RAND_poll failing in pg_strong_random, when we already have RAND_status\n> checking for a sufficiently seeded RNG for us. ISTM that we can simplify the\n> code by letting RAND_status do the work as per 0002, and also (while unlikely)\n> survive any transient failures in RAND_poll by allowing all the retries we've\n> defined for the loop.\n\n> Thoughts on these?\n\nThese look good. I'll push them on Saturday or later. I wondered whether to\ndo both RAND_cleanup() and RAND_poll(), to purge all traces of the old seed on\nversions supporting both. Since that would strictly (albeit negligibly)\nincrease seed predictability, I like your decision here.\n\nDo you happen to know how OpenSSL 1.1.1 changed its reaction to forks in order\nto make the RAND_poll() superfluous? (No need to research it if you don't.)\n\n\n", "msg_date": "Tue, 21 Jul 2020 22:00:20 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL randomness seeding" }, { "msg_contents": "On Tue, Jul 21, 2020 at 10:00:20PM -0700, Noah Misch wrote:\n> These look good. I'll push them on Saturday or later. I wondered whether to\n> do both RAND_cleanup() and RAND_poll(), to purge all traces of the old seed on\n> versions supporting both. Since that would strictly (albeit negligibly)\n> increase seed predictability, I like your decision here.\n\nThanks Noah for taking care of it. No plans for a backpatch, right?\n--\nMichael", "msg_date": "Wed, 22 Jul 2020 14:35:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL randomness seeding" }, { "msg_contents": "> On 22 Jul 2020, at 07:00, Noah Misch <noah@leadboat.com> wrote:\n> \n> On Tue, Jul 21, 2020 at 02:13:32PM +0200, Daniel Gustafsson wrote:\n>> The silver lining here is that while OpenSSL nooped RAND_cleanup, they also\n>> changed what is mixed into seeding so we are still not sharing a sequence. To\n>> fix this, changing the RAND_cleanup call to RAND_poll should be enough to\n>> ensure re-seeding after forking across all supported OpenSSL versions. 
Patch\n>> 0001 implements this along with a comment referencing when it can be removed\n>> (which most likely won't be for quite some time).\n>> \n>> Another thing that stood out when reviewing this code is that we optimize for\n>> RAND_poll failing in pg_strong_random, when we already have RAND_status\n>> checking for a sufficiently seeded RNG for us. ISTM that we can simplify the\n>> code by letting RAND_status do the work as per 0002, and also (while unlikely)\n>> survive any transient failures in RAND_poll by allowing all the retries we've\n>> defined for the loop.\n> \n>> Thoughts on these?\n> \n> These look good. I'll push them on Saturday or later.\n\nThanks for picking it up!\n\n> I wondered whether to\n> do both RAND_cleanup() and RAND_poll(), to purge all traces of the old seed on\n> versions supporting both. Since that would strictly (albeit negligibly)\n> increase seed predictability, I like your decision here.\n\nThat's a good question. I believe that if one actually do use RAND_cleanup as\na re-seeding mechanism then that can break FIPS enabled OpenSSL installations\nas RAND_cleanup resets the RNG method from the FIPS RNG to the built-in one. I\nwould be inclined to follow the upstream recommendations of using RAND_poll\nexclusively, but I'm far from an expert here.\n\n> Do you happen to know how OpenSSL 1.1.1 changed its reaction to forks in order\n> to make the RAND_poll() superfluous? (No need to research it if you don't.)\n\nI'm not entirely sure, but I do believe that 1.1.1 ported over the RNG from the\nFIPS module which re-seeds itself with fork() protection. There was however a\nbug, fixed in 1.1.1d or thereabouts as CVE-2019-1549, where the fork protection\nwasn't activated by default.. so there is that. Since that bug was found,\nthere has been tests introduced to catch any regression on that which is\ncomforting.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 22 Jul 2020 23:31:38 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL randomness seeding" }, { "msg_contents": "On Wed, Jul 22, 2020 at 11:31:38PM +0200, Daniel Gustafsson wrote:\n> Thanks for picking it up!\n\nFor the archives, the patch set has been applied as ce4939f and\n15e4419 on HEAD. Thanks, Noah.\n\n> That's a good question. I believe that if one actually do use RAND_cleanup as\n> a re-seeding mechanism then that can break FIPS enabled OpenSSL installations\n> as RAND_cleanup resets the RNG method from the FIPS RNG to the built-in one. I\n> would be inclined to follow the upstream recommendations of using RAND_poll\n> exclusively, but I'm far from an expert here.\n\nRAND_cleanup() can cause a failure telling that the RNG state is not\ninitialized when attempting to use FIPS in 1.0.2. This is not\nofficially supported by upstream AFAIK, and those APIs have been\ndropped later in 1.1.0. And FWIW, VMware's Photon actually applies\nsome custom patches in this area:\nhttps://github.com/vmware/photon/tree/master/SPECS/openssl\n\nopenssl-drbg-default-read-system-fips.patch is used to enforce the\ninitialization state of FIPS for example. Anyway, I would just stick\nwith the wiki recommendation.\n\n>> Do you happen to know how OpenSSL 1.1.1 changed its reaction to forks in order\n>> to make the RAND_poll() superfluous? (No need to research it if you don't.)\n> \n> I'm not entirely sure, but I do believe that 1.1.1 ported over the RNG from the\n> FIPS module which re-seeds itself with fork() protection. 
There was however a\n> bug, fixed in 1.1.1d or thereabouts as CVE-2019-1549, where the fork protection\n> wasn't activated by default.. so there is that. Since that bug was found,\n> there has been tests introduced to catch any regression on that which is\n> comforting.\n\nNo idea about this one actually.\n--\nMichael", "msg_date": "Sun, 26 Jul 2020 16:06:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL randomness seeding" }, { "msg_contents": "> On 26 Jul 2020, at 09:06, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Jul 22, 2020 at 11:31:38PM +0200, Daniel Gustafsson wrote:\n>> Thanks for picking it up!\n> \n> For the archives, the patch set has been applied as ce4939f and\n> 15e4419 on HEAD. Thanks, Noah.\n\nIndeed, thanks!\n\n>>> Do you happen to know how OpenSSL 1.1.1 changed its reaction to forks in order\n>>> to make the RAND_poll() superfluous? (No need to research it if you don't.)\n>> \n>> I'm not entirely sure, but I do believe that 1.1.1 ported over the RNG from the\n>> FIPS module which re-seeds itself with fork() protection. There was however a\n>> bug, fixed in 1.1.1d or thereabouts as CVE-2019-1549, where the fork protection\n>> wasn't activated by default.. so there is that. Since that bug was found,\n>> there has been tests introduced to catch any regression on that which is\n>> comforting.\n> \n> No idea about this one actually.\n\nI did some more reading and AFAICT it won't be required in 1.1.1+, but it also\nwon't cause any harm so unless evidence of the latter emerge we may just as\nwell leave it as an extra safeguard.\n\nSomewhat on topic though, 1.1.1 adds a RAND_priv_bytes function for random\nnumbers that are supposed to be private and extra protected via it's own DRBG.\nMaybe we should use that for SCRAM salts etc in case we detect 1.1.1?\n\ncheers ./daniel\n\n", "msg_date": "Thu, 30 Jul 2020 23:42:16 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL randomness seeding" }, { "msg_contents": "On Thu, Jul 30, 2020 at 11:42:16PM +0200, Daniel Gustafsson wrote:\n> Somewhat on topic though, 1.1.1 adds a RAND_priv_bytes function for random\n> numbers that are supposed to be private and extra protected via it's own DRBG.\n> Maybe we should use that for SCRAM salts etc in case we detect 1.1.1?\n\nMaybe. Would you have a separate pg_private_random() function, or just use\nRAND_priv_bytes() for pg_strong_random()? No pg_strong_random() caller is\nclearly disinterested in privacy; gen_random_uuid() may come closest.\n\n\n", "msg_date": "Sat, 1 Aug 2020 23:48:23 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL randomness seeding" }, { "msg_contents": "On Sat, Aug 01, 2020 at 11:48:23PM -0700, Noah Misch wrote:\n> On Thu, Jul 30, 2020 at 11:42:16PM +0200, Daniel Gustafsson wrote:\n>> Somewhat on topic though, 1.1.1 adds a RAND_priv_bytes function for random\n>> numbers that are supposed to be private and extra protected via it's own DRBG.\n>> Maybe we should use that for SCRAM salts etc in case we detect 1.1.1?\n> \n> Maybe. Would you have a separate pg_private_random() function, or just use\n> RAND_priv_bytes() for pg_strong_random()? 
No pg_strong_random() caller is\n> clearly disinterested in privacy; gen_random_uuid() may come closest.\n\nFWIW, I am not sure that we need extra level of complexity when it\ncomes to random number generation, so having only one API to rule them\nall sounds sensible to me, particularly if we know that the API used\nhas more private protections.\n--\nMichael", "msg_date": "Sun, 2 Aug 2020 16:05:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL randomness seeding" }, { "msg_contents": "> On 2 Aug 2020, at 09:05, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Sat, Aug 01, 2020 at 11:48:23PM -0700, Noah Misch wrote:\n>> On Thu, Jul 30, 2020 at 11:42:16PM +0200, Daniel Gustafsson wrote:\n>>> Somewhat on topic though, 1.1.1 adds a RAND_priv_bytes function for random\n>>> numbers that are supposed to be private and extra protected via it's own DRBG.\n>>> Maybe we should use that for SCRAM salts etc in case we detect 1.1.1?\n>> \n>> Maybe. Would you have a separate pg_private_random() function, or just use\n>> RAND_priv_bytes() for pg_strong_random()? No pg_strong_random() caller is\n>> clearly disinterested in privacy; gen_random_uuid() may come closest.\n> \n> FWIW, I am not sure that we need extra level of complexity when it\n> comes to random number generation, so having only one API to rule them\n> all sounds sensible to me, particularly if we know that the API used\n> has more private protections.\n\nI would agree with that, especially since we might not be able to provide an\nequivalent implementation of a pg_private_random() function in non-OpenSSL\nbuilds.\n\nWill do a bit more reading and poking and post a patch.\n\ncheers ./daniel\n\n", "msg_date": "Sun, 2 Aug 2020 23:24:56 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL randomness seeding" } ]
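
For reference, the seeding behaviour discussed in this thread boils down to the pattern below. This is a minimal sketch, not the committed pg_strong_random() code: the function name and the retry constant are made up for illustration, while RAND_status(), RAND_poll() and RAND_bytes() are the OpenSSL calls the thread talks about. On OpenSSL releases older than 1.1.1 the same RAND_poll() call is also what a forked backend can use to stop reusing its parent's seed, which is the RAND_cleanup() versus RAND_poll() question above.

#include <stdbool.h>
#include <stddef.h>
#include <openssl/rand.h>

#define RAND_POLL_RETRIES 8		/* illustrative bound, not the real constant */

static bool
strong_random_sketch(void *buf, size_t len)
{
	/*
	 * Let RAND_status() decide whether the generator is already seeded, and
	 * only call RAND_poll() when it is not, retrying a bounded number of
	 * times so a transient RAND_poll() failure does not abort the attempt.
	 */
	for (int i = 0; i < RAND_POLL_RETRIES; i++)
	{
		if (RAND_status() == 1)
			break;
		RAND_poll();
	}

	/* RAND_bytes() returns 1 on success */
	return RAND_bytes((unsigned char *) buf, (int) len) == 1;
}
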
[ { "msg_contents": "I don't quite understand this part of the comment of the xl_heap_header\nstructure:\n\n * NOTE: t_hoff could be recomputed, but we may as well store it because\n * it will come for free due to alignment considerations.\n\nWhat are the alignment considerations? The WAL code does not appear to assume\nany alignment, and therefore it uses memcpy() to copy the structure into a\nlocal variable before accessing its fields. For example, heap_xlog_insert().\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 21 Jul 2020 19:45:37 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "xl_heap_header alignment?" }, { "msg_contents": "Hi, \n\nOn July 21, 2020 10:45:37 AM PDT, Antonin Houska <ah@cybertec.at> wrote:\n>I don't quite understand this part of the comment of the xl_heap_header\n>structure:\n>\n>* NOTE: t_hoff could be recomputed, but we may as well store it because\n> * it will come for free due to alignment considerations.\n>\n>What are the alignment considerations? The WAL code does not appear to\n>assume\n>any alignment, and therefore it uses memcpy() to copy the structure\n>into a\n>local variable before accessing its fields. For example,\n>heap_xlog_insert().\n\nUnless you declare them as packed, structs will add padding to align members correctly (if, and only if, the whole struct is stored well aligned).\n\nRegards,\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Tue, 21 Jul 2020 11:02:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: xl_heap_header alignment?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On July 21, 2020 10:45:37 AM PDT, Antonin Houska <ah@cybertec.at> wrote:\n>> I don't quite understand this part of the comment of the xl_heap_header\n>> structure:\n>> * NOTE: t_hoff could be recomputed, but we may as well store it because\n>> * it will come for free due to alignment considerations.\n\n> Unless you declare them as packed, structs will add padding to align members correctly (if, and only if, the whole struct is stored well aligned).\n\nI think that comment may be out of date, because what's there now is\n\n * NOTE: t_hoff could be recomputed, but we may as well store it because\n * it will come for free due to alignment considerations.\n */\ntypedef struct xl_heap_header\n{\n\tuint16\t\tt_infomask2;\n\tuint16\t\tt_infomask;\n\tuint8\t\tt_hoff;\n} xl_heap_header;\n\nI find it hard to see how tacking t_hoff onto what would have been a\n4-byte struct is \"free\". Maybe sometime in the dim past there was\nanother field in this struct? (But I checked back as far as 7.4\nwithout finding one.)\n\nI don't particularly want to remove the field, but we ought to\nchange or remove the comment.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Jul 2020 14:33:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: xl_heap_header alignment?" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I don't particularly want to remove the field, but we ought to\n> change or remove the comment.\n\nI'm not concerned about the existence of the field as well. The comment just\nmade me worried that I might be missing some fundamental concept. 
Thanks for\nyour opinion.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 22 Jul 2020 06:58:33 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: xl_heap_header alignment?" }, { "msg_contents": "On Wed, Jul 22, 2020 at 06:58:33AM +0200, Antonin Houska wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > I don't particularly want to remove the field, but we ought to\n> > change or remove the comment.\n> \n> I'm not concerned about the existence of the field as well. The comment just\n> made me worried that I might be missing some fundamental concept. Thanks for\n> your opinion.\n\nI have developed the attached patch to address this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Fri, 21 Aug 2020 20:40:48 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: xl_heap_header alignment?" }, { "msg_contents": "On Fri, Aug 21, 2020 at 5:41 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Jul 22, 2020 at 06:58:33AM +0200, Antonin Houska wrote:\n> > Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > > I don't particularly want to remove the field, but we ought to\n> > > change or remove the comment.\n> >\n> > I'm not concerned about the existence of the field as well. The comment\n> just\n> > made me worried that I might be missing some fundamental concept. Thanks\n> for\n> > your opinion.\n>\n> I have developed the attached patch to address this.\n>\n\nI would suggest either dropping the word \"potentially\" or removing the\nsentence. I'm not a fan of this in-between position on principle even if I\ndon't understand the full reality of the implementation.\n\nIf leaving the word \"potentially\" is necessary it would be good to point\nout where the complexity is documented as a part of that - this header file\nprobably not the best place to go into detail.\n\nDavid J.\n\nOn Fri, Aug 21, 2020 at 5:41 PM Bruce Momjian <bruce@momjian.us> wrote:On Wed, Jul 22, 2020 at 06:58:33AM +0200, Antonin Houska wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > I don't particularly want to remove the field, but we ought to\n> > change or remove the comment.\n> \n> I'm not concerned about the existence of the field as well. The comment just\n> made me worried that I might be missing some fundamental concept. Thanks for\n> your opinion.\n\nI have developed the attached patch to address this.I would suggest either dropping the word \"potentially\" or removing the sentence.  I'm not a fan of this in-between position on principle even if I don't understand the full reality of the implementation.If leaving the word \"potentially\" is necessary it would be good to point out where the complexity is documented as a part of that - this header file probably not the best place to go into detail.David J.", "msg_date": "Fri, 21 Aug 2020 20:07:34 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xl_heap_header alignment?" }, { "msg_contents": "On Fri, Aug 21, 2020 at 08:07:34PM -0700, David G. 
Johnston wrote:\n> On Fri, Aug 21, 2020 at 5:41 PM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Wed, Jul 22, 2020 at 06:58:33AM +0200, Antonin Houska wrote:\n> > Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > > I don't particularly want to remove the field, but we ought to\n> > > change or remove the comment.\n> >\n> > I'm not concerned about the existence of the field as well. The comment\n> just\n> > made me worried that I might be missing some fundamental concept. Thanks\n> for\n> > your opinion.\n> \n> I have developed the attached patch to address this.\n> \n> \n> I would suggest either dropping the word \"potentially\" or removing the\n> sentence.� I'm not a fan of this in-between position on principle even if I\n> don't understand the full reality of the implementation.\n> \n> If leaving the word \"potentially\" is necessary it would be good to point out\n> where the complexity is documented as a part of that - this header file\n> probably�not the best place to go into detail.\n\nUpdated patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Sat, 22 Aug 2020 11:37:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: xl_heap_header alignment?" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Updated patch.\n\nFWIW, I concur with the idea of just dropping that sentence altogether.\nIt's not likely that getting rid of that field is a line of development\nthat will ever be pursued; if anyone does get concerned about cutting\nWAL size, there's a lot of more-valuable directions to go in.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 22 Aug 2020 11:45:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: xl_heap_header alignment?" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Jul 22, 2020 at 06:58:33AM +0200, Antonin Houska wrote:\n> > Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > \n> > > I don't particularly want to remove the field, but we ought to\n> > > change or remove the comment.\n> > \n> > I'm not concerned about the existence of the field as well. The comment just\n> > made me worried that I might be missing some fundamental concept. Thanks for\n> > your opinion.\n> \n> I have developed the attached patch to address this.\n\nThanks. I wasn't sure if I'm expected to send the patch and then I forgot.\n\nIf the comment tells that t_hoff can be computed (i.e. it's no necessary to\ninclude it in the structure), I think the comment should tell why it's yet\nincluded. Maybe something about \"historical reasons\"? Perhaps we can say that\nthe storage used to be free due to padding, and that it's no longer so, but\nit's still \"cheap\", so it's not worth to teach the REDO functions to compute\nthe value.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Sat, 22 Aug 2020 20:48:54 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: xl_heap_header alignment?" }, { "msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> If the comment tells that t_hoff can be computed (i.e. it's no necessary to\n> include it in the structure), I think the comment should tell why it's yet\n> included. Maybe something about \"historical reasons\"? 
Perhaps we can say that\n> the storage used to be free due to padding, and that it's no longer so, but\n> it's still \"cheap\", so it's not worth to teach the REDO functions to compute\n> the value.\n\nI've received some more replies to your email as soon as I had replied. I\ndon't insist on my proposal, just go ahead with your simpler changes.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Sat, 22 Aug 2020 21:00:15 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: xl_heap_header alignment?" }, { "msg_contents": "On Sat, Aug 22, 2020 at 09:00:15PM +0200, Antonin Houska wrote:\n> Antonin Houska <ah@cybertec.at> wrote:\n> \n> > If the comment tells that t_hoff can be computed (i.e. it's no necessary to\n> > include it in the structure), I think the comment should tell why it's yet\n> > included. Maybe something about \"historical reasons\"? Perhaps we can say that\n> > the storage used to be free due to padding, and that it's no longer so, but\n> > it's still \"cheap\", so it's not worth to teach the REDO functions to compute\n> > the value.\n> \n> I've received some more replies to your email as soon as I had replied. I\n> don't insist on my proposal, just go ahead with your simpler changes.\n\nPatch applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 31 Aug 2020 13:58:31 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: xl_heap_header alignment?" } ]
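
The alignment question at the start of this thread is easy to reproduce outside the tree. The sketch below uses plain C99 types instead of the PostgreSQL typedefs and assumes a typical ABI where 2-byte integers have 2-byte alignment; it shows why adding the 1-byte t_hoff to a 4-byte struct is not free, which is the point behind dropping the old comment.

#include <stdio.h>
#include <stdint.h>

/* xl_heap_header as it looks today, with standard types */
typedef struct with_hoff
{
	uint16_t	t_infomask2;
	uint16_t	t_infomask;
	uint8_t		t_hoff;
} with_hoff;

/* the same struct without t_hoff, for comparison */
typedef struct without_hoff
{
	uint16_t	t_infomask2;
	uint16_t	t_infomask;
} without_hoff;

int
main(void)
{
	/*
	 * Typically prints 6 and 4: the struct keeps 2-byte alignment, so the
	 * extra byte costs two bytes of storage once trailing padding is added.
	 */
	printf("with t_hoff:    %zu\n", sizeof(with_hoff));
	printf("without t_hoff: %zu\n", sizeof(without_hoff));
	return 0;
}
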
[ { "msg_contents": "We hit this on v13b2 and verified it fails on today's HEAD (ac25e7b039).\n\nexplain SELECT 1 FROM sites NATURAL JOIN sectors WHERE sites.config_site_name != sectors.sect_name ;\nERROR: could not determine which collation to use for string comparison \n\nI can workaround the issue by DELETEing stats for either column.\n\nIt's possible we're doing soemthing wrong and I need to revisit docs..but this\nwas working in v12.\n\nts=# SELECT * FROM pg_stats WHERE tablename='sites' AND attname='config_site_name'; \n-[ RECORD 1 ]----------+-----------------\nschemaname | public\ntablename | sites\nattname | config_site_name\ninherited | f\nnull_frac | 0\navg_width | 1\nn_distinct | 1\nmost_common_vals | {\"\"}\nmost_common_freqs | {1}\nhistogram_bounds | \ncorrelation | 1\nmost_common_elems | \nmost_common_elem_freqs | \nelem_count_histogram | \n\n#1 0x0000000000ab2993 in errfinish (filename=0xcaae40 \"varlena.c\", lineno=1476, funcname=0xcab7b0 <__func__.18296> \"check_collation_set\") at elog.c:502\n#2 0x0000000000a783ae in check_collation_set (collid=0) at varlena.c:1473\n#3 0x0000000000a78857 in texteq (fcinfo=0x7fff1ecae590) at varlena.c:1740\n#4 0x0000000000a4248c in eqjoinsel_inner (opfuncoid=67, collation=0, vardata1=0x7fff1ecae7a0, vardata2=0x7fff1ecae770, nd1=1, nd2=1, isdefault1=false, isdefault2=false, sslot1=0x7fff1ecae720, \n sslot2=0x7fff1ecae6e0, stats1=0x1a97c00, stats2=0x1a98230, have_mcvs1=true, have_mcvs2=true) at selfuncs.c:2466\n#5 0x0000000000a41f66 in eqjoinsel (fcinfo=0x7fff1ecae8a0) at selfuncs.c:2298\n#6 0x0000000000abb63c in DirectFunctionCall5Coll (func=0xa41caf <eqjoinsel>, collation=0, arg1=28313248, arg2=98, arg3=28315832, arg4=0, arg5=140733710004032) at fmgr.c:908\n#7 0x0000000000a43197 in neqjoinsel (fcinfo=0x7fff1ecaea40) at selfuncs.c:2824\n#8 0x0000000000abc4a0 in FunctionCall5Coll (flinfo=0x7fff1ecaeb00, collation=100, arg1=28313248, arg2=531, arg3=28315832, arg4=0, arg5=140733710004032) at fmgr.c:1245\n#9 0x0000000000abcd1c in OidFunctionCall5Coll (functionId=106, collation=100, arg1=28313248, arg2=531, arg3=28315832, arg4=0, arg5=140733710004032) at fmgr.c:1463\n#10 0x000000000084b2c2 in join_selectivity (root=0x1b006a0, operatorid=531, args=0x1b010b8, inputcollid=100, jointype=JOIN_INNER, sjinfo=0x7fff1ecaef40) at plancat.c:1822\n#11 0x00000000007dba29 in clause_selectivity (root=0x1b006a0, clause=0x1b01168, varRelid=0, jointype=JOIN_INNER, sjinfo=0x7fff1ecaef40) at clausesel.c:765\n#12 0x00000000007dacf4 in clauselist_selectivity_simple (root=0x1b006a0, clauses=0x1b05fe8, varRelid=0, jointype=JOIN_INNER, sjinfo=0x7fff1ecaef40, estimatedclauses=0x0) at clausesel.c:169\n#13 0x00000000007dac33 in clauselist_selectivity (root=0x1b006a0, clauses=0x1b05fe8, varRelid=0, jointype=JOIN_INNER, sjinfo=0x7fff1ecaef40) at clausesel.c:102\n#14 0x00000000007e44e3 in calc_joinrel_size_estimate (root=0x1b006a0, joinrel=0x1b02ce0, outer_rel=0x1afd4f0, inner_rel=0x1b01cf0, outer_rows=311, inner_rows=1047, sjinfo=0x7fff1ecaef40, restrictlist_in=0x1b05de0)\n at costsize.c:4857\n#15 0x00000000007e41eb in set_joinrel_size_estimates (root=0x1b006a0, rel=0x1b02ce0, outer_rel=0x1afd4f0, inner_rel=0x1b01cf0, sjinfo=0x7fff1ecaef40, restrictlist=0x1b05de0) at costsize.c:4712\n#16 0x00000000008507a6 in build_join_rel (root=0x1b006a0, joinrelids=0x1b05c08, outer_rel=0x1afd4f0, inner_rel=0x1b01cf0, sjinfo=0x7fff1ecaef40, restrictlist_ptr=0x7fff1ecaef38) at relnode.c:728\n#17 0x00000000007f5ecb in make_join_rel (root=0x1b006a0, rel1=0x1afd4f0, rel2=0x1b01cf0) at 
joinrels.c:746\n#18 0x00000000007f542e in make_rels_by_clause_joins (root=0x1b006a0, old_rel=0x1afd4f0, other_rels_list=0x1b05d08, other_rels=0x1b05d28) at joinrels.c:312\n#19 0x00000000007f4f04 in join_search_one_level (root=0x1b006a0, level=2) at joinrels.c:123\n#20 0x00000000007d96a5 in standard_join_search (root=0x1b006a0, levels_needed=2, initial_rels=0x1b05d08) at allpaths.c:3097\n#21 0x00000000007d961e in make_rel_from_joinlist (root=0x1b006a0, joinlist=0x1b03b28) at allpaths.c:3028\n#22 0x00000000007d4f82 in make_one_rel (root=0x1b006a0, joinlist=0x1b03b28) at allpaths.c:227\n#23 0x000000000080f835 in query_planner (root=0x1b006a0, qp_callback=0x816525 <standard_qp_callback>, qp_extra=0x7fff1ecaf320) at planmain.c:269\n#24 0x0000000000813406 in grouping_planner (root=0x1b006a0, inheritance_update=false, tuple_fraction=0) at planner.c:2058\n#25 0x00000000008115b7 in subquery_planner (glob=0x1b00588, parse=0x1afdc48, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at planner.c:1015\n#26 0x000000000080fe34 in standard_planner (parse=0x1afdc48, query_string=0x1938e90 \"explain SELECT 1 FROM sites NATURAL JOIN sectors WHERE sites. config_site_name != sectors.sect_name ;\", cursorOptions=256, \n boundParams=0x0) at planner.c:405\n\n\n", "msg_date": "Tue, 21 Jul 2020 14:16:06 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "v13 planner ERROR: could not determine which collation to use for\n string comparison" }, { "msg_contents": "On Tuesday, July 21, 2020, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> We hit this on v13b2 and verified it fails on today's HEAD (ac25e7b039).\n>\n> explain SELECT 1 FROM sites NATURAL JOIN sectors WHERE\n> sites.config_site_name != sectors.sect_name ;\n> ERROR: could not determine which collation to use for string comparison\n>\n> I can workaround the issue by DELETEing stats for either column.\n>\n> It's possible we're doing soemthing wrong and I need to revisit docs..but\n> this\n> was working in v12.\n>\n\nThis sounds suspiciously like a side-effect of:\n\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=022cd0bfd33968f2b004106cfeaa3b2951e7f322\n\nDavid J.\n\nOn Tuesday, July 21, 2020, Justin Pryzby <pryzby@telsasoft.com> wrote:We hit this on v13b2 and verified it fails on today's HEAD (ac25e7b039).\n\nexplain SELECT 1 FROM sites NATURAL JOIN sectors WHERE sites.config_site_name != sectors.sect_name ;\nERROR:  could not determine which collation to use for string comparison \n\nI can workaround the issue by DELETEing stats for either column.\n\nIt's possible we're doing soemthing wrong and I need to revisit docs..but this\nwas working in v12.\nThis sounds suspiciously like a side-effect of: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=022cd0bfd33968f2b004106cfeaa3b2951e7f322David J.", "msg_date": "Tue, 21 Jul 2020 12:34:01 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: v13 planner ERROR: could not determine which collation to use for\n string comparison" }, { "msg_contents": "Reproducer:\n\npostgres=# CREATE TABLE t AS SELECT ''a FROM generate_series(1,99); CREATE TABLE u AS SELECT ''a FROM generate_series(1,99) ; VACUUM ANALYZE t,u;\npostgres=# explain SELECT * FROM t JOIN u ON t.a!=u.a;\nERROR: could not determine which collation to use for string comparison\nHINT: Use the COLLATE clause to set the collation explicitly.\n\n\n", "msg_date": "Tue, 21 Jul 2020 14:57:57 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: v13 planner ERROR: could not determine which collation to use\n for string comparison" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> We hit this on v13b2 and verified it fails on today's HEAD (ac25e7b039).\n> explain SELECT 1 FROM sites NATURAL JOIN sectors WHERE sites.config_site_name != sectors.sect_name ;\n> ERROR: could not determine which collation to use for string comparison \n\n> I can workaround the issue by DELETEing stats for either column.\n\nUgh. It's clear from your stack trace that neqjoinsel() has forgotten to\npass through collation to eqjoinsel(). Will fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Jul 2020 18:25:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v13 planner ERROR: could not determine which collation to use for\n string comparison" }, { "msg_contents": "On Tue, Jul 21, 2020 at 06:25:00PM -0400, Tom Lane wrote:\n> Ugh. It's clear from your stack trace that neqjoinsel() has forgotten to\n> pass through collation to eqjoinsel(). Will fix.\n\nWhy didn't you include a regression test in bd0d893?\n--\nMichael", "msg_date": "Wed, 22 Jul 2020 09:36:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v13 planner ERROR: could not determine which collation to use\n for string comparison" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Jul 21, 2020 at 06:25:00PM -0400, Tom Lane wrote:\n>> Ugh. It's clear from your stack trace that neqjoinsel() has forgotten to\n>> pass through collation to eqjoinsel(). Will fix.\n\n> Why didn't you include a regression test in bd0d893?\n\nDidn't really see much point. It's not like anybody's likely to\ntake out the collation handling now that it's there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Jul 2020 20:43:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v13 planner ERROR: could not determine which collation to use for\n string comparison" } ]
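
For readers following along, the shape of the fix Tom describes is roughly the fragment below. It is only a sketch of the call site inside neqjoinsel(), reconstructed from the stack trace above rather than copied from the actual commit (bd0d893), and the variable names (root, eqop for the negator looked up with get_negator(), args, jointype, sjinfo) are illustrative. The point is simply that the input collation obtained with PG_GET_COLLATION() must be forwarded to eqjoinsel() instead of InvalidOid, because the equality operator's selectivity code may call a collation-sensitive comparison such as texteq() on the MCV lists.

	Oid		collation = PG_GET_COLLATION();

	/*
	 * "<>" join selectivity is estimated as 1 - selectivity of "=", so the
	 * collation has to be passed through to the underlying eqjoinsel() call.
	 */
	result = DatumGetFloat8(DirectFunctionCall5Coll(eqjoinsel,
													collation,	/* was 0 */
													PointerGetDatum(root),
													ObjectIdGetDatum(eqop),
													PointerGetDatum(args),
													Int16GetDatum(jointype),
													PointerGetDatum(sjinfo)));
	result = 1.0 - result;
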
[ { "msg_contents": "Hi hackers.\n\nI tried to create LSM AM which can be used instead of nbtree.\nI looked at contrib/btree/gin, contrib/isn and try to do the following:\n\nCREATE OPERATOR FAMILY lsm3_float_ops USING lsm3;\n\nCREATE OPERATOR CLASS float4_ops DEFAULT\n     FOR TYPE float4 USING lsm3 FAMILY lsm3_float_ops AS\n     OPERATOR 1  <,\n     OPERATOR 2  <=,\n     OPERATOR 3  =,\n     OPERATOR 4  >=,\n     OPERATOR 5  >,\n     FUNCTION 1  btfloat4cmp(float4,float4);\n\nCREATE OPERATOR CLASS float8_ops DEFAULT\n     FOR TYPE float8 USING lsm3 FAMILY lsm3_float_ops AS\n     OPERATOR 1  <,\n     OPERATOR 2  <=,\n     OPERATOR 3  =,\n     OPERATOR 4  >=,\n     OPERATOR 5  >,\n     FUNCTION 1  btfloat8cmp(float8,float8);\n\n\nALTER OPERATOR FAMILY lsm3_float_ops USING lsm3 ADD\n     OPERATOR 1  < (float4,float8),\n     OPERATOR 1  < (float8,float4),\n\n     OPERATOR 2  <= (float4,float8),\n     OPERATOR 2  <= (float8,float4),\n\n     OPERATOR 3  = (float4,float8),\n     OPERATOR 3  = (float8,float4),\n\n     OPERATOR 4  >= (float4,float8),\n     OPERATOR 4  >= (float8,float4),\n\n     OPERATOR 5  > (float4,float8),\n     OPERATOR 5  > (float8,float4),\n\n     FUNCTION 1  btfloat48cmp(float4,float8),\n     FUNCTION 1  btfloat84cmp(float8,float4);\n\n\nBut then I get error for btfloat48cmp and btfloat84cmp functions:\n\nERROR:  associated data types must be specified for index support function\n\nIf I replace lsm3 with btree in ALTER FAMILY, then there is no error.\nI wonder if it is possible in Postgres to define custom index, which can \nhandle comparison of different types, i.e.\n\ncreate table t(pk bigint);\ncreate index on t using lsm3(pk);\nselect * from t where pk=1;\n\nI failed to make Postgres use index in this case. Index is used only if \nI rewrite this query in this way:\nselect * from t where pk=1::bigint;\n\nThanks in advance,\nKonstantin\n\n\n\n\n\n\n\n", "msg_date": "Thu, 23 Jul 2020 02:15:56 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Why it is not possible to create custom AM which behaves similar to\n btree?" }, { "msg_contents": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:\n> But then I get error for btfloat48cmp and btfloat84cmp functions:\n> ERROR: associated data types must be specified for index support function\n\nYou need to specify the amproclefttype and amprocrighttype types you\nwant the function to be registered under. The core code knows that\nfor btree, those are the same as the actual parameter types of the\nfunction; but there's no reason to make such an assumption for other AMs.\nSo you have to write it out; perhaps\n\n ...\n FUNCTION 1(float4,float8) btfloat48cmp(float4,float8),\n ...\n\n\t\t\tregards, tom lane\n\n\n\n\n", "msg_date": "Wed, 22 Jul 2020 20:11:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why it is not possible to create custom AM which behaves similar\n to btree?" }, { "msg_contents": "\n\nOn 23.07.2020 03:11, Tom Lane wrote:\n> Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:\n>> But then I get error for btfloat48cmp and btfloat84cmp functions:\n>> ERROR: associated data types must be specified for index support function\n> You need to specify the amproclefttype and amprocrighttype types you\n> want the function to be registered under. 
The core code knows that\n> for btree, those are the same as the actual parameter types of the\n> function; but there's no reason to make such an assumption for other AMs.\n> So you have to write it out; perhaps\n>\n> ...\n> FUNCTION 1(float4,float8) btfloat48cmp(float4,float8),\n> ...\n\nThank you very much.\nIt works!\n\n\n\n", "msg_date": "Thu, 23 Jul 2020 15:35:42 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Why it is not possible to create custom AM which behaves similar\n to btree?" } ]
[ { "msg_contents": "Hi,hackers\r\n\r\nWhen I analyze this commit:\r\nhttps://github.com/postgres/postgres/commit/7897e3bb902c557412645b82120f4d95f7474906\r\nI noticed that the message was not consistent with the previous one in ‘src/backend/storage/file/buffile.c’\r\nTo keep the message consistent, I made the patch.\r\n\r\n\r\nSee the attachment for the patch.\r\n\r\n\r\nBest regards", "msg_date": "Thu, 23 Jul 2020 03:23:47 +0000", "msg_from": "\"Lu, Chenyang\" <lucy.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "[PATCH] keep the message consistent in buffile.c" }, { "msg_contents": "On Thu, Jul 23, 2020 at 3:24 PM Lu, Chenyang <lucy.fnst@cn.fujitsu.com> wrote:\n> When I analyze this commit:\n>\n> https://github.com/postgres/postgres/commit/7897e3bb902c557412645b82120f4d95f7474906\n>\n> I noticed that the message was not consistent with the previous one in ‘src/backend/storage/file/buffile.c’\n>\n> To keep the message consistent, I made the patch.\n\nThanks. I will push this later today.\n\n\n", "msg_date": "Thu, 23 Jul 2020 15:27:10 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] keep the message consistent in buffile.c" } ]
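
As background for readers who have not looked at commit 7897e3bb, the messages being kept consistent are ordinary file-I/O ereport() calls. The fragment below shows the general wording pattern only; path, nread and nbytes are placeholders, and the exact strings touched by the patch are those in buffile.c itself.

	if (nread != nbytes)
		ereport(ERROR,
				(errcode_for_file_access(),
				 errmsg("could not read file \"%s\": read %d of %d",
						path, nread, nbytes)));
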
[ { "msg_contents": "Hi,\n\nI'm not sure this is the right list, but I have a problem concerning building PostgreSQL 12.3 from source on a Mac.\n\nI do:\n\n./configure \\\n --prefix=${pgTargetDir} \\\n --enable-nls \\\n --with-perl \\\n --with-python \\\n --with-libxml \\\n --with-tclconfig=/usr/lib64 \\\n PG_SYSROOT=$(xcodebuild -version -sdk macosx Path)\n\nand I get:\n\n...\nchecking for __cpuid... no\nchecking for _mm_crc32_u8 and _mm_crc32_u32 with CFLAGS=... no\nchecking for _mm_crc32_u8 and _mm_crc32_u32 with CFLAGS=-msse4.2... yes\nchecking for __crc32cb, __crc32ch, __crc32cw, and __crc32cd with CFLAGS=... no\nchecking for __crc32cb, __crc32ch, __crc32cw, and __crc32cd with CFLAGS=-march=armv8-a+crc... no\nchecking which CRC-32C implementation to use... SSE 4.2 with runtime check\nchecking which semaphore API to use... System V\nchecking for /dev/urandom... yes\nchecking which random number source to use... /dev/urandom\nchecking for library containing bind_textdomain_codeset... no\nconfigure: error: a gettext implementation is required for NLS\n\nIf I leave out --enable-nls then building works fine and I get everything without error. But why is there a problem with gettext?\n\nMy Mac:\n\nMacBook Pro (Retina, 15-inch, Late 2013)\nmacOS Catalina 10.15.6 (all updates installed)\nXcode 11.6 (11E708) w/ command line tools installed\nNo brew, no MacPorts, or other stuff like this is installed.\n\nDoes anyone have an idea? Thanks in advance.\n\nCheers,\nPaul\n\n", "msg_date": "Thu, 23 Jul 2020 12:01:36 +0200", "msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>", "msg_from_op": true, "msg_subject": "Building 12.3 from source on Mac" }, { "msg_contents": "> On 23 Jul 2020, at 12:01, Paul Förster <paul.foerster@gmail.com> wrote:\n\n> If I leave out --enable-nls then building works fine and I get everything without error. But why is there a problem with gettext?\n\ngettext is not shipped by default with macOS, you will have to install it\nseparately via your favourite package manager or by building from source. To\nverify you can always search your system for the required header file:\n\n mdfind -name libintl.h\n\nSee https://www.postgresql.org/docs/current/install-requirements.html for more\ninformation on build-time requirements.\n\ncheers ./daniel\n\n", "msg_date": "Thu, 23 Jul 2020 12:37:33 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Building 12.3 from source on Mac" }, { "msg_contents": "Hi Daniel,\n\n> On 23. Jul, 2020, at 12:37, Daniel Gustafsson <daniel@yesql.se> wrote:\n> gettext is not shipped by default with macOS, you will have to install it\n> separately via your favourite package manager or by building from source. To\n> verify you can always search your system for the required header file:\n> \n> mdfind -name libintl.h\n> \n> See https://www.postgresql.org/docs/current/install-requirements.html for more\n> information on build-time requirements.\n\nthanks for the answer and the pointer.\n\nBut I am still wondering: mdfind spits out libintl.h without me installing the gettext library:\n\npaul@meerkat:~$ mdfind -name libintl.h\n/usr/local/include/libintl.h\n\nWhy is that? Did I miss something?\n\nCheers,\nPaul\n\n", "msg_date": "Thu, 23 Jul 2020 15:24:31 +0200", "msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Building 12.3 from source on Mac" }, { "msg_contents": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com> writes:\n>> On 23. 
Jul, 2020, at 12:37, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> gettext is not shipped by default with macOS, you will have to install it\n>> separately via your favourite package manager or by building from source.\n\n> But I am still wondering: mdfind spits out libintl.h without me installing the gettext library:\n> paul@meerkat:~$ mdfind -name libintl.h\n> /usr/local/include/libintl.h\n\nKind of looks like you *did* install gettext as Daniel suggested\n(macOS proper would never put anything under /usr/local). Maybe\nyou did not ask for that specifically, but installed some package\nthat requires it?\n\nHowever, Apple's toolchain doesn't search /usr/local by default,\nI believe. You'll need to add something along the line of\n\n --with-includes=/usr/local/include --with-libs=/usr/local/lib\n\nto your configure command.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Jul 2020 09:42:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Building 12.3 from source on Mac" }, { "msg_contents": "Hi Tom,\n\n> On 23. Jul, 2020, at 15:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Kind of looks like you *did* install gettext as Daniel suggested\n> (macOS proper would never put anything under /usr/local). Maybe\n> you did not ask for that specifically, but installed some package\n> that requires it?\n> \n> However, Apple's toolchain doesn't search /usr/local by default,\n> I believe. You'll need to add something along the line of\n> \n> --with-includes=/usr/local/include --with-libs=/usr/local/lib\n> \n> to your configure command.\n\nI tried with your options. Still, same effect. Ok, worth a try.\n\nI found:\n\npaul@meerkat:~$ mdfind -name gettext | egrep -vi \"/(share|man|bin|system)/\" \n/usr/local/info/gettext.info\n/usr/local/lib/gettext\n/Library/i-Installer/Receipts/gettext.ii2receipt\n/usr/local/include/gettext-po.h\n\npaul@meerkat:~$ mdfind -name libintl | egrep -vi \"/(share|man|bin|system)/\"\n/usr/local/lib/libintl.3.4.3.dylib\n/usr/local/lib/libintl.a\n/usr/local/lib/libintl.la\n/usr/local/include/libintl.h\n\nBut I did not *knowingly* install that. I guess it comes as part of Xcode but I really don't know. I'm not a developer, I just want to build PostgreSQL for my Mac.\n\nCheers,\nPaul\n\n", "msg_date": "Thu, 23 Jul 2020 15:56:27 +0200", "msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Building 12.3 from source on Mac" }, { "msg_contents": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com> writes:\n>> On 23. Jul, 2020, at 15:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> However, Apple's toolchain doesn't search /usr/local by default,\n>> I believe. You'll need to add something along the line of\n>> --with-includes=/usr/local/include --with-libs=/usr/local/lib\n>> to your configure command.\n\n> I tried with your options. Still, same effect. Ok, worth a try.\n\nI might be wrong about that. But this shows another issue:\n\n> paul@meerkat:~$ mdfind -name libintl | egrep -vi \"/(share|man|bin|system)/\"\n> /usr/local/lib/libintl.3.4.3.dylib\n> /usr/local/lib/libintl.a\n> /usr/local/lib/libintl.la\n> /usr/local/include/libintl.h\n\nLooks like what you lack is a symlink libintl.dylib -> libintl.3.4.3.dylib\nin /usr/local/lib. It's not real clear to me why you'd have .a and .la\nfiles and no versionless symlink, because all of those files would\njust be used for linking dependent software.\n\n> But I did not *knowingly* install that. 
I guess it comes as part of Xcode but I really don't know. I'm not a developer, I just want to build PostgreSQL for my Mac.\n\nThese files absolutely, positively, gold-plated 100% did not come\nwith XCode. Homebrew installs stuff under /usr/local though.\nNot sure about MacPorts.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Jul 2020 10:03:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Building 12.3 from source on Mac" }, { "msg_contents": "Hi Tom,\n\n> On 23. Jul, 2020, at 16:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Looks like what you lack is a symlink libintl.dylib -> libintl.3.4.3.dylib\n> in /usr/local/lib. It's not real clear to me why you'd have .a and .la\n> files and no versionless symlink, because all of those files would\n> just be used for linking dependent software.\n\nthere is not a single symlink in /usr/local/lib:\n\npaul@meerkat:~$ ll /usr/local/lib \ntotal 113968\ndrwxr-xr-x 4 root wheel 128 Oct 17 2014 ImageMagick-6.3.3\n-rw-r--r--+ 1 root wheel 606 May 14 2010 charset.alias\ndrwxr-xr-x 6 root wheel 192 Oct 17 2014 gettext\n-rwxr-xr-x+ 1 root wheel 4244568 Mar 6 2007 libMagick++.10.0.7.dylib\n-rw-r--r--+ 1 root wheel 4722468 Mar 6 2007 libMagick++.a\n-rwxr-xr-x+ 1 root wheel 980 Mar 6 2007 libMagick++.la\n-rwxr-xr-x+ 1 root wheel 8414800 Mar 6 2007 libMagick.10.0.7.dylib\n-rw-r--r--+ 1 root wheel 8129604 Mar 6 2007 libMagick.a\n-rwxr-xr-x+ 1 root wheel 912 Mar 6 2007 libMagick.la\n-rwxr-xr-x+ 1 root wheel 2416164 Mar 6 2007 libWand.10.0.7.dylib\n-rw-r--r--+ 1 root wheel 3354004 Mar 6 2007 libWand.a\n-rwxr-xr-x+ 1 root wheel 926 Mar 6 2007 libWand.la\n-rwxr-xr-x+ 1 root wheel 737672 Sep 23 2006 libasprintf.0.0.0.dylib\n-rw-r--r--+ 1 root wheel 47704 Sep 23 2006 libasprintf.a\n-rwxr-xr-x+ 1 root wheel 832 Sep 23 2006 libasprintf.la\n-rwxr-xr-x+ 1 root wheel 4024172 Mar 6 2007 libfreetype.6.3.12.dylib\n-rw-r--r--+ 1 root wheel 4240572 Mar 6 2007 libfreetype.a\n-rwxr-xr-x+ 1 root wheel 838 Mar 6 2007 libfreetype.la\n-rwxr-xr-x+ 1 root wheel 3429720 Mar 13 2007 libgdraw.1.0.14.dylib\n-rwxr-xr-x+ 1 root wheel 891 Mar 13 2007 libgdraw.la\n-rwxr-xr-x+ 1 root wheel 485908 Sep 23 2006 libgettextlib-0.14.5.dylib\n-rwxr-xr-x+ 1 root wheel 908 Sep 23 2006 libgettextlib.la\n-rwxr-xr-x+ 1 root wheel 79480 Sep 23 2006 libgettextpo.0.1.0.dylib\n-rw-r--r--+ 1 root wheel 62136 Sep 23 2006 libgettextpo.a\n-rwxr-xr-x+ 1 root wheel 954 Sep 23 2006 libgettextpo.la\n-rwxr-xr-x+ 1 root wheel 1097632 Sep 23 2006 libgettextsrc-0.14.5.dylib\n-rwxr-xr-x+ 1 root wheel 940 Sep 23 2006 libgettextsrc.la\n-rwxr-xr-x+ 1 root wheel 5713584 Mar 13 2007 libgunicode.2.0.3.dylib\n-rwxr-xr-x+ 1 root wheel 877 Mar 13 2007 libgunicode.la\n-rw-r--r--+ 1 root wheel 253512 Sep 23 2006 libintl.3.4.3.dylib\n-rw-r--r--+ 1 root wheel 286284 Sep 23 2006 libintl.a\n-rw-r--r--+ 1 root wheel 829 Sep 23 2006 libintl.la\n-rwxr-xr-x+ 1 root wheel 2121700 Mar 13 2007 libuninameslist-fr.0.0.1.dylib\n-rwxr-xr-x+ 1 root wheel 774 Mar 13 2007 libuninameslist-fr.la\n-rwxr-xr-x+ 1 root wheel 2148388 Mar 13 2007 libuninameslist.0.0.1.dylib\n-rwxr-xr-x+ 1 root wheel 756 Mar 13 2007 libuninameslist.la\n-rw-r--r--+ 1 root wheel 1670612 Sep 28 2006 libwmf.a\n-rwxr-xr-x+ 1 root wheel 913 Sep 28 2006 libwmf.la\n-rw-r--r--+ 1 root wheel 571300 Sep 28 2006 libwmflite.a\n-rwxr-xr-x+ 1 root wheel 751 Sep 28 2006 libwmflite.la\ndrwxr-xr-x 7 root wheel 224 Oct 17 2014 pkgconfig\n\n> These files absolutely, positively, gold-plated 100% did not come\n> with XCode. 
Homebrew installs stuff under /usr/local though.\n> Not sure about MacPorts.\n\nbut I didn't install Homebrew, MacPorts, Fink or other package managers.\n\nAre they leftovers from old OS versions? After all, I started with don't know what macOS (Mavericks or Yosemite?) back then and always upgraded the OS, one major release after the other and it has always been working fine with no problem at all. And now, as I mentioned before, it's Catalina and still works with not a single reinstall.\n\npaul@meerkat:~$ uname -a\nDarwin meerkat.local 19.6.0 Darwin Kernel Version 19.6.0: Sun Jul 5 00:43:10 PDT 2020; root:xnu-6153.141.1~9/RELEASE_X86_64 x86_64\n\nCheers,\nPaul\n\n\n\n", "msg_date": "Thu, 23 Jul 2020 16:16:23 +0200", "msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Building 12.3 from source on Mac" }, { "msg_contents": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com> writes:\n> there is not a single symlink in /usr/local/lib:\n\nNot only that, but look at the file dates:\n\n> -rw-r--r--+ 1 root wheel 253512 Sep 23 2006 libintl.3.4.3.dylib\n> -rw-r--r--+ 1 root wheel 286284 Sep 23 2006 libintl.a\n> -rw-r--r--+ 1 root wheel 829 Sep 23 2006 libintl.la\n\nYou should see what \"file\" reports these as, but there's a good\nbet that these are 32-bit code and won't even run on Catalina.\n\n>> These files absolutely, positively, gold-plated 100% did not come\n>> with XCode. Homebrew installs stuff under /usr/local though.\n>> Not sure about MacPorts.\n\n> but I didn't install Homebrew, MacPorts, Fink or other package managers.\n\nYou apparently installed *something*, or several somethings, back\nin ought-six or so. Do you really remember what you were doing\nback then?\n\nAnyway, now that we realize these are ancient history, you likely\nneed to install a more modern version anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Jul 2020 10:50:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Building 12.3 from source on Mac" }, { "msg_contents": "Hi Tom,\n\n> On 23. Jul, 2020, at 16:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> You should see what \"file\" reports these as, but there's a good\n> bet that these are 32-bit code and won't even run on Catalina.\n\nyes, they seem pretty old:\n\npaul@meerkat:/usr/local/lib$ file libintl.* \nlibintl.3.4.3.dylib: Mach-O universal binary with 2 architectures: [i386:Mach-O dynamically linked shared library i386] [ppc:Mach-O dynamically linked shared library ppc]\nlibintl.3.4.3.dylib (for architecture i386):\tMach-O dynamically linked shared library i386\nlibintl.3.4.3.dylib (for architecture ppc):\tMach-O dynamically linked shared library ppc\nlibintl.a: Mach-O universal binary with 2 architectures: [i386:current ar archive random library] [ppc:current ar archive random library]\nlibintl.a (for architecture i386):\tcurrent ar archive random library\nlibintl.a (for architecture ppc):\tcurrent ar archive random library\nlibintl.la: libtool library file, ASCII text\n\n> You apparently installed *something*, or several somethings, back\n> in ought-six or so. Do you really remember what you were doing\n> back then?\n\nI used to have an old iMac. And when I got this laptop, I did a time machine backup then and restored it to this laptop to not have to install all my software from scratch. Maybe it's old stuff from Java installations. 
I also have XQuartz running since ages now which has been updated from version to version, currently 2.7.11. I don't know which it could be that could be that old.\n\nSeems like a lot of old stuff:\n\npaul@meerkat:/usr/local/lib$ file * | egrep \"(i386|ppc)\" | awk '{ print $1 }' | tr -d ':' | sort -u\nlibMagick++.10.0.7.dylib\nlibMagick++.a\nlibMagick.10.0.7.dylib\nlibMagick.a\nlibWand.10.0.7.dylib\nlibWand.a\nlibasprintf.0.0.0.dylib\nlibasprintf.a\nlibfreetype.6.3.12.dylib\nlibfreetype.a\nlibgdraw.1.0.14.dylib\nlibgettextlib-0.14.5.dylib\nlibgettextpo.0.1.0.dylib\nlibgettextpo.a\nlibgettextsrc-0.14.5.dylib\nlibgunicode.2.0.3.dylib\nlibintl.3.4.3.dylib\nlibintl.a\nlibuninameslist-fr.0.0.1.dylib\nlibuninameslist.0.0.1.dylib\nlibwmf.a\nlibwmflite.a\n\n> Anyway, now that we realize these are ancient history, you likely\n> need to install a more modern version anyway.\n\nwill try, thanks. :-)\n\nCheers,\nPaul\n\n\n\n", "msg_date": "Thu, 23 Jul 2020 17:04:56 +0200", "msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Building 12.3 from source on Mac" }, { "msg_contents": "I'd like to add that MacPorts installs everything to /opt/ and /opt/local\nunless someone configures other path.\nYou can also easily check is something from homebrew installation by\nrunning 'brew config' and looking at HOMEBREW_PREFIX entry.\n\nRegards,\nPavel\n\nчт, 23 июл. 2020 г. в 19:05, Paul Förster <paul.foerster@gmail.com>:\n\n> Hi Tom,\n>\n> > On 23. Jul, 2020, at 16:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > You should see what \"file\" reports these as, but there's a good\n> > bet that these are 32-bit code and won't even run on Catalina.\n>\n> yes, they seem pretty old:\n>\n> paul@meerkat:/usr/local/lib$ file libintl.*\n> libintl.3.4.3.dylib: Mach-O universal binary with 2 architectures:\n> [i386:Mach-O dynamically linked shared library i386] [ppc:Mach-O\n> dynamically linked shared library ppc]\n> libintl.3.4.3.dylib (for architecture i386): Mach-O dynamically linked\n> shared library i386\n> libintl.3.4.3.dylib (for architecture ppc): Mach-O dynamically linked\n> shared library ppc\n> libintl.a: Mach-O universal binary with 2 architectures:\n> [i386:current ar archive random library] [ppc:current ar archive random\n> library]\n> libintl.a (for architecture i386): current ar archive random library\n> libintl.a (for architecture ppc): current ar archive random library\n> libintl.la: libtool library file, ASCII text\n>\n> > You apparently installed *something*, or several somethings, back\n> > in ought-six or so. Do you really remember what you were doing\n> > back then?\n>\n> I used to have an old iMac. And when I got this laptop, I did a time\n> machine backup then and restored it to this laptop to not have to install\n> all my software from scratch. Maybe it's old stuff from Java installations.\n> I also have XQuartz running since ages now which has been updated from\n> version to version, currently 2.7.11. 
I don't know which it could be that\n> could be that old.\n>\n> Seems like a lot of old stuff:\n>\n> paul@meerkat:/usr/local/lib$ file * | egrep \"(i386|ppc)\" | awk '{ print\n> $1 }' | tr -d ':' | sort -u\n> libMagick++.10.0.7.dylib\n> libMagick++.a\n> libMagick.10.0.7.dylib\n> libMagick.a\n> libWand.10.0.7.dylib\n> libWand.a\n> libasprintf.0.0.0.dylib\n> libasprintf.a\n> libfreetype.6.3.12.dylib\n> libfreetype.a\n> libgdraw.1.0.14.dylib\n> libgettextlib-0.14.5.dylib\n> libgettextpo.0.1.0.dylib\n> libgettextpo.a\n> libgettextsrc-0.14.5.dylib\n> libgunicode.2.0.3.dylib\n> libintl.3.4.3.dylib\n> libintl.a\n> libuninameslist-fr.0.0.1.dylib\n> libuninameslist.0.0.1.dylib\n> libwmf.a\n> libwmflite.a\n>\n> > Anyway, now that we realize these are ancient history, you likely\n> > need to install a more modern version anyway.\n>\n> will try, thanks. :-)\n>\n> Cheers,\n> Paul\n>\n>\n>\n>\n\nI'd like to add that MacPorts installs everything to /opt/ and /opt/local unless someone configures other path.You can also easily check is something from homebrew installation by running 'brew config' and looking at HOMEBREW_PREFIX entry.Regards,Pavel чт, 23 июл. 2020 г. в 19:05, Paul Förster <paul.foerster@gmail.com>:Hi Tom,\n\n> On 23. Jul, 2020, at 16:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> You should see what \"file\" reports these as, but there's a good\n> bet that these are 32-bit code and won't even run on Catalina.\n\nyes, they seem pretty old:\n\npaul@meerkat:/usr/local/lib$ file libintl.*          \nlibintl.3.4.3.dylib: Mach-O universal binary with 2 architectures: [i386:Mach-O dynamically linked shared library i386] [ppc:Mach-O dynamically linked shared library ppc]\nlibintl.3.4.3.dylib (for architecture i386):    Mach-O dynamically linked shared library i386\nlibintl.3.4.3.dylib (for architecture ppc):     Mach-O dynamically linked shared library ppc\nlibintl.a:           Mach-O universal binary with 2 architectures: [i386:current ar archive random library] [ppc:current ar archive random library]\nlibintl.a (for architecture i386):      current ar archive random library\nlibintl.a (for architecture ppc):       current ar archive random library\nlibintl.la:          libtool library file, ASCII text\n\n> You apparently installed *something*, or several somethings, back\n> in ought-six or so.  Do you really remember what you were doing\n> back then?\n\nI used to have an old iMac. And when I got this laptop, I did a time machine backup then and restored it to this laptop to not have to install all my software from scratch. Maybe it's old stuff from Java installations. I also have XQuartz running since ages now which has been updated from version to version, currently 2.7.11. I don't know which it could be that could be that old.\n\nSeems like a lot of old stuff:\n\npaul@meerkat:/usr/local/lib$ file * | egrep \"(i386|ppc)\" | awk '{ print $1 }' | tr -d ':' | sort -u\nlibMagick++.10.0.7.dylib\nlibMagick++.a\nlibMagick.10.0.7.dylib\nlibMagick.a\nlibWand.10.0.7.dylib\nlibWand.a\nlibasprintf.0.0.0.dylib\nlibasprintf.a\nlibfreetype.6.3.12.dylib\nlibfreetype.a\nlibgdraw.1.0.14.dylib\nlibgettextlib-0.14.5.dylib\nlibgettextpo.0.1.0.dylib\nlibgettextpo.a\nlibgettextsrc-0.14.5.dylib\nlibgunicode.2.0.3.dylib\nlibintl.3.4.3.dylib\nlibintl.a\nlibuninameslist-fr.0.0.1.dylib\nlibuninameslist.0.0.1.dylib\nlibwmf.a\nlibwmflite.a\n\n> Anyway, now that we realize these are ancient history, you likely\n> need to install a more modern version anyway.\n\nwill try, thanks. 
:-)\n\nCheers,\nPaul", "msg_date": "Thu, 23 Jul 2020 19:16:38 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Building 12.3 from source on Mac" } ]
[ { "msg_contents": "Hi,\n\nI've twice seen the below failure when running tests in a loop (to\nverify another rare issue in a patch is fixed):\n\ndiff -du10 /home/andres/src/postgresql/src/test/regress/expected/with.out /home/andres/build/postgres/dev-assert/vpath/src/test/regress/results/with.out\n--- /home/andres/src/postgresql/src/test/regress/expected/with.out 2020-07-21 15:03:11.239754712 -0700\n+++ /home/andres/build/postgres/dev-assert/vpath/src/test/regress/results/with.out 2020-07-23 04:25:07.955839299 -0700\n@@ -2207,28 +2207,30 @@\n Output: a_1.ctid, a_1.aa\n -> CTE Scan on wcte\n Output: wcte.*, wcte.q2\n -> Nested Loop\n Output: a_2.ctid, wcte.*\n Join Filter: (a_2.aa = wcte.q2)\n -> Seq Scan on public.c a_2\n Output: a_2.ctid, a_2.aa\n -> CTE Scan on wcte\n Output: wcte.*, wcte.q2\n- -> Nested Loop\n+ -> Hash Join\n Output: a_3.ctid, wcte.*\n- Join Filter: (a_3.aa = wcte.q2)\n+ Hash Cond: (a_3.aa = wcte.q2)\n -> Seq Scan on public.d a_3\n Output: a_3.ctid, a_3.aa\n- -> CTE Scan on wcte\n+ -> Hash\n Output: wcte.*, wcte.q2\n-(38 rows)\n+ -> CTE Scan on wcte\n+ Output: wcte.*, wcte.q2\n+(40 rows)\n \n -- error cases\n -- data-modifying WITH tries to use its own output\n WITH RECURSIVE t AS (\n INSERT INTO y\n SELECT * FROM t\n )\n VALUES(FALSE);\n ERROR: recursive query \"t\" must not contain data-modifying statements\n LINE 1: WITH RECURSIVE t AS (\n\n\nSearching the archives didn't unearth other reports of the same.\n\n\nThis was the first failure after 404 iterations of installcheck, so it's\nclearly not a common occurance on my machine.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jul 2020 09:19:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "'with' regression tests fails rarely (and spuriously)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I've twice seen the below failure when running tests in a loop (to\n> verify another rare issue in a patch is fixed):\n\nWeird. It sort of looks like autovacuum came along and changed the\nstats for those tables, but I didn't think they were big enough to\ndraw autovac's attention.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Jul 2020 13:05:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 'with' regression tests fails rarely (and spuriously)" }, { "msg_contents": "Hi,\n\nOn 2020-07-23 13:05:32 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I've twice seen the below failure when running tests in a loop (to\n> > verify another rare issue in a patch is fixed):\n> \n> Weird. It sort of looks like autovacuum came along and changed the\n> stats for those tables, but I didn't think they were big enough to\n> draw autovac's attention.\n\nHm. I guess I could run them again after enabling more logging. Don't\nreally have a better idea. Probably not worth investing more energy into\nif I can't readily reproduce over night.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jul 2020 11:00:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: 'with' regression tests fails rarely (and spuriously)" } ]
[ { "msg_contents": "Every so often we get a complaint like [1] about how a CASE should have\nprevented a run-time error and didn't, because constant-folding tried\nto evaluate a subexpression that would not have been entered at run-time.\n\nIt struck me that it would not be hard to improve this situation a great\ndeal. If, within a CASE subexpression that isn't certain to be executed\nat runtime, we refuse to pre-evaluate *any* function (essentially, treat\nthem all as volatile), then we should largely get the semantics that\nusers expect. There's some potential for query slowdown if a CASE\ncontains a constant subexpression that we formerly reduced at plan time\nand now do not, but that doesn't seem to me to be a very big deal.\n\nAttached is a draft patch that handles CASE and COALESCE this way.\n\nThis is not a complete fix, because if you write a sub-SELECT the\ncontents of the sub-SELECT are not processed by the outer query's\neval_const_expressions pass; instead, we look at it within the\nsub-SELECT itself, and in that context there's no apparent reason\nto avoid const-folding. So\n CASE WHEN x < 0 THEN (SELECT 1/0) END\nfails even if x is never less than zero. I don't see any great way\nto avoid that, and I'm not particularly concerned about it anyhow;\nusually the point of a sub-SELECT like this is to be decoupled from\nouter query evaluation, so that the behavior should not be that\nsurprising.\n\nOne interesting point is that the join regression test contains a\nnumber of uses of \"coalesce(int8-variable, int4-constant)\" which is\ntreated a little differently than before: we no longer constant-fold\nthe int4 constant to int8. That causes the run-time cost of the\nexpression to be estimated slightly higher, which changes plans in\na couple of these tests; and in any case the EXPLAIN output looks\ndifferent since it shows the runtime coercion explicitly. To avoid\nthose changes I made all these examples quote the constants, so that\nthe parser resolves them as int8 out of the gate. (Perhaps it'd be\nokay to just accept the changes, but I didn't feel like trying to\nanalyze in detail what each test case had been meant to prove.)\n\nAlso, I didn't touch the docs yet. Sections 4.2.14 and 9.18.1\ncontain some weasel wording that could be backed off, but in light\nof the sub-SELECT exception we can't just remove the issue\naltogether I think. Not quite sure how to word it.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/16549-4991fbf36fcec234%40postgresql.org", "msg_date": "Thu, 23 Jul 2020 12:57:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Making CASE error handling less surprising" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Every so often we get a complaint like [1] about how a CASE should have\n> prevented a run-time error and didn't, because constant-folding tried\n> to evaluate a subexpression that would not have been entered at run-time.\n>\n> It struck me that it would not be hard to improve this situation a great\n> deal. If, within a CASE subexpression that isn't certain to be executed\n> at runtime, we refuse to pre-evaluate *any* function (essentially, treat\n> them all as volatile), then we should largely get the semantics that\n> users expect. 
There's some potential for query slowdown if a CASE\n> contains a constant subexpression that we formerly reduced at plan time\n> and now do not, but that doesn't seem to me to be a very big deal.\n[…]\n> Thoughts?\n\nWould it be feasible to set up an exception handler when constant-\nfolding cases that might not be reached, and leave the expression\nunfolded only if an error was thrown, or does that have too much\noverhead to be worthwhile?\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n\n", "msg_date": "Thu, 23 Jul 2020 18:50:32 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "Hi,\n\nOn 2020-07-23 18:50:32 +0100, Dagfinn Ilmari Mannsåker wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> > Every so often we get a complaint like [1] about how a CASE should have\n> > prevented a run-time error and didn't, because constant-folding tried\n> > to evaluate a subexpression that would not have been entered at run-time.\n> >\n> > It struck me that it would not be hard to improve this situation a great\n> > deal. If, within a CASE subexpression that isn't certain to be executed\n> > at runtime, we refuse to pre-evaluate *any* function (essentially, treat\n> > them all as volatile), then we should largely get the semantics that\n> > users expect. There's some potential for query slowdown if a CASE\n> > contains a constant subexpression that we formerly reduced at plan time\n> > and now do not, but that doesn't seem to me to be a very big deal.\n> […]\n> > Thoughts?\n> \n> Would it be feasible to set up an exception handler when constant-\n> folding cases that might not be reached, and leave the expression\n> unfolded only if an error was thrown, or does that have too much\n> overhead to be worthwhile?\n\nThat'd require using a subtransaction for expression\nsimplification. That'd be way too high overhead.\n\nGiven how often we've had a need to call functions while handling\nerrors, I do wonder if it'd be worthwhile and feasible to mark functions\nas being safe to call without subtransactions, or mark them as not\nerroring out (e.g. comparators would usually be safe).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jul 2020 12:02:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-07-23 18:50:32 +0100, Dagfinn Ilmari Mannsåker wrote:\n>> Would it be feasible to set up an exception handler when constant-\n>> folding cases that might not be reached, and leave the expression\n>> unfolded only if an error was thrown, or does that have too much\n>> overhead to be worthwhile?\n\n> That'd require using a subtransaction for expression\n> simplification. That'd be way too high overhead.\n\nThat's my opinion as well. It'd be a subtransaction for *each*\noperator/function call we need to simplify, which seems completely\ndisastrous.\n\n> Given how often we've had a need to call functions while handling\n> errors, I do wonder if it'd be worthwhile and feasible to mark functions\n> as being safe to call without subtransactions, or mark them as not\n> erroring out (e.g. comparators would usually be safe).\n\nYeah. 
I was wondering whether the existing \"leakproof\" marking would\nbe adequate for this purpose. It's a little stronger than what we\nneed, but the pain-in-the-rear factor for adding YA function property\nis high enough that I'm inclined to just use it anyway.\n\nWe do have to assume that \"leakproof\" includes \"cannot throw any\ninput-dependent error\", but it seems to me that that's true.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Jul 2020 15:43:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "čt 23. 7. 2020 v 21:43 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-07-23 18:50:32 +0100, Dagfinn Ilmari Mannsåker wrote:\n> >> Would it be feasible to set up an exception handler when constant-\n> >> folding cases that might not be reached, and leave the expression\n> >> unfolded only if an error was thrown, or does that have too much\n> >> overhead to be worthwhile?\n>\n> > That'd require using a subtransaction for expression\n> > simplification. That'd be way too high overhead.\n>\n> That's my opinion as well. It'd be a subtransaction for *each*\n> operator/function call we need to simplify, which seems completely\n> disastrous.\n>\n> > Given how often we've had a need to call functions while handling\n> > errors, I do wonder if it'd be worthwhile and feasible to mark functions\n> > as being safe to call without subtransactions, or mark them as not\n> > erroring out (e.g. comparators would usually be safe).\n>\n> Yeah. I was wondering whether the existing \"leakproof\" marking would\n> be adequate for this purpose. It's a little stronger than what we\n> need, but the pain-in-the-rear factor for adding YA function property\n> is high enough that I'm inclined to just use it anyway.\n>\n> We do have to assume that \"leakproof\" includes \"cannot throw any\n> input-dependent error\", but it seems to me that that's true.\n>\n\nI am afraid of a performance impact.\n\nlot of people expects constant folding everywhere now and I can imagine\nquery like\n\nSELECT CASE col1 WHEN 1 THEN upper('hello') ELSE upper('bye') END FROM ...\n\nNow, it is optimized well, but with the proposed patch, this query can be\nslow.\n\nWe should introduce planner safe functions for some usual functions, or\nmaybe better explain the behaviour, the costs, and benefits. I don't think\nthis issue is too common.\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>\n>\n>\n\nčt 23. 7. 2020 v 21:43 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Andres Freund <andres@anarazel.de> writes:\n> On 2020-07-23 18:50:32 +0100, Dagfinn Ilmari Mannsåker wrote:\n>> Would it be feasible to set up an exception handler when constant-\n>> folding cases that might not be reached, and leave the expression\n>> unfolded only if an error was thrown, or does that have too much\n>> overhead to be worthwhile?\n\n> That'd require using a subtransaction for expression\n> simplification. That'd be way too high overhead.\n\nThat's my opinion as well.  It'd be a subtransaction for *each*\noperator/function call we need to simplify, which seems completely\ndisastrous.\n\n> Given how often we've had a need to call functions while handling\n> errors, I do wonder if it'd be worthwhile and feasible to mark functions\n> as being safe to call without subtransactions, or mark them as not\n> erroring out (e.g. comparators would usually be safe).\n\nYeah.  
I was wondering whether the existing \"leakproof\" marking would\nbe adequate for this purpose.  It's a little stronger than what we\nneed, but the pain-in-the-rear factor for adding YA function property\nis high enough that I'm inclined to just use it anyway.\n\nWe do have to assume that \"leakproof\" includes \"cannot throw any\ninput-dependent error\", but it seems to me that that's true.I am afraid of a performance impact.  lot of people expects constant folding everywhere now and I can imagine query likeSELECT CASE col1 WHEN 1 THEN upper('hello') ELSE upper('bye')  END FROM ...Now, it is optimized well, but with the proposed patch, this query can be slow.We should introduce planner safe functions for some usual functions, or maybe better explain the behaviour, the costs, and benefits.  I don't think this issue is too common.RegardsPavel\n\n                        regards, tom lane", "msg_date": "Thu, 23 Jul 2020 21:56:26 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "Hi,\n\nOn 2020-07-23 15:43:44 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-07-23 18:50:32 +0100, Dagfinn Ilmari Mannsåker wrote:\n> >> Would it be feasible to set up an exception handler when constant-\n> >> folding cases that might not be reached, and leave the expression\n> >> unfolded only if an error was thrown, or does that have too much\n> >> overhead to be worthwhile?\n> \n> > That'd require using a subtransaction for expression\n> > simplification. That'd be way too high overhead.\n> \n> That's my opinion as well. It'd be a subtransaction for *each*\n> operator/function call we need to simplify, which seems completely\n> disastrous.\n\nI guess we could optimize it to be one subtransaction by having error\nrecovery be to redo simplification with a parameter that prevents doing\nsimplification within CASE etc. Still too unattractive performancewise\nto consider imo.\n\n\n> > Given how often we've had a need to call functions while handling\n> > errors, I do wonder if it'd be worthwhile and feasible to mark functions\n> > as being safe to call without subtransactions, or mark them as not\n> > erroring out (e.g. comparators would usually be safe).\n> \n> Yeah. I was wondering whether the existing \"leakproof\" marking would\n> be adequate for this purpose. It's a little stronger than what we\n> need, but the pain-in-the-rear factor for adding YA function property\n> is high enough that I'm inclined to just use it anyway.\n\nHm, I didn't consider that. Good idea.\n\n\n> We do have to assume that \"leakproof\" includes \"cannot throw any\n> input-dependent error\", but it seems to me that that's true.\n\nA quick look through the list seems to confirm that. There's errors like\nin text_starts_with:\n\n\tif (mylocale && !mylocale->deterministic)\n\t\tereport(ERROR,\n\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n\t\t\t\t errmsg(\"nondeterministic collations are not supported for substring searches\")));\n\nbut that's not a content dependent error, so I don't think it's problem.\n\n\nSo the idea would be to continue to do simplification like we do right\nnow for things outside a CASE but to only call leakproof functions\nwithin a case?\n\nIs there any concern about having to do additional lookups for\nleakproofness? 
It doesn't seem likely to me since we already need to do\nlookups for the FmgrInfo?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jul 2020 13:06:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "čt 23. 7. 2020 v 21:56 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> čt 23. 7. 2020 v 21:43 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> Andres Freund <andres@anarazel.de> writes:\n>> > On 2020-07-23 18:50:32 +0100, Dagfinn Ilmari Mannsåker wrote:\n>> >> Would it be feasible to set up an exception handler when constant-\n>> >> folding cases that might not be reached, and leave the expression\n>> >> unfolded only if an error was thrown, or does that have too much\n>> >> overhead to be worthwhile?\n>>\n>> > That'd require using a subtransaction for expression\n>> > simplification. That'd be way too high overhead.\n>>\n>> That's my opinion as well. It'd be a subtransaction for *each*\n>> operator/function call we need to simplify, which seems completely\n>> disastrous.\n>>\n>> > Given how often we've had a need to call functions while handling\n>> > errors, I do wonder if it'd be worthwhile and feasible to mark functions\n>> > as being safe to call without subtransactions, or mark them as not\n>> > erroring out (e.g. comparators would usually be safe).\n>>\n>> Yeah. I was wondering whether the existing \"leakproof\" marking would\n>> be adequate for this purpose. It's a little stronger than what we\n>> need, but the pain-in-the-rear factor for adding YA function property\n>> is high enough that I'm inclined to just use it anyway.\n>>\n>> We do have to assume that \"leakproof\" includes \"cannot throw any\n>> input-dependent error\", but it seems to me that that's true.\n>>\n>\n> I am afraid of a performance impact.\n>\n> lot of people expects constant folding everywhere now and I can imagine\n> query like\n>\n> SELECT CASE col1 WHEN 1 THEN upper('hello') ELSE upper('bye') END FROM ...\n>\n> Now, it is optimized well, but with the proposed patch, this query can be\n> slow.\n>\n> We should introduce planner safe functions for some usual functions, or\n> maybe better explain the behaviour, the costs, and benefits. I don't think\n> this issue is too common.\n>\n\nwhat about different access. We can introduce function\n\ncreate or replace function volatile_expr(anyelement) returns anyelement as\n$$ begin return $1; end $$ language plpgsql;\n\nand this can be used as a constant folding optimization fence.\n\nselect case col when 1 then volatile_expr(1/$1) else $1 end;\n\nI don't think so people have a problem with this behaviour - the problem is\nunexpected behaviour change between major releases without really\nillustrative explanation in documentation.\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>\n>> regards, tom lane\n>>\n>>\n>>\n\nčt 23. 7. 2020 v 21:56 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:čt 23. 7. 2020 v 21:43 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Andres Freund <andres@anarazel.de> writes:\n> On 2020-07-23 18:50:32 +0100, Dagfinn Ilmari Mannsåker wrote:\n>> Would it be feasible to set up an exception handler when constant-\n>> folding cases that might not be reached, and leave the expression\n>> unfolded only if an error was thrown, or does that have too much\n>> overhead to be worthwhile?\n\n> That'd require using a subtransaction for expression\n> simplification. 
That'd be way too high overhead.\n\nThat's my opinion as well.  It'd be a subtransaction for *each*\noperator/function call we need to simplify, which seems completely\ndisastrous.\n\n> Given how often we've had a need to call functions while handling\n> errors, I do wonder if it'd be worthwhile and feasible to mark functions\n> as being safe to call without subtransactions, or mark them as not\n> erroring out (e.g. comparators would usually be safe).\n\nYeah.  I was wondering whether the existing \"leakproof\" marking would\nbe adequate for this purpose.  It's a little stronger than what we\nneed, but the pain-in-the-rear factor for adding YA function property\nis high enough that I'm inclined to just use it anyway.\n\nWe do have to assume that \"leakproof\" includes \"cannot throw any\ninput-dependent error\", but it seems to me that that's true.I am afraid of a performance impact.  lot of people expects constant folding everywhere now and I can imagine query likeSELECT CASE col1 WHEN 1 THEN upper('hello') ELSE upper('bye')  END FROM ...Now, it is optimized well, but with the proposed patch, this query can be slow.We should introduce planner safe functions for some usual functions, or maybe better explain the behaviour, the costs, and benefits.  I don't think this issue is too common.what about different access. We can introduce function create or replace function volatile_expr(anyelement) returns anyelement as $$ begin return $1; end $$ language plpgsql;and this can be used as a constant folding optimization fence.select case col when 1 then volatile_expr(1/$1) else $1 end;I don't think so people have a problem with this behaviour - the problem is unexpected behaviour change between major releases without really illustrative explanation in documentation. RegardsPavel RegardsPavel\n\n                        regards, tom lane", "msg_date": "Thu, 23 Jul 2020 22:08:30 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "Hi,\n\nOn 2020-07-23 21:56:26 +0200, Pavel Stehule wrote:\n> I am afraid of a performance impact.\n> \n> lot of people expects constant folding everywhere now and I can imagine\n> query like\n> \n> SELECT CASE col1 WHEN 1 THEN upper('hello') ELSE upper('bye') END FROM ...\n> \n> Now, it is optimized well, but with the proposed patch, this query can be\n> slow.\n\nI'd be more concerned about thinks like conditional expressions that\ninvolve both columns and non-comparison operations on constants. Where\nright now we'd simplify the constant part of the expression, but\nwouldn't at all anymore after this.\n\nIs there an argument to continue simplifying expressions within case\nwhen only involving \"true\" constants even with not leakproof functions,\nbut only simplify \"pseudo\" constants like parameters with leakproof\nfunctions? I.e CASE WHEN ... THEN 1 / 0 would still raise an error\nduring simplification but CASE WHEN ... 
THEN 1 / $1 wouldn't, because $1\nis not a real constant (even if PARAM_FLAG_CONST).\n\nIt doesn't seem like it'd be too hard to implement that, but that it'd\nprobably be fairly bulky because we'd need to track more state across\nrecursive expression_tree_mutator() calls.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jul 2020 13:21:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Is there any concern about having to do additional lookups for\n> leakproofness? It doesn't seem likely to me since we already need to do\n> lookups for the FmgrInfo?\n\nNo, we could easily fix it so that one syscache lookup gets both\nthe provolatile and proleakproof markings.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Jul 2020 16:27:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Is there an argument to continue simplifying expressions within case\n> when only involving \"true\" constants even with not leakproof functions,\n> but only simplify \"pseudo\" constants like parameters with leakproof\n> functions? I.e CASE WHEN ... THEN 1 / 0 would still raise an error\n> during simplification but CASE WHEN ... THEN 1 / $1 wouldn't, because $1\n> is not a real constant (even if PARAM_FLAG_CONST).\n\nHmm, interesting idea. That might fix all the practical cases in plpgsql,\nbut it wouldn't do anything to make the behavior more explainable. Not\nsure if we care about that though.\n\nIf we go this way I'd be inclined to do this instead of, not in addition\nto, what I originally proposed. Not sure if that was how you envisioned\nit, but I think this is probably sufficient for its purpose and we would\nnot need any additional lobotomization of const-simplification.\n\n> It doesn't seem like it'd be too hard to implement that, but that it'd\n> probably be fairly bulky because we'd need to track more state across\n> recursive expression_tree_mutator() calls.\n\nIt wouldn't be any harder than what I posted upthread; it would\njust be a different flag getting passed down in the context struct\nand getting tested in a different place.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Jul 2020 16:34:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "Hi,\n\nOn 2020-07-23 16:34:25 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Is there an argument to continue simplifying expressions within case\n> > when only involving \"true\" constants even with not leakproof functions,\n> > but only simplify \"pseudo\" constants like parameters with leakproof\n> > functions? I.e CASE WHEN ... THEN 1 / 0 would still raise an error\n> > during simplification but CASE WHEN ... THEN 1 / $1 wouldn't, because $1\n> > is not a real constant (even if PARAM_FLAG_CONST).\n> \n> Hmm, interesting idea. That might fix all the practical cases in plpgsql,\n> but it wouldn't do anything to make the behavior more explainable. 
Not\n> sure if we care about that though.\n\nI've probably done too much compiler stuff, but to me it doesn't seem\ntoo hard to understand that purely constant expressions may get\nevaluated unconditionally even when inside a CASE, but everything else\nwon't. The fact that we sometimes optimize params to be essentially\nconstants isn't really exposed to users, so shouldn't be confusing.\n\n\n> If we go this way I'd be inclined to do this instead of, not in addition\n> to, what I originally proposed. Not sure if that was how you envisioned\n> it, but I think this is probably sufficient for its purpose and we would\n> not need any additional lobotomization of const-simplification.\n\nYea, I would assume that we'd not need anything else. I've not thought\nabout the subquery case yet, so perhaps it'd be desirable to do\nsomething additional there.\n\n\n> > It doesn't seem like it'd be too hard to implement that, but that it'd\n> > probably be fairly bulky because we'd need to track more state across\n> > recursive expression_tree_mutator() calls.\n> \n> It wouldn't be any harder than what I posted upthread; it would\n> just be a different flag getting passed down in the context struct\n> and getting tested in a different place.\n\nCool.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jul 2020 13:42:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "Hi,\n\nOn 2020-07-23 13:42:08 -0700, Andres Freund wrote:\n> On 2020-07-23 16:34:25 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > It doesn't seem like it'd be too hard to implement that, but that it'd\n> > > probably be fairly bulky because we'd need to track more state across\n> > > recursive expression_tree_mutator() calls.\n> > \n> > It wouldn't be any harder than what I posted upthread; it would\n> > just be a different flag getting passed down in the context struct\n> > and getting tested in a different place.\n> \n> Cool.\n\nHm. Would SQL function inlining be a problem? It looks like that just\nsubstitutes parameters. Before calling\neval_const_expressions_mutator(). So we'd not know not to evaluate such\n\"pseudo constants\". And that'd probably be confusing, especially\nbecause it's not exactly obvious when inlining happens.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jul 2020 13:49:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Hm. Would SQL function inlining be a problem? It looks like that just\n> substitutes parameters. Before calling\n> eval_const_expressions_mutator(). So we'd not know not to evaluate such\n> \"pseudo constants\". And that'd probably be confusing, especially\n> because it's not exactly obvious when inlining happens.\n\nHm, interesting question. I think it might be all right without any\nfurther hacking, because the parameters we care about substituting\nwould have been handled (or not) before inlining. 
But the interactions\nwould be ticklish, and surely worthy of a test case or three.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Jul 2020 16:56:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "Hi,\n\nOn 2020-07-23 16:56:44 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Hm. Would SQL function inlining be a problem? It looks like that just\n> > substitutes parameters. Before calling\n> > eval_const_expressions_mutator(). So we'd not know not to evaluate such\n> > \"pseudo constants\". And that'd probably be confusing, especially\n> > because it's not exactly obvious when inlining happens.\n> \n> Hm, interesting question. I think it might be all right without any\n> further hacking, because the parameters we care about substituting\n> would have been handled (or not) before inlining. But the interactions\n> would be ticklish, and surely worthy of a test case or three.\n\nI'm a bit worried about a case like:\n\nSELECT foo(17);\nCREATE FUNCTION yell(int, int)\nRETURNS int\nIMMUTABLE\nLANGUAGE SQL AS $$\n SELECT CASE WHEN $1 != 0 THEN 17 / $2 ELSE NULL END\n$$;\n\nEXPLAIN SELECT yell(g.i, 0) FROM generate_series(1, 10) g(i);\n\nI don't think the parameters here would have been handled before\ninlining, right?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jul 2020 14:09:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I'm a bit worried about a case like:\n\n> CREATE FUNCTION yell(int, int)\n> RETURNS int\n> IMMUTABLE\n> LANGUAGE SQL AS $$\n> SELECT CASE WHEN $1 != 0 THEN 17 / $2 ELSE NULL END\n> $$;\n\n> EXPLAIN SELECT yell(g.i, 0) FROM generate_series(1, 10) g(i);\n\n> I don't think the parameters here would have been handled before\n> inlining, right?\n\nAh, I see what you mean. Yeah, that throws an error today, and it\nstill would with the patch I was envisioning (attached), because\ninlining does Param substitution in a different way. I'm not\nsure that we could realistically fix the inlining case with this\nsort of approach.\n\nI think this bears out the comment I made before that this approach\nstill leaves us with a very complicated behavior. Maybe we should\nstick with the previous approach, possibly supplemented with a\nleakproofness exception.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 23 Jul 2020 22:34:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "On Fri, Jul 24, 2020 at 4:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > I'm a bit worried about a case like:\n>\n> > CREATE FUNCTION yell(int, int)\n> > RETURNS int\n> > IMMUTABLE\n> > LANGUAGE SQL AS $$\n> > SELECT CASE WHEN $1 != 0 THEN 17 / $2 ELSE NULL END\n> > $$;\n>\n> > EXPLAIN SELECT yell(g.i, 0) FROM generate_series(1, 10) g(i);\n>\n> > I don't think the parameters here would have been handled before\n> > inlining, right?\n>\n> Ah, I see what you mean. Yeah, that throws an error today, and it\n> still would with the patch I was envisioning (attached), because\n> inlining does Param substitution in a different way. 
I'm not\n> sure that we could realistically fix the inlining case with this\n> sort of approach.\n>\n> I think this bears out the comment I made before that this approach\n> still leaves us with a very complicated behavior. Maybe we should\n> stick with the previous approach, possibly supplemented with a\n> leakproofness exception.\n>\n\n\nI am actually not so sure this is a good idea. Here are two doubts I have.\n\n1. The problem of when a given SQL expression is evaluated crops up in a\nwide variety of different contexts and, worst case, causes far more damage\nthan queries which always error. Removing the lower hanging fruit while\nleaving cases like:\n\nselect lock_foo(id), * from foo where somefield > 100; -- which rows does\nlock_foo(id) run on? Does it matter?\n\nis going to legitimize these complaints in a way which will be very hard to\ndo unless we also want to eventually be able to specify when volatile\nfunctions may be run. The two cases don't look the same but they are\nmanifestations of the same problem which is that when you execute a SQL\nquery you have no control over when expressions are actually run.\n\n2. The refusal to fold immutables within case statements here mean either\nwe do more tricks to get around the planner if we hit a pathological cases\nin performance. I am not convinced this is a net win.\n\nIf we go this route, would it be too much to ask to allow a GUC variable to\npreserve the old behavior?\n\n\n> regards, tom lane\n>\n>\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin\n\nOn Fri, Jul 24, 2020 at 4:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Andres Freund <andres@anarazel.de> writes:\n> I'm a bit worried about a case like:\n\n> CREATE FUNCTION yell(int, int)\n> RETURNS int\n> IMMUTABLE\n> LANGUAGE SQL AS $$\n>    SELECT CASE WHEN $1 != 0 THEN 17 / $2 ELSE NULL END\n> $$;\n\n> EXPLAIN SELECT yell(g.i, 0) FROM generate_series(1, 10) g(i);\n\n> I don't think the parameters here would have been handled before\n> inlining, right?\n\nAh, I see what you mean.  Yeah, that throws an error today, and it\nstill would with the patch I was envisioning (attached), because\ninlining does Param substitution in a different way.  I'm not\nsure that we could realistically fix the inlining case with this\nsort of approach.\n\nI think this bears out the comment I made before that this approach\nstill leaves us with a very complicated behavior.  Maybe we should\nstick with the previous approach, possibly supplemented with a\nleakproofness exception.I am actually not so sure this is a good idea. Here are two doubts I have.1.  The problem of when a given SQL expression is evaluated crops up in a wide variety of different contexts and, worst case, causes far more damage than queries which always error.  Removing the lower hanging fruit while leaving cases like:select lock_foo(id), * from foo where somefield > 100; -- which rows does lock_foo(id) run on?  Does it matter?is going to legitimize these complaints in a way which will be very hard to do unless we also want to eventually be able to specify when volatile functions may be run. The two cases don't look the same but they are manifestations of the same problem which is that when you execute a SQL query you have no control over when expressions are actually run.2.  The refusal to fold immutables within case statements here mean either we do more tricks to get around the planner if we hit a pathological cases in performance. 
 I am not convinced this is a net win.If we go this route, would it be too much to ask to allow a GUC variable to preserve the old behavior?\n                        regards, tom lane\n\n-- Best Regards,Chris TraversHead of DatabaseTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com Saarbrücker Straße 37a, 10405 Berlin", "msg_date": "Fri, 24 Jul 2020 16:17:36 +0200", "msg_from": "Chris Travers <chris.travers@adjust.com>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "I wrote:\n> Ah, I see what you mean. Yeah, that throws an error today, and it\n> still would with the patch I was envisioning (attached), because\n> inlining does Param substitution in a different way. I'm not\n> sure that we could realistically fix the inlining case with this\n> sort of approach.\n\nHere's another example that we can't possibly fix with Param substitution\nhacking, because there are no Params involved in the first place:\n\nselect f1, case when f1 = 42 then 1/i else null end\nfrom (select f1, 0 as i from int4_tbl) ss;\n\nPulling up the subquery results in \"1/0\", so this fails today,\neven though \"f1 = 42\" is never true.\n\nAttached is a v3 patch that incorporates the leakproofness idea.\nAs shown in the new case.sql tests, this does fix both the SQL\nfunction and subquery-pullup cases.\n\nTo keep the join regression test results the same, I marked int48()\nas leakproof, which is surely safe enough. Probably we should make\na push to mark all unconditionally-safe implicit coercions as\nleakproof, but that's a separate matter.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 24 Jul 2020 11:22:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "On Thu, Jul 23, 2020 at 12:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Every so often we get a complaint like [1] about how a CASE should have\n> prevented a run-time error and didn't, because constant-folding tried\n> to evaluate a subexpression that would not have been entered at run-time.\n\nYes, I've heard such complaints from other sources as well.\n\n> It struck me that it would not be hard to improve this situation a great\n> deal. If, within a CASE subexpression that isn't certain to be executed\n> at runtime, we refuse to pre-evaluate *any* function (essentially, treat\n> them all as volatile), then we should largely get the semantics that\n> users expect. There's some potential for query slowdown if a CASE\n> contains a constant subexpression that we formerly reduced at plan time\n> and now do not, but that doesn't seem to me to be a very big deal.\n\nLike Pavel, and I think implicitly Dagfinn and Andres, I'm not sure I\nbelieve this. Pavel's example is a good one. The leakproof exception\nhelps, but it doesn't cover everything. Users I've encountered throw\nthings like date_trunc() and lpad() into SQL code and expect them to\nbehave (from a performance point of view) like constants, but they\nalso expect 1/0 not to get evaluated too early when e.g. 
CASE is used.\nIt's difficult to meet both sets of expectations at the same time and\nwe're probably never going to have a perfect solution, but I think\nyou're minimizing the concern too much here.\n\n> This is not a complete fix, because if you write a sub-SELECT the\n> contents of the sub-SELECT are not processed by the outer query's\n> eval_const_expressions pass; instead, we look at it within the\n> sub-SELECT itself, and in that context there's no apparent reason\n> to avoid const-folding. So\n> CASE WHEN x < 0 THEN (SELECT 1/0) END\n> fails even if x is never less than zero. I don't see any great way\n> to avoid that, and I'm not particularly concerned about it anyhow;\n> usually the point of a sub-SELECT like this is to be decoupled from\n> outer query evaluation, so that the behavior should not be that\n> surprising.\n\nI don't think I believe this either. I don't think an average user is\ngoing to expect <expression> to behave differently from (SELECT\n<expression>). This one actually bothers me more than the previous\none. How would we even document it? Sometimes things get inlined,\nsometimes they don't. Sometimes subqueries get pulled up, sometimes\nnot. The current behavior isn't great, but at least it handles these\ncases consistently. Getting the easy cases \"right\" while making the\nbehavior in more complex cases harder to understand is not necessarily\na win.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 24 Jul 2020 12:31:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "Hi,\n\nOn 2020-07-24 12:31:05 -0400, Robert Haas wrote:\n> On Thu, Jul 23, 2020 at 12:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Every so often we get a complaint like [1] about how a CASE should have\n> > prevented a run-time error and didn't, because constant-folding tried\n> > to evaluate a subexpression that would not have been entered at run-time.\n> \n> Yes, I've heard such complaints from other sources as well.\n> \n> > It struck me that it would not be hard to improve this situation a great\n> > deal. If, within a CASE subexpression that isn't certain to be executed\n> > at runtime, we refuse to pre-evaluate *any* function (essentially, treat\n> > them all as volatile), then we should largely get the semantics that\n> > users expect. There's some potential for query slowdown if a CASE\n> > contains a constant subexpression that we formerly reduced at plan time\n> > and now do not, but that doesn't seem to me to be a very big deal.\n> \n> Like Pavel, and I think implicitly Dagfinn and Andres, I'm not sure I\n> believe this. Pavel's example is a good one. The leakproof exception\n> helps, but it doesn't cover everything. Users I've encountered throw\n> things like date_trunc() and lpad() into SQL code and expect them to\n> behave (from a performance point of view) like constants, but they\n> also expect 1/0 not to get evaluated too early when e.g. CASE is used.\n> It's difficult to meet both sets of expectations at the same time and\n> we're probably never going to have a perfect solution, but I think\n> you're minimizing the concern too much here.\n\nWouldn't the rule that I proposed earlier, namely that sub-expressions\nthat involve only \"proper\" constants continue to get evaluated even\nwithin CASE, largely address that?\n\n\n> I don't think I believe this either. 
I don't think an average user is\n> going to expect <expression> to behave differently from (SELECT\n> <expression>). This one actually bothers me more than the previous\n> one. How would we even document it? Sometimes things get inlined,\n> sometimes they don't. Sometimes subqueries get pulled up, sometimes\n> not. The current behavior isn't great, but at least it handles these\n> cases consistently. Getting the easy cases \"right\" while making the\n> behavior in more complex cases harder to understand is not necessarily\n> a win.\n\nWell, if we formalize the desired behaviour it's probably a lot easier\nto work towards implementing it in additional cases (like\nsubselects). It doesn't seem to hard to keep track of whether a specific\nsubquery can be evaluate constants in a certain way, if that's what we\nneed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 24 Jul 2020 09:49:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "pá 24. 7. 2020 v 18:49 odesílatel Andres Freund <andres@anarazel.de> napsal:\n\n> Hi,\n>\n> On 2020-07-24 12:31:05 -0400, Robert Haas wrote:\n> > On Thu, Jul 23, 2020 at 12:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Every so often we get a complaint like [1] about how a CASE should have\n> > > prevented a run-time error and didn't, because constant-folding tried\n> > > to evaluate a subexpression that would not have been entered at\n> run-time.\n> >\n> > Yes, I've heard such complaints from other sources as well.\n> >\n> > > It struck me that it would not be hard to improve this situation a\n> great\n> > > deal. If, within a CASE subexpression that isn't certain to be\n> executed\n> > > at runtime, we refuse to pre-evaluate *any* function (essentially,\n> treat\n> > > them all as volatile), then we should largely get the semantics that\n> > > users expect. There's some potential for query slowdown if a CASE\n> > > contains a constant subexpression that we formerly reduced at plan time\n> > > and now do not, but that doesn't seem to me to be a very big deal.\n> >\n> > Like Pavel, and I think implicitly Dagfinn and Andres, I'm not sure I\n> > believe this. Pavel's example is a good one. The leakproof exception\n> > helps, but it doesn't cover everything. Users I've encountered throw\n> > things like date_trunc() and lpad() into SQL code and expect them to\n> > behave (from a performance point of view) like constants, but they\n> > also expect 1/0 not to get evaluated too early when e.g. CASE is used.\n> > It's difficult to meet both sets of expectations at the same time and\n> > we're probably never going to have a perfect solution, but I think\n> > you're minimizing the concern too much here.\n>\n> Wouldn't the rule that I proposed earlier, namely that sub-expressions\n> that involve only \"proper\" constants continue to get evaluated even\n> within CASE, largely address that?\n>\n\nIt doesn't solve a possible performance problem with one shot (EXECUTE stmt\nplpgsql) queries, or with parameterized queries\n\n\n\n\n\n>\n> > I don't think I believe this either. I don't think an average user is\n> > going to expect <expression> to behave differently from (SELECT\n> > <expression>). This one actually bothers me more than the previous\n> > one. How would we even document it? Sometimes things get inlined,\n> > sometimes they don't. Sometimes subqueries get pulled up, sometimes\n> > not. 
The current behavior isn't great, but at least it handles these\n> > cases consistently. Getting the easy cases \"right\" while making the\n> > behavior in more complex cases harder to understand is not necessarily\n> > a win.\n>\n> Well, if we formalize the desired behaviour it's probably a lot easier\n> to work towards implementing it in additional cases (like\n> subselects). It doesn't seem to hard to keep track of whether a specific\n> subquery can be evaluate constants in a certain way, if that's what we\n> need.\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n>\n\npá 24. 7. 2020 v 18:49 odesílatel Andres Freund <andres@anarazel.de> napsal:Hi,\n\nOn 2020-07-24 12:31:05 -0400, Robert Haas wrote:\n> On Thu, Jul 23, 2020 at 12:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Every so often we get a complaint like [1] about how a CASE should have\n> > prevented a run-time error and didn't, because constant-folding tried\n> > to evaluate a subexpression that would not have been entered at run-time.\n> \n> Yes, I've heard such complaints from other sources as well.\n> \n> > It struck me that it would not be hard to improve this situation a great\n> > deal.  If, within a CASE subexpression that isn't certain to be executed\n> > at runtime, we refuse to pre-evaluate *any* function (essentially, treat\n> > them all as volatile), then we should largely get the semantics that\n> > users expect.  There's some potential for query slowdown if a CASE\n> > contains a constant subexpression that we formerly reduced at plan time\n> > and now do not, but that doesn't seem to me to be a very big deal.\n> \n> Like Pavel, and I think implicitly Dagfinn and Andres, I'm not sure I\n> believe this. Pavel's example is a good one. The leakproof exception\n> helps, but it doesn't cover everything. Users I've encountered throw\n> things like date_trunc() and lpad() into SQL code and expect them to\n> behave (from a performance point of view) like constants, but they\n> also expect 1/0 not to get evaluated too early when e.g. CASE is used.\n> It's difficult to meet both sets of expectations at the same time and\n> we're probably never going to have a perfect solution, but I think\n> you're minimizing the concern too much here.\n\nWouldn't the rule that I proposed earlier, namely that sub-expressions\nthat involve only \"proper\" constants continue to get evaluated even\nwithin CASE, largely address that?It doesn't solve a possible performance problem with one shot (EXECUTE stmt plpgsql) queries, or with parameterized queries \n\n\n> I don't think I believe this either. I don't think an average user is\n> going to expect <expression> to behave differently from (SELECT\n> <expression>). This one actually bothers me more than the previous\n> one. How would we even document it? Sometimes things get inlined,\n> sometimes they don't. Sometimes subqueries get pulled up, sometimes\n> not. The current behavior isn't great, but at least it handles these\n> cases consistently. Getting the easy cases \"right\" while making the\n> behavior in more complex cases harder to understand is not necessarily\n> a win.\n\nWell, if we formalize the desired behaviour it's probably a lot easier\nto work towards implementing it in additional cases (like\nsubselects). 
It doesn't seem to hard to keep track of whether a specific\nsubquery can be evaluate constants in a certain way, if that's what we\nneed.\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 24 Jul 2020 19:03:30 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "Hi,\n\nOn 2020-07-24 19:03:30 +0200, Pavel Stehule wrote:\n> pá 24. 7. 2020 v 18:49 odesílatel Andres Freund <andres@anarazel.de> napsal:\n> > Wouldn't the rule that I proposed earlier, namely that sub-expressions\n> > that involve only \"proper\" constants continue to get evaluated even\n> > within CASE, largely address that?\n> >\n> \n> It doesn't solve a possible performance problem with one shot (EXECUTE stmt\n> plpgsql) queries, or with parameterized queries\n\nWhat precisely are you thinking of here? Most expressions involving\nparameters would still get constant evaluated - it'd just be inside CASE\netc that they wouldn't anymore? Do you think it's that common to have a\nparameter reference inside an expression inside a CASE where it's\ncrucial that that parameter reference gets constant evaluated? I'd think\nthat's a bit of a stretch.\n\nYour earlier example of a WHEN ... THEN upper('constant') ... would\nstill have the upper('constant') be evaluated, because it doesn't\ninvolve a parameter. And e.g. THEN upper('constant') * $1 would also\nstill have the upper('constant') be evaluated, just the multiplication\nwith $1 wouldn't get evaluated.\n\n\nI'm not sure what you're concerned about with the one-shot bit?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 24 Jul 2020 10:13:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "pá 24. 7. 2020 v 19:13 odesílatel Andres Freund <andres@anarazel.de> napsal:\n\n> Hi,\n>\n> On 2020-07-24 19:03:30 +0200, Pavel Stehule wrote:\n> > pá 24. 7. 2020 v 18:49 odesílatel Andres Freund <andres@anarazel.de>\n> napsal:\n> > > Wouldn't the rule that I proposed earlier, namely that sub-expressions\n> > > that involve only \"proper\" constants continue to get evaluated even\n> > > within CASE, largely address that?\n> > >\n> >\n> > It doesn't solve a possible performance problem with one shot (EXECUTE\n> stmt\n> > plpgsql) queries, or with parameterized queries\n>\n> What precisely are you thinking of here? Most expressions involving\n> parameters would still get constant evaluated - it'd just be inside CASE\n> etc that they wouldn't anymore? Do you think it's that common to have a\n> parameter reference inside an expression inside a CASE where it's\n> crucial that that parameter reference gets constant evaluated? I'd think\n> that's a bit of a stretch.\n>\n> Your earlier example of a WHEN ... THEN upper('constant') ... 
But it's been\ntrue all along, and this patch isn't changing that behavior at all.\nI'm not sure if we should do anything more than improve the docs,\nbut in any case it seems independent of the CASE issue.\n\n> The current behavior isn't great, but at least it handles these\n> cases consistently.\n\nReally?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Jul 2020 13:18:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "Hi,\n\nOn 2020-07-23 22:34:53 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I'm a bit worried about a case like:\n> \n> > CREATE FUNCTION yell(int, int)\n> > RETURNS int\n> > IMMUTABLE\n> > LANGUAGE SQL AS $$\n> > SELECT CASE WHEN $1 != 0 THEN 17 / $2 ELSE NULL END\n> > $$;\n> \n> > EXPLAIN SELECT yell(g.i, 0) FROM generate_series(1, 10) g(i);\n> \n> > I don't think the parameters here would have been handled before\n> > inlining, right?\n> \n> Ah, I see what you mean. Yeah, that throws an error today, and it\n> still would with the patch I was envisioning (attached), because\n> inlining does Param substitution in a different way. I'm not\n> sure that we could realistically fix the inlining case with this\n> sort of approach.\n\nThinking about it a bit it seems we could solve that fairly easy if we\ndon't replace function parameter with actual Const nodes, but instead\nwith a PseudoConst parameter. If we map that to the same expression\nevaluation step that should be fairly cheap to implement, basically\nadding a bunch of 'case PseudoConst:' to the Const cases. That way we\ncould take the type of constness into account before constant folding.\n\nAlternatively we could add a new field to Const, indicating the 'source'\nor 'context of the Const, which we then could take into account during\nconstant evaluation.\n\n\n> I think this bears out the comment I made before that this approach\n> still leaves us with a very complicated behavior. Maybe we should\n> stick with the previous approach, possibly supplemented with a\n> leakproofness exception.\n\nISTM that most of the complication has to be dealt with in either\napproach?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 24 Jul 2020 10:26:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "pá 24. 7. 2020 v 19:13 odesílatel Andres Freund <andres@anarazel.de> napsal:\n\n> Hi,\n>\n> On 2020-07-24 19:03:30 +0200, Pavel Stehule wrote:\n> > pá 24. 7. 2020 v 18:49 odesílatel Andres Freund <andres@anarazel.de>\n> napsal:\n> > > Wouldn't the rule that I proposed earlier, namely that sub-expressions\n> > > that involve only \"proper\" constants continue to get evaluated even\n> > > within CASE, largely address that?\n> > >\n> >\n> > It doesn't solve a possible performance problem with one shot (EXECUTE\n> stmt\n> > plpgsql) queries, or with parameterized queries\n>\n> What precisely are you thinking of here? Most expressions involving\n> parameters would still get constant evaluated - it'd just be inside CASE\n> etc that they wouldn't anymore? Do you think it's that common to have a\n> parameter reference inside an expression inside a CASE where it's\n> crucial that that parameter reference gets constant evaluated? I'd think\n> that's a bit of a stretch.\n>\n> Your earlier example of a WHEN ... THEN upper('constant') ... 
would\n> still have the upper('constant') be evaluated, because it doesn't\n> involve a parameter. And e.g. THEN upper('constant') * $1 would also\n> still have the upper('constant') be evaluated, just the multiplication\n> with $1 wouldn't get evaluated.\n>\n>\n> I'm not sure what you're concerned about with the one-shot bit?\n>\n\nNow query parameters are evaluated like constant.\n\nI can imagine WHERE clause like WHERE col = CASE $1 WHEN true THEN\nupper($2) ELSE $2 END\n\nI remember applications that use these strange queries to support\nparameterized behaviour - like case sensitive or case insensitive searching.\n\n\n\n> Greetings,\n>\n> Andres Freund\n>\n\npá 24. 7. 2020 v 19:13 odesílatel Andres Freund <andres@anarazel.de> napsal:Hi,\n\nOn 2020-07-24 19:03:30 +0200, Pavel Stehule wrote:\n> pá 24. 7. 2020 v 18:49 odesílatel Andres Freund <andres@anarazel.de> napsal:\n> > Wouldn't the rule that I proposed earlier, namely that sub-expressions\n> > that involve only \"proper\" constants continue to get evaluated even\n> > within CASE, largely address that?\n> >\n> \n> It doesn't solve a possible performance problem with one shot (EXECUTE stmt\n> plpgsql) queries, or with parameterized queries\n\nWhat precisely are you thinking of here? Most expressions involving\nparameters would still get constant evaluated - it'd just be inside CASE\netc that they wouldn't anymore? Do you think it's that common to have a\nparameter reference inside an expression inside a CASE where it's\ncrucial that that parameter reference gets constant evaluated? I'd think\nthat's a bit of a stretch.\n\nYour earlier example of a WHEN ... THEN upper('constant') ... would\nstill have the upper('constant') be evaluated, because it doesn't\ninvolve a parameter. And e.g. THEN upper('constant') * $1 would also\nstill have the upper('constant') be evaluated, just the multiplication\nwith $1 wouldn't get evaluated.\n\n\nI'm not sure what you're concerned about with the one-shot bit?Now query parameters are evaluated like constant.I can imagine WHERE clause like WHERE col = CASE  $1 WHEN true THEN upper($2) ELSE $2 ENDI remember applications  that use these strange queries to support parameterized behaviour - like case sensitive or case insensitive searching. \n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 24 Jul 2020 19:30:37 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "\n\nOn July 24, 2020 10:30:37 AM PDT, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>pá 24. 7. 2020 v 19:13 odesílatel Andres Freund <andres@anarazel.de>\n>napsal:\n>> Your earlier example of a WHEN ... THEN upper('constant') ... would\n>> still have the upper('constant') be evaluated, because it doesn't\n>> involve a parameter. And e.g. THEN upper('constant') * $1 would also\n>> still have the upper('constant') be evaluated, just the\n>multiplication\n>> with $1 wouldn't get evaluated.\n>>\n>>\n>> I'm not sure what you're concerned about with the one-shot bit?\n>>\n>\n>Now query parameters are evaluated like constant.\n\nHow's that related to oneeshot plans?\n\n>I can imagine WHERE clause like WHERE col = CASE $1 WHEN true THEN\n>upper($2) ELSE $2 END\n>\n>I remember applications that use these strange queries to support\n>parameterized behaviour - like case sensitive or case insensitive\n>searching.\n\nI don't buy this as a significant issue. This could much more sensibly be written as an OR. 
If the policy is that we cannot regress anything then there's no way we can improve at all.\n\nAndres\n\nAndres\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Fri, 24 Jul 2020 10:40:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Wouldn't the rule that I proposed earlier, namely that sub-expressions\n> that involve only \"proper\" constants continue to get evaluated even\n> within CASE, largely address that?\n\nThe more I think about that the less I like it. It'd make the behavior\neven harder to reason about than it is now, and it doesn't fix the issue\nfor subquery pullup cases.\n\nBasically this seems like a whole lot of thrashing to try to preserve\nall the details of a behavior that is kind of accidental to begin with.\nThe argument that it's a performance issue seems hypothetical too,\nrather than founded on any observed results.\n\nBTW, to the extent that there is a performance issue, we could perhaps\nfix it if we resurrected the \"cache stable subexpressions\" patch that\nwas kicking around a year or two ago. That'd give us both\nat-most-one-evaluation and no-evaluation-until-necessary behaviors,\nif we made sure to apply it to stable CASE arms.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Jul 2020 13:46:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "pá 24. 7. 2020 v 19:46 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > Wouldn't the rule that I proposed earlier, namely that sub-expressions\n> > that involve only \"proper\" constants continue to get evaluated even\n> > within CASE, largely address that?\n>\n> The more I think about that the less I like it. It'd make the behavior\n> even harder to reason about than it is now, and it doesn't fix the issue\n> for subquery pullup cases.\n>\n> Basically this seems like a whole lot of thrashing to try to preserve\n> all the details of a behavior that is kind of accidental to begin with.\n> The argument that it's a performance issue seems hypothetical too,\n> rather than founded on any observed results.\n>\n> BTW, to the extent that there is a performance issue, we could perhaps\n> fix it if we resurrected the \"cache stable subexpressions\" patch that\n> was kicking around a year or two ago. That'd give us both\n> at-most-one-evaluation and no-evaluation-until-necessary behaviors,\n> if we made sure to apply it to stable CASE arms.\n>\n\n+1\n\nregards\n\nPavel\n\n\n> regards, tom lane\n>\n>\n>\n\npá 24. 7. 2020 v 19:46 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Andres Freund <andres@anarazel.de> writes:\n> Wouldn't the rule that I proposed earlier, namely that sub-expressions\n> that involve only \"proper\" constants continue to get evaluated even\n> within CASE, largely address that?\n\nThe more I think about that the less I like it.  
It'd make the behavior\neven harder to reason about than it is now, and it doesn't fix the issue\nfor subquery pullup cases.\n\nBasically this seems like a whole lot of thrashing to try to preserve\nall the details of a behavior that is kind of accidental to begin with.\nThe argument that it's a performance issue seems hypothetical too,\nrather than founded on any observed results.\n\nBTW, to the extent that there is a performance issue, we could perhaps\nfix it if we resurrected the \"cache stable subexpressions\" patch that\nwas kicking around a year or two ago.  That'd give us both\nat-most-one-evaluation and no-evaluation-until-necessary behaviors,\nif we made sure to apply it to stable CASE arms.+1 regardsPavel\n\n                        regards, tom lane", "msg_date": "Fri, 24 Jul 2020 21:02:39 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "On Fri, Jul 24, 2020 at 7:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Like Pavel, and I think implicitly Dagfinn and Andres, I'm not sure I\n> > believe this. Pavel's example is a good one. The leakproof exception\n> > helps, but it doesn't cover everything. Users I've encountered throw\n> > things like date_trunc() and lpad() into SQL code and expect them to\n> > behave (from a performance point of view) like constants, but they\n> > also expect 1/0 not to get evaluated too early when e.g. CASE is used.\n> > It's difficult to meet both sets of expectations at the same time and\n> > we're probably never going to have a perfect solution, but I think\n> > you're minimizing the concern too much here.\n>\n> I've quoted this point before, but ... we can make queries arbitrarily\n> fast, if we don't have to give the right answer. I think we've seen\n> enough complaints on this topic now to make it clear that what we're\n> doing today with CASE is the wrong answer.\n>\n\nSo here's my concern in a little more detail.\n\nFor small databases, these performance concerns are not big deals. But for\nlarge, heavily loaded databases one tends to run into all of the\npathological cases more frequently. In other words the overhead for the\nlargest users will likely not be proportional to the gains of the newer\nusers who are surprised by the current behavior. The more complex we make\nexceptions as to how the planner works, the more complex the knowledge\nrequired to work on the high end of the database is. So the complexity\nhere is such that I just don't think is worth it.\n\n\n> The performance argument can be made to cut both ways, too. If somebody's\n> got a very expensive function in a CASE arm that they don't expect to\n> reach, having it be evaluated anyway because it's got constant inputs\n> isn't going to make them happy.\n>\n\nHowever in this case we would be evaluating the expensive case arm every\ntime it is invoked (i.e. for every row matched), right? It is hard to see\nthis as even being close to a performance gain or even approximately\nneutral because the cases where you have a significant gain are likely to\nbe extremely rare, and the penalties for when the cost applies will be many\nmultiples of the maximum gain.\n\n>\n> The real bottom line is: if you don't want to do this, how else do\n> you want to fix the problem? 
I'm no longer willing to deny that\n> there is a problem.\n>\n\nI see three ways forward.\n\nThe first (probably the best) would be a solution along the lines of yours\nalong with a session-level GUC variable which could determine whether case\nbranches can fold constants. This has several important benefits:\n\n1. It gets a fix in shortly for those who want it.\n2. It ensures this is optional behavior for the more experienced users\n(where one can better decide which direction to go), and\n3. It makes the behavior explicit, documented, and thus more easily\nunderstood.\n\nA third approach would be to allow some sort of \"constant evaluation\nmechanism\" maybe with its own memory context where constants could be\ncached on first evaluation under the statement memory context. That would\nsolve the problem more gneerally.\n\n\n>\n> > I don't think I believe this either. I don't think an average user is\n> > going to expect <expression> to behave differently from (SELECT\n> > <expression>).\n>\n> Agreed, that's poorly (or not at all?) documented. But it's been\n> true all along, and this patch isn't changing that behavior at all.\n> I'm not sure if we should do anything more than improve the docs,\n> but in any case it seems independent of the CASE issue.\n>\n> > The current behavior isn't great, but at least it handles these\n> > cases consistently.\n>\n> Really?\n>\n> regards, tom lane\n>\n>\n>\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin\n\nOn Fri, Jul 24, 2020 at 7:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Robert Haas <robertmhaas@gmail.com> writes:\n> Like Pavel, and I think implicitly Dagfinn and Andres, I'm not sure I\n> believe this. Pavel's example is a good one. The leakproof exception\n> helps, but it doesn't cover everything. Users I've encountered throw\n> things like date_trunc() and lpad() into SQL code and expect them to\n> behave (from a performance point of view) like constants, but they\n> also expect 1/0 not to get evaluated too early when e.g. CASE is used.\n> It's difficult to meet both sets of expectations at the same time and\n> we're probably never going to have a perfect solution, but I think\n> you're minimizing the concern too much here.\n\nI've quoted this point before, but ... we can make queries arbitrarily\nfast, if we don't have to give the right answer.  I think we've seen\nenough complaints on this topic now to make it clear that what we're\ndoing today with CASE is the wrong answer.So here's my concern in a little more detail. For small databases, these performance concerns are not big deals. But for large, heavily loaded databases one tends to run into all of the pathological cases more frequently.  In other words the overhead for the largest users will likely not be proportional to the gains of the newer users who are surprised by the current behavior.  The more complex we make exceptions as to how the planner works, the more complex the knowledge required to work on the high end of the database is.  So the complexity here is such that I just don't think is worth it.\n\nThe performance argument can be made to cut both ways, too.  If somebody's\ngot a very expensive function in a CASE arm that they don't expect to\nreach, having it be evaluated anyway because it's got constant inputs\nisn't going to make them happy.However in this case we would be evaluating the expensive case arm every time it is invoked (i.e. for every row matched), right?  
It is hard to see this as even being close to a performance gain or even approximately neutral because the cases where you have a significant gain are likely to be extremely rare, and the penalties for when the cost applies will be many multiples of the maximum gain.\n\nThe real bottom line is: if you don't want to do this, how else do\nyou want to fix the problem?  I'm no longer willing to deny that\nthere is a problem.I see three ways forward.The first (probably the best) would be a solution along the lines of yours along with a session-level GUC variable which could determine whether case branches can fold constants.  This has several important benefits:1.  It gets a fix in shortly for those who want it.2.  It ensures this is optional behavior for the more experienced users (where one can better decide which direction to go), and3.  It makes the behavior explicit, documented, and thus more easily understood.A third approach would be to allow some sort of \"constant evaluation mechanism\" maybe with its own memory context where constants could be cached on first evaluation under the statement memory context.  That would solve the problem more gneerally. \n\n> I don't think I believe this either. I don't think an average user is\n> going to expect <expression> to behave differently from (SELECT\n> <expression>).\n\nAgreed, that's poorly (or not at all?) documented.  But it's been\ntrue all along, and this patch isn't changing that behavior at all.\nI'm not sure if we should do anything more than improve the docs,\nbut in any case it seems independent of the CASE issue.\n\n> The current behavior isn't great, but at least it handles these\n> cases consistently.\n\nReally?\n\n                        regards, tom lane\n\n\n-- Best Regards,Chris TraversHead of DatabaseTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com Saarbrücker Straße 37a, 10405 Berlin", "msg_date": "Sun, 26 Jul 2020 19:27:15 +0200", "msg_from": "Chris Travers <chris.travers@adjust.com>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "On Sun, Jul 26, 2020 at 1:27 PM Chris Travers <chris.travers@adjust.com> wrote:\n> The first (probably the best) would be a solution along the lines of yours along with a session-level GUC variable which could determine whether case branches can fold constants.\n\nBluntly, that seems like a terrible idea. It's great if you are an\nexpert DBA, because then you can adjust the behavior on your own\nsystem according to what works best for you. But if you are trying to\nwrite portable code that will work on any PostgreSQL instance, you now\nhave to remember to test it with every possible value of the GUC and\nmake sure it behaves the same way under all of them. That is a major\nburden on authors of tools and extensions, and if we add even three or\nfour such GUCs with three or four possible values each, there are\nsuddenly dozens or even hundreds of possible combinations to test. 
I\nthink that adding GUCs for this kind of thing is a complete\nnon-starter for that reason.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 29 Jul 2020 09:53:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" }, { "msg_contents": "On Fri, Jul 24, 2020 at 1:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The real bottom line is: if you don't want to do this, how else do\n> you want to fix the problem? I'm no longer willing to deny that\n> there is a problem.\n\nThat's the wrong question. The right question is whether we're\nsufficiently certain that a particular proposal is an improvement over\nthe status quo to justify changing something. It's better to do\nnothing than to do something that makes some cases better and other\ncases worse, because then instead of users having a problem with this,\nthey have a variety of different problems depending on which release\nthey are running. IMHO, changing the semantics of something like this\nis really scary and should be approached with great caution.\n\nYou don't have to deny that something is a problem in order to admit\nthat you might not have a perfect solution.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 29 Jul 2020 09:59:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making CASE error handling less surprising" } ]
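For readers skimming the thread above, the surprise being debated is easy to reproduce in psql. The following is only an illustrative sketch (the table and column names are invented), showing the behavior of releases current at the time of this discussion, where constant subexpressions inside a CASE arm are folded while the query is planned:

create temp table case_demo (x int);
insert into case_demo values (1);

-- No row ever reaches the ELSE arm, but the constant 1/0 inside it is
-- evaluated during planning:
select case when x > 0 then x else 1/0 end from case_demo;
-- ERROR:  division by zero

-- Wrapping the expression in a scalar sub-select defers its evaluation to
-- run time, which is the <expression> vs. (SELECT <expression>) difference
-- mentioned above; this form returns 1 without error:
select case when x > 0 then x else (select 1/0) end from case_demo;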
[ { "msg_contents": "One of our clients caught an error \"failed to find parent tuple for \nheap-only tuple at (50661,130) in table \"tbl'\" in PostgreSQL v12.\n\nSteps to reproduce (REL_12_STABLE):\n\n1) Create table with primary key, create brin index, fill table with \nsome initial data:\n\ncreate table tbl (id int primary key, a int) with (fillfactor=50);\ncreate index idx on tbl using brin (a) with (autosummarize=on);\ninsert into tbl select i, i from generate_series(0,100000) as i;\n\n2) Run script test_brin.sql using pgbench:\n\n  pgbench postgres -f ../review/brin_test.sql  -n -T 120\n\nThe script is a bit messy because I was trying to reproduce a \nproblematic workload. Though I didn't manage to simplify it.\nThe idea is that it inserts new values into the table to produce \nunindexed pages and also updates some values to trigger HOT-updates on \nthese pages.\n\n3) Open psql session and run brin_summarize_new_values\n\nselect brin_summarize_new_values('idx'::regclass::oid); \\watch 2\n\nWait a bit. And in psql you will see the ERROR.\n\nThis error is caused by the problem with root_offsets array bounds. It \noccurs if a new HOT tuple was inserted after we've collected \nroot_offsets, and thus we don't have root_offset for tuple's offnum. \nConcurrent insertions are possible, because brin_summarize_new_values() \nonly holds ShareUpdateLock on table and no lock (only pin) on the page.\n\nThe draft fix is in the attachments. It saves root_offsets_size and \nchecks that we only access valid fields.\nPatch also adds some debug messages, just to ensure that problem was caught.\n\nTODO:\n\n- check if  heapam_index_validate_scan() has the same problem\n- code cleanup\n- test other PostgreSQL versions\n\n[1] \nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoYgwjmmjK24Qxb_vWAu8_Hh7gfVFcr3%2BR7ocdLvYOWJXg%40mail.gmail.com\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 23 Jul 2020 20:39:11 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "[BUG] Error in BRIN summarization" }, { "msg_contents": "On 23.07.2020 20:39, Anastasia Lubennikova wrote:\n> One of our clients caught an error \"failed to find parent tuple for \n> heap-only tuple at (50661,130) in table \"tbl'\" in PostgreSQL v12.\n>\n> Steps to reproduce (REL_12_STABLE):\n>\n> 1) Create table with primary key, create brin index, fill table with \n> some initial data:\n>\n> create table tbl (id int primary key, a int) with (fillfactor=50);\n> create index idx on tbl using brin (a) with (autosummarize=on);\n> insert into tbl select i, i from generate_series(0,100000) as i;\n>\n> 2) Run script test_brin.sql using pgbench:\n>\n>  pgbench postgres -f ../review/brin_test.sql  -n -T 120\n>\n> The script is a bit messy because I was trying to reproduce a \n> problematic workload. Though I didn't manage to simplify it.\n> The idea is that it inserts new values into the table to produce \n> unindexed pages and also updates some values to trigger HOT-updates on \n> these pages.\n>\n> 3) Open psql session and run brin_summarize_new_values\n>\n> select brin_summarize_new_values('idx'::regclass::oid); \\watch 2\n>\n> Wait a bit. And in psql you will see the ERROR.\n>\n> This error is caused by the problem with root_offsets array bounds. It \n> occurs if a new HOT tuple was inserted after we've collected \n> root_offsets, and thus we don't have root_offset for tuple's offnum. 
\n> Concurrent insertions are possible, because \n> brin_summarize_new_values() only holds ShareUpdateLock on table and no \n> lock (only pin) on the page.\n>\n> The draft fix is in the attachments. It saves root_offsets_size and \n> checks that we only access valid fields.\n> Patch also adds some debug messages, just to ensure that problem was \n> caught.\n>\n> TODO:\n>\n> - check if  heapam_index_validate_scan() has the same problem\n> - code cleanup\n> - test other PostgreSQL versions\n>\n> [1] \n> https://www.postgresql.org/message-id/flat/CA%2BTgmoYgwjmmjK24Qxb_vWAu8_Hh7gfVFcr3%2BR7ocdLvYOWJXg%40mail.gmail.com\n>\n\nHere is the updated version of the fix.\nThe problem can be reproduced on all supported versions, so I suggest to \nbackpatch it.\nCode slightly changed in v12, so here are two patches: one for versions \n9.5 to 11 and another for versions from 12 to master.\n\nAs for heapam_index_validate_scan(), I've tried to reproduce the same \nerror with CREATE INDEX CONCURRENTLY, but haven't found any problem with it.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 27 Jul 2020 18:21:06 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [BUG] Error in BRIN summarization" }, { "msg_contents": "On 2020-Jul-27, Anastasia Lubennikova wrote:\n\n> Here is the updated version of the fix.\n> The problem can be reproduced on all supported versions, so I suggest to\n> backpatch it.\n> Code slightly changed in v12, so here are two patches: one for versions 9.5\n> to 11 and another for versions from 12 to master.\n\nHi Anastasia, thanks for this report and fix. I was considering this\nlast week and noticed that the patch changes the ABI of\nheap_get_root_tuples, which may be problematic in back branches. I\nsuggest that for unreleased branches (12 and prior) we need to create a\nnew function with the new signature, and keep heap_get_root_tuples\nunchanged. In 13 and master we don't need that trick, so we can keep\nthe code as you have it in this version of the patch.\n\nOffsetNumber\nheap_get_root_tuples_new(Page page, OffsetNumber *root_offsets)\n{ .. full implementation ... 
}\n\n/* ABI compatibility only */\nvoid\nheap_get_root_tuples(Page page, OffsetNumber *root_offsets)\n{\n\t(void) heap_get_root_tuples_new(page, root_offsets);\n}\n\n\n(I was also considering whether it needs to be a loop to reobtain root\ntuples, in case a concurrent transaction can create a new item while\nwe're checking that item; but I don't think that can really happen for\none individual tuple.)\n\nThanks\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 27 Jul 2020 13:25:29 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Error in BRIN summarization" }, { "msg_contents": "On Mon, Jul 27, 2020 at 10:25 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> (I was also considering whether it needs to be a loop to reobtain root\n> tuples, in case a concurrent transaction can create a new item while\n> we're checking that item; but I don't think that can really happen for\n> one individual tuple.)\n\nI wonder if something like that is the underlying problem in a recent\nproblem case involving a \"REINDEX index\npg_class_tblspc_relfilenode_index\" command that runs concurrently with\nthe regression tests:\n\nhttps://postgr.es/m/CAH2-WzmBxu4o=pMsniur+bwHqCGCmV_AOLkuK6BuU7ngA6evqw@mail.gmail.com\n\nWe see a violation of the HOT invariant in this case, though only for\na system catalog index, and only in fairly particular circumstances\ninvolving concurrency.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 28 Jul 2020 12:18:09 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [BUG] Error in BRIN summarization" }, { "msg_contents": "On 27.07.2020 20:25, Alvaro Herrera wrote:\n> On 2020-Jul-27, Anastasia Lubennikova wrote:\n>\n>> Here is the updated version of the fix.\n>> The problem can be reproduced on all supported versions, so I suggest to\n>> backpatch it.\n>> Code slightly changed in v12, so here are two patches: one for versions 9.5\n>> to 11 and another for versions from 12 to master.\n>>\n>>\n>>\n>> (I was also considering whether it needs to be a loop to reobtain root\n>> tuples, in case a concurrent transaction can create a new item while\n>> we're checking that item; but I don't think that can really happen for\n>> one individual tuple.)\nI don't think we need a recheck for a single tuple, because we only care \nabout finding its root, which simply must exist somewhere on this page, \nas concurrent pruning is not allowed. We also may catch root_offsets[] \nfor subsequent tuples, but it's okay if we don't. These tuples will do \nthe same recheck on their turn.\n\n\nWhile testing this fix, Alexander Lakhin spotted another problem. 
I \nsimplified  the test case to this:\n\n1) prepare a table with brin index\n\ncreate table tbl (i int, b char(1000)) with (fillfactor=10);\ninsert into tbl select i, md5(i::text) from generate_series(0,1000) as i;\ncreate index idx on tbl using brin(i, b) with (pages_per_range=1);\n\n2) run brin_desummarize_range() in a loop:\n\necho \"-- desummarize all ranges\n SELECT FROM generate_series(0, pg_relation_size('tbl')/8192 - 1) as i, lateral (SELECT brin_desummarize_range('idx', i)) as d;\n-- summarize them back\nVACUUM tbl\" > brin_desum_test.sql\n\npgbench postgres -f  brin_desum_test.sql -n -T 120\n\n\n3) run a search that invokes bringetbitmap:\n\n set enable_seqscan to off;\n  explain analyze select * from tbl where i>10 and i<100; \\watch 1\n\nAfter a few runs, it will fail with \"ERROR: corrupted BRIN index: \ninconsistent range map\"\n\nThe problem is caused by a race in page locking in \nbrinGetTupleForHeapBlock [1]:\n\n(1) bitmapscan locks revmap->rm_currBuf and finds the address of the \ntuple on a regular page \"page\", then unlocks revmap->rm_currBuf\n(2) in another transaction desummarize locks both revmap->rm_currBuf and \n\"page\", cleans up the tuple and unlocks both buffers\n(1) bitmapscan locks buffer, containing \"page\", attempts to access the \ntuple and fails to find it\n\n\nAt first, I tried to fix it by holding the lock on revmap->rm_currBuf \nuntil we locked the regular page, but it causes a deadlock with \nbrinsummarize(). It can be easily reproduced with the same test as above.\nIs there any rule about the order of locking revmap and regular pages in \nbrin? I haven't found anything in README.\n\nAs an alternative, we can leave locks as is and add a recheck, before \nthrowing an error.\n\nWhat do you think?\n\n[1] \nhttps://github.com/postgres/postgres/blob/master/src/backend/access/brin/brin_revmap.c#L269\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\nOn 27.07.2020 20:25, Alvaro Herrera\n wrote:\n\n\nOn 2020-Jul-27, Anastasia Lubennikova wrote:\n\n\n\nHere is the updated version of the fix.\nThe problem can be reproduced on all supported versions, so I suggest to\nbackpatch it.\nCode slightly changed in v12, so here are two patches: one for versions 9.5\nto 11 and another for versions from 12 to master.\n\n\n\n(I was also considering whether it needs to be a loop to reobtain root\ntuples, in case a concurrent transaction can create a new item while\nwe're checking that item; but I don't think that can really happen for\none individual tuple.)\n\n\n\n I don't think we need a recheck for a single tuple, because we only\n care about finding its root, which simply must exist somewhere on\n this page, as concurrent pruning is not allowed. 
We also may catch\n root_offsets[] for subsequent tuples, but it's okay if we don't.\n These tuples will do the same recheck on their turn.\n\n\nWhile testing this fix, Alexander Lakhin spotted another problem.\n I simplified  the test case to this:\n\n 1) prepare a table with brin index\n\ncreate table tbl (i int, b char(1000)) with (fillfactor=10);\ninsert into tbl select i, md5(i::text) from generate_series(0,1000) as i;\ncreate index idx on tbl using brin(i, b) with (pages_per_range=1);\n\n2) run brin_desummarize_range() in a loop:\n\necho \"-- desummarize all ranges\n SELECT FROM generate_series(0, pg_relation_size('tbl')/8192 - 1) as i, lateral (SELECT brin_desummarize_range('idx', i)) as d; \n-- summarize them back \nVACUUM tbl\" > brin_desum_test.sql\n\npgbench postgres -f  brin_desum_test.sql -n -T 120\n\n 3) run a search that invokes bringetbitmap:\n\n set enable_seqscan to off;\n explain analyze select * from tbl where i>10 and i<100; \\watch 1\n\nAfter a few runs, it will fail with \"ERROR: corrupted BRIN index:\n inconsistent range map\"\n\n The problem is caused by a race in page locking in\n brinGetTupleForHeapBlock [1]:\n\n (1) bitmapsan locks revmap->rm_currBuf and finds the address of\n the tuple on a regular page \"page\", then unlocks\n revmap->rm_currBuf\n (2) in another transaction desummarize locks both\n revmap->rm_currBuf and \"page\", cleans up the tuple and unlocks\n both buffers\n (1) bitmapscan locks buffer, containing \"page\", attempts to access\n the tuple and fails to find it\n\n\n At first, I tried to fix it by holding the lock on\n revmap->rm_currBuf until we locked the regular page, but it\n causes a deadlock with brinsummarize(), It can be easily\n reproduced with the same test as above.\n Is there any rule about the order of locking revmap and regular\n pages in brin? I haven't found anything in README.\n\n As an alternative, we can leave locks as is and add a recheck,\n before throwing an error.\n\n What do you think?\n\n [1] https://github.com/postgres/postgres/blob/master/src/backend/access/brin/brin_revmap.c#L269\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 30 Jul 2020 16:40:46 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [BUG] Error in BRIN summarization" }, { "msg_contents": "On 30.07.2020 16:40, Anastasia Lubennikova wrote:\n> While testing this fix, Alexander Lakhin spotted another problem.\n>\n> After a few runs, it will fail with \"ERROR: corrupted BRIN index: \n> inconsistent range map\"\n>\n> The problem is caused by a race in page locking in \n> brinGetTupleForHeapBlock [1]:\n>\n> (1) bitmapsan locks revmap->rm_currBuf and finds the address of the \n> tuple on a regular page \"page\", then unlocks revmap->rm_currBuf\n> (2) in another transaction desummarize locks both revmap->rm_currBuf \n> and \"page\", cleans up the tuple and unlocks both buffers\n> (1) bitmapscan locks buffer, containing \"page\", attempts to access the \n> tuple and fails to find it\n>\n>\n> At first, I tried to fix it by holding the lock on revmap->rm_currBuf \n> until we locked the regular page, but it causes a deadlock with \n> brinsummarize(), It can be easily reproduced with the same test as above.\n> Is there any rule about the order of locking revmap and regular pages \n> in brin? 
I haven't found anything in README.\n>\n> As an alternative, we can leave locks as is and add a recheck, before \n> throwing an error.\n>\nHere are the updated patches for both problems.\n\n1) brin_summarize_fix_REL_12_v2 fixes\n\"failed to find parent tuple for heap-only tuple at (50661,130) in table \n\"tbl'\"\n\nThis patch checks that we only access initialized entries of \nroot_offsets[] array. If necessary, collect the array again. One recheck \nis enough here, since concurrent pruning is not possible.\n\n2) brin_pagelock_fix_REL_12_v1.patch fixes\n\"ERROR: corrupted BRIN index: inconsistent range map\"\n\nThis patch adds a recheck as suggested in previous message.\nI am not sure if one recheck is enough to eliminate the race completely, \nbut the problem cannot be reproduced anymore.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 10 Aug 2020 20:30:44 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [BUG] Error in BRIN summarization" }, { "msg_contents": "On 2020-Jul-30, Anastasia Lubennikova wrote:\n\n> While testing this fix, Alexander Lakhin spotted another problem. I\n> simplified� the test case to this:\n\nAh, good catch. I think a cleaner way to fix this problem is to just\nconsider the range as not summarized and return NULL from there, as in\nthe attached patch. Running your test case with a telltale WARNING\nadded at that point, it's clear that it's being hit.\n\nBy returning NULL, we're forcing the caller to scan the heap, which is\nnot great. But note that if you retry, and your VACUUM hasn't run yet\nby the time we go through the loop again, the same thing would happen.\nSo it seems to me a good enough answer.\n\nA much more troubling thought is what happens if the range is\ndesummarized, then the index item is used for the summary of a different\nrange. Then the index might end up returning corrupt results.\n\n> At first, I tried to fix it by holding the lock on revmap->rm_currBuf until\n> we locked the regular page, but it causes a deadlock with brinsummarize(),\n> It can be easily reproduced with the same test as above.\n> Is there any rule about the order of locking revmap and regular pages in\n> brin? I haven't found anything in README.\n\nUmm, I thought that stuff was in the README, but it seems I didn't add\nit there. I think I had a .org file with my notes on that ... must be\nin an older laptop disk, because it's not in my worktree for that. I'll\nsee if I can fish it out.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 11 Aug 2020 19:43:21 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Error in BRIN summarization" }, { "msg_contents": "On 2020-Jul-23, Anastasia Lubennikova wrote:\n\n> This error is caused by the problem with root_offsets array bounds. It\n> occurs if a new HOT tuple was inserted after we've collected root_offsets,\n> and thus we don't have root_offset for tuple's offnum. Concurrent insertions\n> are possible, because brin_summarize_new_values() only holds ShareUpdateLock\n> on table and no lock (only pin) on the page.\n\nExcellent detective work, thanks.\n\n> The draft fix is in the attachments. It saves root_offsets_size and checks\n> that we only access valid fields.\n\nI think this is more complicated than necessary. 
It seems easier to\nsolve this problem by just checking whether the given root pointer is\nset to InvalidOffsetNumber, which is already done in the existing coding\nof heap_get_root_tuples (only they spell it \"0\" rather than\nInvalidOffsetNumber, which I propose to change). AFAIR this should only\nhappen in the 'anyvisible' mode, so I added that in an assert.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 11 Aug 2020 20:19:52 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Error in BRIN summarization" }, { "msg_contents": "On 2020-Jul-28, Peter Geoghegan wrote:\n\n> On Mon, Jul 27, 2020 at 10:25 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > (I was also considering whether it needs to be a loop to reobtain root\n> > tuples, in case a concurrent transaction can create a new item while\n> > we're checking that item; but I don't think that can really happen for\n> > one individual tuple.)\n> \n> I wonder if something like that is the underlying problem in a recent\n> problem case involving a \"REINDEX index\n> pg_class_tblspc_relfilenode_index\" command that runs concurrently with\n> the regression tests:\n> \n> https://postgr.es/m/CAH2-WzmBxu4o=pMsniur+bwHqCGCmV_AOLkuK6BuU7ngA6evqw@mail.gmail.com\n> \n> We see a violation of the HOT invariant in this case, though only for\n> a system catalog index, and only in fairly particular circumstances\n> involving concurrency.\n\nHmm. As far as I understand, the bug Anastasia reports can only hit an\nindex build that occurs concurrently to heap updates; and that cannot\nhappen for a regular index build, only for CREATE INDEX CONCURRENTLY and\nREINDEX CONCURRENTLY. So unless I miss something, it's not related to\nthat other bug.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 12 Aug 2020 12:00:06 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Error in BRIN summarization" }, { "msg_contents": "On 2020-Aug-11, Alvaro Herrera wrote:\n\n> I think this is more complicated than necessary. It seems easier to\n> solve this problem by just checking whether the given root pointer is\n> set to InvalidOffsetNumber, which is already done in the existing coding\n> of heap_get_root_tuples (only they spell it \"0\" rather than\n> InvalidOffsetNumber, which I propose to change). AFAIR this should only\n> happen in the 'anyvisible' mode, so I added that in an assert.\n\n'anyvisible' mode is not required AFAICS; reading the code, I think this\ncould also hit REINDEX CONCURRENTLY and CREATE INDEX CONCURRENTLY, which\ndo not use that flag. I didn't try to reproduce it there, though.\nAnyway, I'm going to remove that Assert() I added.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 12 Aug 2020 12:01:49 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Error in BRIN summarization" }, { "msg_contents": "On 2020-Aug-11, Alvaro Herrera wrote:\n\n> A much more troubling thought is what happens if the range is\n> desummarized, then the index item is used for the summary of a different\n> range. 
Then the index might end up returning corrupt results.\n\nActually, this is not a concern because the brin tuple's bt_blkno is\nrechecked before returning it, and if it doesn't match what we're\nsearching, the loop is restarted. It becomes an infinite loop problem\nif the revmap is pointing to a tuple that's labelled with a different\nrange's blkno. So I think my patch as posted is a sufficient fix for\nthis problem.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 12 Aug 2020 14:02:18 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Error in BRIN summarization" }, { "msg_contents": "On 2020-Aug-12, Alvaro Herrera wrote:\n\n> 'anyvisible' mode is not required AFAICS; reading the code, I think this\n> could also hit REINDEX CONCURRENTLY and CREATE INDEX CONCURRENTLY, which\n> do not use that flag. I didn't try to reproduce it there, though.\n> Anyway, I'm going to remove that Assert() I added.\n\nSo this is what I propose as the final form of the fix.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 12 Aug 2020 15:58:19 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Error in BRIN summarization" }, { "msg_contents": "On 12.08.2020 22:58, Alvaro Herrera wrote:\n> On 2020-Aug-12, Alvaro Herrera wrote:\n>\n>> 'anyvisible' mode is not required AFAICS; reading the code, I think this\n>> could also hit REINDEX CONCURRENTLY and CREATE INDEX CONCURRENTLY, which\n>> do not use that flag. I didn't try to reproduce it there, though.\n>> Anyway, I'm going to remove that Assert() I added.\n> So this is what I propose as the final form of the fix.\n>\nCool.\nThis version looks much simpler than mine and passes the tests fine.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 13 Aug 2020 13:06:18 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [BUG] Error in BRIN summarization" }, { "msg_contents": "On 2020-Aug-13, Anastasia Lubennikova wrote:\n\n> Cool.\n> This version looks much simpler than mine and passes the tests fine.\n\nThanks, pushed it to all branches. 
Thanks for diagnosing this problem!\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 13 Aug 2020 17:53:49 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Error in BRIN summarization" }, { "msg_contents": "hyrax's latest report suggests that this patch has issues under\nCLOBBER_CACHE_ALWAYS:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2020-08-13%2005%3A09%3A58\n\nHard to tell whether there's an actual bug there or just test instability,\nbut either way it needs to be resolved.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 15 Aug 2020 10:54:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [BUG] Error in BRIN summarization" }, { "msg_contents": "On 2020-Aug-15, Tom Lane wrote:\n\n> hyrax's latest report suggests that this patch has issues under\n> CLOBBER_CACHE_ALWAYS:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2020-08-13%2005%3A09%3A58\n> \n> Hard to tell whether there's an actual bug there or just test instability,\n> but either way it needs to be resolved.\n\nHmm, the only explanation I can see for this is that autovacuum managed\nto summarize the range before the test script did it. So the solution\nwould simply be to disable autovacuum for the table across the whole\nscript.\n\nI'm running the scripts and dependencies to verify that fix, but under\nCLOBBER_CACHE_ALWAYS that takes quite a bit.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 17 Aug 2020 15:07:58 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Error in BRIN summarization" }, { "msg_contents": "On 2020-Aug-17, Alvaro Herrera wrote:\n\n> Hmm, the only explanation I can see for this is that autovacuum managed\n> to summarize the range before the test script did it. So the solution\n> would simply be to disable autovacuum for the table across the whole\n> script.\n> \n> I'm running the scripts and dependencies to verify that fix, but under\n> CLOBBER_CACHE_ALWAYS that takes quite a bit.\n\nI ran a subset of tests a few times, but was unable to reproduce the\nproblem. I'll just push this to all branches and hope for the best.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 17 Aug 2020 16:21:01 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Error in BRIN summarization" } ]
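As a companion to the reproduction scripts above, the summarization entry points involved can be exercised interactively along these lines. This is only a sketch, not one of the thread's test cases: the object names are invented and the inspection query assumes the pageinspect extension is available.

create table brin_demo (i int, filler text) with (fillfactor = 10);
create index brin_demo_idx on brin_demo using brin (i) with (pages_per_range = 1);
insert into brin_demo select g, md5(g::text) from generate_series(1, 10000) g;

-- Ranges added by inserts after the index build stay unsummarized until
-- vacuum runs or this is called explicitly; it returns the number of
-- ranges it summarized:
select brin_summarize_new_values('brin_demo_idx');

-- Desummarize and re-summarize a single range, which is what the pgbench
-- script above does in a loop:
select brin_desummarize_range('brin_demo_idx', 0);
select brin_summarize_range('brin_demo_idx', 0);

-- With pageinspect, the index tuples backing the ranges can be examined
-- (for a small BRIN index, block 2 is usually the first regular data page):
create extension if not exists pageinspect;
select itemoffset, blknum, value
from brin_page_items(get_raw_page('brin_demo_idx', 2), 'brin_demo_idx')
limit 5;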
[ { "msg_contents": "Hi,\n\nIn a development branch of mine Thomas / the CF bot found a relatively\nrare regression failures. That turned out to be because there was an\nedge case in which heap_page_prune() was a bit more pessimistic than\nlazy_scan_heap(). But I wonder if this isn't an issue more broadly:\n\nThe issue I am concerned about is lazy_scan_heap()'s logic for DEAD HOT\nupdated tuples:\n\n\t\t\t\t\t/*\n\t\t\t\t\t * Ordinarily, DEAD tuples would have been removed by\n\t\t\t\t\t * heap_page_prune(), but it's possible that the tuple\n\t\t\t\t\t * state changed since heap_page_prune() looked. In\n\t\t\t\t\t * particular an INSERT_IN_PROGRESS tuple could have\n\t\t\t\t\t * changed to DEAD if the inserter aborted. So this\n\t\t\t\t\t * cannot be considered an error condition.\n\t\t\t\t\t *\n\t\t\t\t\t * If the tuple is HOT-updated then it must only be\n\t\t\t\t\t * removed by a prune operation; so we keep it just as if\n\t\t\t\t\t * it were RECENTLY_DEAD. Also, if it's a heap-only\n\t\t\t\t\t * tuple, we choose to keep it, because it'll be a lot\n\t\t\t\t\t * cheaper to get rid of it in the next pruning pass than\n\t\t\t\t\t * to treat it like an indexed tuple. Finally, if index\n\t\t\t\t\t * cleanup is disabled, the second heap pass will not\n\t\t\t\t\t * execute, and the tuple will not get removed, so we must\n\t\t\t\t\t * treat it like any other dead tuple that we choose to\n\t\t\t\t\t * keep.\n\t\t\t\t\t *\n\t\t\t\t\t * If this were to happen for a tuple that actually needed\n\t\t\t\t\t * to be deleted, we'd be in trouble, because it'd\n\t\t\t\t\t * possibly leave a tuple below the relation's xmin\n\t\t\t\t\t * horizon alive. heap_prepare_freeze_tuple() is prepared\n\t\t\t\t\t * to detect that case and abort the transaction,\n\t\t\t\t\t * preventing corruption.\n\t\t\t\t\t */\n\t\t\t\t\tif (HeapTupleIsHotUpdated(&tuple) ||\n\t\t\t\t\t\tHeapTupleIsHeapOnly(&tuple) ||\n\t\t\t\t\t\tparams->index_cleanup == VACOPT_TERNARY_DISABLED)\n\t\t\t\t\t\tnkeep += 1;\n\t\t\t\t\telse\n\t\t\t\t\t\ttupgone = true; /* we can delete the tuple */\n\t\t\t\t\tall_visible = false;\n\t\t\t\t\tbreak;\n\nIn the case the HOT logic triggers, we'll call\nheap_prepare_freeze_tuple() even when the tuple is dead. Which then can\nlead us to\n\t\tif (TransactionIdPrecedes(xid, cutoff_xid))\n\t\t{\n\t\t\t/*\n\t\t\t * If we freeze xmax, make absolutely sure that it's not an XID\n\t\t\t * that is important. (Note, a lock-only xmax can be removed\n\t\t\t * independent of committedness, since a committed lock holder has\n\t\t\t * released the lock).\n\t\t\t */\n\t\t\tif (!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask) &&\n\t\t\t\tTransactionIdDidCommit(xid))\n\t\t\t\tereport(PANIC,\n\t\t\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n\t\t\t\t\t\t errmsg_internal(\"cannot freeze committed xmax %u\",\n\t\t\t\t\t\t\t\t\t\t xid)));\n\t\t\tfreeze_xmax = true;\n\n(before those errors we'd just have unset xmax)\n\nNow obviously the question is whether it's possible that\nheap_page_prune() left alive anything that could be seen as DEAD for the\ncheck in lazy_scan_heap(), and that additionally is older than the\ncutoff passed to heap_prepare_freeze_tuple().\n\nI'm not sure - it seems like it could be possible in some corner cases,\nwhen transactions abort after the heap_page_prune() but before the\nsecond HeapTupleSatisfiesVacuum().\n\nBut regardless of whether it's possible today, it seems extremely\nfragile. 
ISTM we should at least have a bunch of additional error checks\nin the HOT branch for HEAPTUPLE_DEAD.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jul 2020 11:10:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "HOT vs freezing issue causing \"cannot freeze committed xmax\"" }, { "msg_contents": "On Thu, Jul 23, 2020 at 2:10 PM Andres Freund <andres@anarazel.de> wrote:\n> In the case the HOT logic triggers, we'll call\n> heap_prepare_freeze_tuple() even when the tuple is dead.\n\nI think this is very bad. I've always been confused about these\ncomments, but I couldn't quite put my finger on the problem. Now I\nthink I can: the comments here imagine that we have the option either\nto set tupgone, causing the line pointer to be marked unused by an\neventual call to lazy_vacuum_page(), or that we can decline to set\ntupgone, which will leave the tuple around to be handled by the next\nvacuum.\n\nHowever, we don't really have a choice at all. A choice implies that\neither option is correct, and therefore we can elect the one we\nprefer. But here, it's not just that one option is incorrect, but that\nboth options are incorrect. Setting tupgone controls whether or not\nthe tuple is considered for freezing. If we DON'T consider freezing\nit, then we might manage to advance relfrozenxid while an older XID\nstill exists in the table. If we DO consider freezing it, we will\ncorrectly conclude that it needs to be frozen, but then the freezing\ncode is in an impossible situation, because it has no provision for\ngetting rid of tuples, only for keeping them. Its logic assumes that\nwhenever we are freezing xmin or xmax we do that in a way that causes\nthe tuple to be visible to everyone, but this tuple should be visible\nto no one. So with your changes it now errors out instead of\ncorrupting data, but that's just replacing one bad thing (data\ncorruption) with another (VACUUM failures).\n\nI think the actual correct behavior here is to mark the line pointer\nas dead. The easiest way to accomplish that is probably to have\nlazy_scan_heap() just emit an extra XLOG_HEAP2_CLEAN record beyond\nwhatever HOT-pruning already did, if it's necessary. A better solution\nwould probably be to merge HOT-pruning with setting things all-visible\nand have a single function that does both, but that seems a lot more\ninvasive, and definitely unsuitable for back-patching.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 24 Jul 2020 11:06:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: HOT vs freezing issue causing \"cannot freeze committed xmax\"" }, { "msg_contents": "Hi,\n\nOn 2020-07-24 11:06:58 -0400, Robert Haas wrote:\n> On Thu, Jul 23, 2020 at 2:10 PM Andres Freund <andres@anarazel.de> wrote:\n> > In the case the HOT logic triggers, we'll call\n> > heap_prepare_freeze_tuple() even when the tuple is dead.\n>\n> I think this is very bad. I've always been confused about these\n> comments, but I couldn't quite put my finger on the problem. Now I\n> think I can: the comments here imagine that we have the option either\n> to set tupgone, causing the line pointer to be marked unused by an\n> eventual call to lazy_vacuum_page(), or that we can decline to set\n> tupgone, which will leave the tuple around to be handled by the next\n> vacuum.\n\nYea. 
I think the only saving grace is that it's not obvious when the\nsituation can arise without prior corruption. But even if that's actuall\nimpossible, it seems extremely fragile. I stared at heap_prune_chain()\nfor quite a while and couldn't convince myself either way.\n\n\n> However, we don't really have a choice at all. A choice implies that\n> either option is correct, and therefore we can elect the one we\n> prefer. But here, it's not just that one option is incorrect, but that\n> both options are incorrect. Setting tupgone controls whether or not\n> the tuple is considered for freezing. If we DON'T consider freezing\n> it, then we might manage to advance relfrozenxid while an older XID\n> still exists in the table. If we DO consider freezing it, we will\n> correctly conclude that it needs to be frozen, but then the freezing\n> code is in an impossible situation, because it has no provision for\n> getting rid of tuples, only for keeping them. Its logic assumes that\n> whenever we are freezing xmin or xmax we do that in a way that causes\n> the tuple to be visible to everyone, but this tuple should be visible\n> to no one. So with your changes it now errors out instead of\n> corrupting data, but that's just replacing one bad thing (data\n> corruption) with another (VACUUM failures).\n\nI suspect that the legitimate cases hitting this branch won't error out,\nbecause then xmin/xmax aren't old enough to need to be frozen.\n\n\n> I think the actual correct behavior here is to mark the line pointer\n> as dead.\n\nThat's not trivial, because just doing that naively will break HOT.\n\n\n> The easiest way to accomplish that is probably to have\n> lazy_scan_heap() just emit an extra XLOG_HEAP2_CLEAN record beyond\n> whatever HOT-pruning already did, if it's necessary. A better solution\n> would probably be to merge HOT-pruning with setting things all-visible\n> and have a single function that does both, but that seems a lot more\n> invasive, and definitely unsuitable for back-patching.\n\nI suspect that merging pruning and this logic in lazy_scan_heap() really\nis the only proper way to solve this kind of issue.\n\nI wonder if, given we don't know if this issue can be hit in a real\ndatabase, and given that it already triggers an error, the right way to\ndeal with this in the back-branches is to emit a more precise error\nmessage. I.e. if we hit this branch, and either xmin/xmax are older than\nthe cutoff, then we issue a more specific ERROR.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 24 Jul 2020 09:55:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: HOT vs freezing issue causing \"cannot freeze committed xmax\"" } ]
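For anyone who wants to look at the structures this thread keeps referring to, a HOT chain and its line pointers can be observed from SQL with pageinspect. This is only a sketch with invented object names; it assumes the pageinspect extension is installed, and the bit masks used below are HEAP_HOT_UPDATED (0x4000) and HEAP_ONLY_TUPLE (0x8000) from htup_details.h:

create extension if not exists pageinspect;
create table hot_demo (id int primary key, val int) with (fillfactor = 50);
insert into hot_demo values (1, 0);
update hot_demo set val = 1 where id = 1;  -- val is not indexed, so this is a HOT update

select lp, lp_flags, t_ctid,
       (t_infomask2 & 16384) <> 0 as hot_updated,  -- HEAP_HOT_UPDATED
       (t_infomask2 & 32768) <> 0 as heap_only     -- HEAP_ONLY_TUPLE
from heap_page_items(get_raw_page('hot_demo', 0));

After the update, item 1 shows hot_updated = true with its t_ctid pointing at item 2, and item 2 shows heap_only = true. A later prune turns item 1 into a redirect (lp_flags = 2); heap-only members of the chain are only removed by pruning, which is exactly the constraint the discussion above keeps running into.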
[ { "msg_contents": "Hi,\n\nAfter adding a few assertions to validate the connection scalability\npatch I saw failures that also apply to master:\n\nI added an assertion to TransactionIdIsCurrentTransactionId(),\n*IsInProgress(), ... ensuring that the xid is within an expected\nrange. Which promptly failed in isolation tests.\n\nThe reason for that is that heap_abort_speculative() sets xmin to\nInvalidTransactionId but does *not* add HEAP_XMIN_INVALID to infomask.\n\nThe various HeapTupleSatisfies* routines avoid calling those routines\nfor invalid xmins by checking HeapTupleHeaderXminInvalid() first. But\nsince heap_abort_speculative() didn't set that, we end up calling\nTransactionIdIsCurrentTransactionId(InvalidTransactionId) which then\ntriggered my assertion.\n\n\nObviously I can trivially fix that by adjusting the assertions to allow\nInvalidTransactionId. But to me it seems fragile to only have xmin == 0\nin one rare occasion, and to rely on TransactionIdIs* to return\nprecisely the right thing without those functions necessarily having\nbeen designed with that in mind.\n\n\nI think we should change heap_abort_speculative() to set\nHEAP_XMIN_INVALID in master. But we can't really do anything about\nexisting tuples without it - therefore we will have to forever take care\nabout encountering that combination :(.\n\n\nPerhaps we should instead or additionally make\nHeapTupleHeaderXminInvalid() explicitly check for InvalidTransactionId?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jul 2020 12:40:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "heap_abort_speculative() sets xmin to Invalid* without\n HEAP_XMIN_INVALID" }, { "msg_contents": "On 2020-Jul-23, Andres Freund wrote:\n\n> I think we should change heap_abort_speculative() to set\n> HEAP_XMIN_INVALID in master.\n\n+1\n\n> But we can't really do anything about\n> existing tuples without it - therefore we will have to forever take care\n> about encountering that combination :(.\n> \n> Perhaps we should instead or additionally make\n> HeapTupleHeaderXminInvalid() explicitly check for InvalidTransactionId?\n\n+1 for doing it as an additional fix, with a fat comment somewhere\nexplaining where such tuples would come from.\n\nAdditionally, but perhaps not very usefully, maybe we could have a\nmechanism to inject such tuples so that code can be hardened against the\ncondition.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 23 Jul 2020 17:49:09 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: heap_abort_speculative() sets xmin to Invalid* without\n HEAP_XMIN_INVALID" }, { "msg_contents": "On Thu, Jul 23, 2020 at 2:49 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Jul-23, Andres Freund wrote:\n>\n> > I think we should change heap_abort_speculative() to set\n> > HEAP_XMIN_INVALID in master.\n>\n> +1\n\n+1\n\n> +1 for doing it as an additional fix, with a fat comment somewhere\n> explaining where such tuples would come from.\n\nThere could be an opportunity to put this on a formal footing by doing\nsomething in the amcheck heap checker patch.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 23 Jul 2020 20:51:24 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: heap_abort_speculative() sets xmin to Invalid* without\n HEAP_XMIN_INVALID" } ]
[ { "msg_contents": "I realize I've never quite known this; where does the planner get the row estimates for an empty table? Example:\n\npsql (11.8)\nType \"help\" for help.\n\nxof=# CREATE TABLE t (i integer, t text, j integer);\nCREATE TABLE\nxof=# VACUUM ANALYZE t;\nVACUUM\nxof=# EXPLAIN ANALYZE SELECT * FROM t;\n QUERY PLAN \n------------------------------------------------------------------------------------------------\n Seq Scan on t (cost=0.00..22.00 rows=1200 width=40) (actual time=0.015..0.015 rows=0 loops=1)\n Planning Time: 5.014 ms\n Execution Time: 0.094 ms\n(3 rows)\n\nxof=# INSERT INTO t values(1, 'this', 2);\nINSERT 0 1\nxof=# EXPLAIN ANALYZE SELECT * FROM t;\n QUERY PLAN \n------------------------------------------------------------------------------------------------\n Seq Scan on t (cost=0.00..22.00 rows=1200 width=40) (actual time=0.010..0.011 rows=1 loops=1)\n Planning Time: 0.039 ms\n Execution Time: 0.021 ms\n(3 rows)\n\nxof=# VACUUM ANALYZE t;\nVACUUM\nxof=# EXPLAIN ANALYZE SELECT * FROM t;\n QUERY PLAN \n--------------------------------------------------------------------------------------------\n Seq Scan on t (cost=0.00..1.01 rows=1 width=13) (actual time=0.008..0.008 rows=1 loops=1)\n Planning Time: 0.069 ms\n Execution Time: 0.019 ms\n(3 rows)\n\nxof=# DELETE FROM t;\nDELETE 0\nxof=# VACUUM ANALYZE t;\nVACUUM\nxof=# EXPLAIN ANALYZE SELECT * FROM t;\n QUERY PLAN \n------------------------------------------------------------------------------------------------\n Seq Scan on t (cost=0.00..29.90 rows=1990 width=13) (actual time=0.004..0.004 rows=0 loops=1)\n Planning Time: 0.034 ms\n Execution Time: 0.015 ms\n(3 rows)\n\n\n--\n-- Christophe Pettus\n xof@thebuild.com\n\n\n\n", "msg_date": "Thu, 23 Jul 2020 21:01:25 -0700", "msg_from": "Christophe Pettus <xof@thebuild.com>", "msg_from_op": true, "msg_subject": "Row estimates for empty tables" }, { "msg_contents": "On Fri, 24 Jul 2020 at 16:01, Christophe Pettus <xof@thebuild.com> wrote:\n> I realize I've never quite known this; where does the planner get the row estimates for an empty table? Example:\n\nWe just assume there are 10 pages if the relation has not yet been\nvacuumed or analyzed. The row estimates you see are the number of\ntimes 1 tuple is likely to fit onto a single page multiplied by the\nassumed 10 pages. If you had made your table wider then the planner\nwould have assumed fewer rows\n\nThere's a comment that justifies the 10 pages, which, as of master is\nin table_block_relation_estimate_size(). It'll be somewhere else in\npg12.\n\n* HACK: if the relation has never yet been vacuumed, use a minimum size\n* estimate of 10 pages. The idea here is to avoid assuming a\n* newly-created table is really small, even if it currently is, because\n* that may not be true once some data gets loaded into it. Once a vacuum\n* or analyze cycle has been done on it, it's more reasonable to believe\n* the size is somewhat stable.\n*\n* (Note that this is only an issue if the plan gets cached and used again\n* after the table has been filled. What we're trying to avoid is using a\n* nestloop-type plan on a table that has grown substantially since the\n* plan was made. Normally, autovacuum/autoanalyze will occur once enough\n* inserts have happened and cause cached-plan invalidation; but that\n* doesn't happen instantaneously, and it won't happen at all for cases\n* such as temporary tables.)\n*\n* We approximate \"never vacuumed\" by \"has relpages = 0\", which means this\n* will also fire on genuinely empty relations. 
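To make the setup concrete: speculative insertion is the path taken by INSERT ... ON CONFLICT, and the combination described above (xmin set to InvalidTransactionId, i.e. 0, without HEAP_XMIN_INVALID = 0x0200 in t_infomask) can be looked for from SQL with pageinspect. This is a rough sketch with invented names; actually leaving behind a super-deleted tuple requires a concurrent conflict, so a single session will normally find no matching rows:

-- The statement shape that goes through speculative insertion:
create table spec_demo (k int primary key, v int);
insert into spec_demo values (1, 1) on conflict (k) do nothing;

-- Look for tuples whose xmin is 0 but whose infomask lacks the
-- HEAP_XMIN_INVALID hint bit:
create extension if not exists pageinspect;
select lp, t_xmin, t_infomask
from heap_page_items(get_raw_page('spec_demo', 0))
where t_xmin = 0 and (t_infomask & 512) = 0;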
Not great, but\n* fortunately that's a seldom-seen case in the real world, and it\n* shouldn't degrade the quality of the plan too much anyway to err in\n* this direction.\n*\n* If the table has inheritance children, we don't apply this heuristic.\n* Totally empty parent tables are quite common, so we should be willing\n* to believe that they are empty.\n\nThe code which decides if the table has been vacuumed here assumes it\nhas not if pg_class.relpages == 0. So even if you were to manually\nvacuum the table the code here would think it's not yet been vacuumed.\n\nDavid\n\n\n", "msg_date": "Fri, 24 Jul 2020 16:56:49 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Fri, 24 Jul 2020 at 16:01, Christophe Pettus <xof@thebuild.com> wrote:\n>> I realize I've never quite known this; where does the planner get the row estimates for an empty table? Example:\n\n> We just assume there are 10 pages if the relation has not yet been\n> vacuumed or analyzed. The row estimates you see are the number of\n> times 1 tuple is likely to fit onto a single page multiplied by the\n> assumed 10 pages. If you had made your table wider then the planner\n> would have assumed fewer rows\n\nYeah. Also note that since we have no ANALYZE stats in this scenario,\nthe row width estimate is going to be backed into via some guesses\nbased on column data types. (It's fine for fixed-width types, much\nless fine for var-width.)\n\nThere's certainly not a lot besides tradition to justify the exact\nnumbers used in this case. However, we do have a good deal of\npractical experience to justify the principle of \"never assume a\ntable is empty, or even contains just one row, unless you're really\nsure of that\". Otherwise you tend to end up with nestloop joins that\nwill perform horrifically if you were wrong. The other join types\nare notably less brittle.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Jul 2020 09:48:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "\n\n> On Jul 24, 2020, at 06:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> There's certainly not a lot besides tradition to justify the exact\n> numbers used in this case. \n\nSince we already special-case parent tables for partition sets, would a storage parameter that lets you either tell the planner \"no, really, zero is reasonable here\" or sets a minimum number of rows to plan for be reasonable? I happened to get bit by this tracking down an issue where several tables in a large query had zero rows, and the planner's assumption of a few pages worth caused some sub-optimal plans. The performance hit wasn't huge, but they were being joined to some *very* large tables, and the differences added up.\n--\n-- Christophe Pettus\n xof@thebuild.com\n\n\n\n", "msg_date": "Fri, 24 Jul 2020 07:38:08 -0700", "msg_from": "Christophe Pettus <xof@thebuild.com>", "msg_from_op": true, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "pá 24. 7. 
2020 v 16:38 odesílatel Christophe Pettus <xof@thebuild.com>\nnapsal:\n\n>\n>\n> > On Jul 24, 2020, at 06:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > There's certainly not a lot besides tradition to justify the exact\n> > numbers used in this case.\n>\n> Since we already special-case parent tables for partition sets, would a\n> storage parameter that lets you either tell the planner \"no, really, zero\n> is reasonable here\" or sets a minimum number of rows to plan for be\n> reasonable? I happened to get bit by this tracking down an issue where\n> several tables in a large query had zero rows, and the planner's assumption\n> of a few pages worth caused some sub-optimal plans. The performance hit\n> wasn't huge, but they were being joined to some *very* large tables, and\n> the differences added up.\n>\n\nI did this patch ten years ago. GoodData application\nhttps://www.gooddata.com/ uses Postgres lot, and this application stores\nsome results in tables (as guard against repeated calculations). Lot of\nthese tables have zero or one row.\n\nAlthough we ran an ANALYZE over all tables - the queries on empty tables\nhad very bad plans, and I had to fix it by this patch. Another company uses\na fake one row in table - so there is no possibility to have a really empty\ntable.\n\nIt is an issue for special, not typical applications (this situation is\ntypical for some OLAP patterns) - it is not too often - but some clean\nsolution (instead hacking postgres) can be nice.\n\nRegards\n\nPavel\n\n> --\n> -- Christophe Pettus\n> xof@thebuild.com\n>\n>\n>\n>\n\npá 24. 7. 2020 v 16:38 odesílatel Christophe Pettus <xof@thebuild.com> napsal:\n\n> On Jul 24, 2020, at 06:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> There's certainly not a lot besides tradition to justify the exact\n> numbers used in this case. \n\nSince we already special-case parent tables for partition sets, would a storage parameter that lets you either tell the planner \"no, really, zero is reasonable here\" or sets a minimum number of rows to plan for be reasonable?  I happened to get bit by this tracking down an issue where several tables in a large query had zero rows, and the planner's assumption of a few pages worth caused some sub-optimal plans.  The performance hit wasn't huge, but they were being joined to some *very* large tables, and the differences added up.I did this patch ten years ago.  GoodData application https://www.gooddata.com/  uses Postgres lot, and this application stores some results in tables (as guard against repeated calculations). Lot of these tables have zero or one row. Although we ran an ANALYZE over all tables - the queries on empty tables had very bad plans, and I had to fix it by this patch. Another company uses a fake one row in table - so there is no possibility to have a really empty table.It is an issue for special, not typical applications (this situation is typical for some OLAP patterns)  - it is not too often - but some clean solution (instead hacking postgres) can be nice.RegardsPavel\n--\n-- Christophe Pettus\n   xof@thebuild.com", "msg_date": "Fri, 24 Jul 2020 21:14:04 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "\n> On Jul 24, 2020, at 12:14, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n> this application stores some results in tables (as guard against repeated calculations). Lot of these tables have zero or one row. \n\nYes, that's the situation we encountered, too. 
It's not very common (and even less common, I would assume, that it results in a bad plan), but it did in this case.\n\n--\n-- Christophe Pettus\n xof@thebuild.com\n\n\n\n", "msg_date": "Fri, 24 Jul 2020 12:35:35 -0700", "msg_from": "Christophe Pettus <xof@thebuild.com>", "msg_from_op": true, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> pá 24. 7. 2020 v 16:38 odesílatel Christophe Pettus <xof@thebuild.com>\n> napsal:\n>> Since we already special-case parent tables for partition sets, would a\n>> storage parameter that lets you either tell the planner \"no, really, zero\n>> is reasonable here\" or sets a minimum number of rows to plan for be\n>> reasonable?\n\n> It is an issue for special, not typical applications (this situation is\n> typical for some OLAP patterns) - it is not too often - but some clean\n> solution (instead hacking postgres) can be nice.\n\nThe core issue here is \"how do we know whether the table is likely to stay\nempty?\". I can think of a couple of more or less klugy solutions:\n\n1. Arrange to send out a relcache inval when adding the first page to\na table, and then remove the planner hack for disbelieving relpages = 0.\nI fear this'd be a mess from a system structural standpoint, but it might\nwork fairly transparently.\n\n2. Establish the convention that vacuuming or analyzing an empty table\nis what you do to tell the system that this state is going to persist.\nThat's more or less what the existing comments in plancat.c envision,\nbut we never made a definition for how the occurrence of that event\nwould be recorded in the catalogs, other than setting relpages > 0.\nRather than adding another pg_class column, I'm tempted to say that\nvacuum/analyze should set relpages to a minimum of 1, even if the\nrelation has zero pages. That does get the job done:\n\nregression=# create table foo(f1 text);\nCREATE TABLE\nregression=# explain select * from foo;\n QUERY PLAN \n--------------------------------------------------------\n Seq Scan on foo (cost=0.00..23.60 rows=1360 width=32)\n(1 row)\n\nregression=# vacuum foo; -- doesn't help\nVACUUM\nregression=# explain select * from foo;\n QUERY PLAN \n--------------------------------------------------------\n Seq Scan on foo (cost=0.00..23.60 rows=1360 width=32)\n(1 row)\nregression=# update pg_class set relpages = 1 where relname = 'foo';\nUPDATE 1\nregression=# explain select * from foo;\n QUERY PLAN \n----------------------------------------------------\n Seq Scan on foo (cost=0.00..0.00 rows=1 width=32)\n(1 row)\n\n(We're still estimating one row, but that's as a result of different\ndecisions that I'm not nearly as willing to compromise on...)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Jul 2020 17:09:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "\n\n> On Jul 24, 2020, at 14:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Rather than adding another pg_class column, I'm tempted to say that\n> vacuum/analyze should set relpages to a minimum of 1, even if the\n> relation has zero pages. 
\n\nIf there's not an issue about relpages != actual pages on disk, that certain seems straight-forward, and no *more* hacky than the current situation.\n\n--\n-- Christophe Pettus\n xof@thebuild.com\n\n\n\n", "msg_date": "Fri, 24 Jul 2020 14:54:21 -0700", "msg_from": "Christophe Pettus <xof@thebuild.com>", "msg_from_op": true, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "[ redirecting to -hackers ]\n\nI wrote:\n> The core issue here is \"how do we know whether the table is likely to stay\n> empty?\". I can think of a couple of more or less klugy solutions:\n\n> 1. Arrange to send out a relcache inval when adding the first page to\n> a table, and then remove the planner hack for disbelieving relpages = 0.\n> I fear this'd be a mess from a system structural standpoint, but it might\n> work fairly transparently.\n\nI experimented with doing this. It's not hard to code, if you don't mind\nhaving RelationGetBufferForTuple calling CacheInvalidateRelcache. I'm not\nsure whether that code path might cause any long-term problems, but it\nseems to work OK right now. However, this solution causes massive\n\"failures\" in the regression tests as a result of plans changing. I'm\nsure that's partly because we use so many small tables in the tests.\nNonetheless, it's not promising from the standpoint of not causing\nunexpected problems in the real world.\n\n> 2. Establish the convention that vacuuming or analyzing an empty table\n> is what you do to tell the system that this state is going to persist.\n> That's more or less what the existing comments in plancat.c envision,\n> but we never made a definition for how the occurrence of that event\n> would be recorded in the catalogs, other than setting relpages > 0.\n> Rather than adding another pg_class column, I'm tempted to say that\n> vacuum/analyze should set relpages to a minimum of 1, even if the\n> relation has zero pages.\n\nI also tried this, and it seems a lot more promising: no existing\nregression test cases change. So perhaps we should do the attached\nor something like it.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 24 Jul 2020 18:34:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "On Sat, 25 Jul 2020 at 10:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > 1. Arrange to send out a relcache inval when adding the first page to\n> > a table, and then remove the planner hack for disbelieving relpages = 0.\n> > I fear this'd be a mess from a system structural standpoint, but it might\n> > work fairly transparently.\n>\n> I experimented with doing this. It's not hard to code, if you don't mind\n> having RelationGetBufferForTuple calling CacheInvalidateRelcache. I'm not\n> sure whether that code path might cause any long-term problems, but it\n> seems to work OK right now. However, this solution causes massive\n> \"failures\" in the regression tests as a result of plans changing. I'm\n> sure that's partly because we use so many small tables in the tests.\n> Nonetheless, it's not promising from the standpoint of not causing\n> unexpected problems in the real world.\n\nI guess all these changes would be the planner moving towards a plan\nthat suits having fewer rows for the given table better. If so, that\ndoes seem quite scary as we already have enough problems from the\nplanner choosing poor plans when it thinks there are fewer rows than\nthere actually are. 
Don't we need to keep something like the 10-page\nestimate there so safer plans are produced before auto-vacuum gets in\nand gathers some proper stats?\n\nI think if anything we'd want to move in the direction of producing\nmore cautious plans when the estimated number of rows is low. Perhaps\nespecially so for when the planner opts to do things like perform a\nnon-parameterized nested loop join when it thinks the RelOptInfo with,\nsay 3, unbeknown-to-the-planner, correlated, base restrict quals that\nare thought to produce just 1 row, but actually produce many more.\n\n> > 2. Establish the convention that vacuuming or analyzing an empty table\n> > is what you do to tell the system that this state is going to persist.\n> > That's more or less what the existing comments in plancat.c envision,\n> > but we never made a definition for how the occurrence of that event\n> > would be recorded in the catalogs, other than setting relpages > 0.\n> > Rather than adding another pg_class column, I'm tempted to say that\n> > vacuum/analyze should set relpages to a minimum of 1, even if the\n> > relation has zero pages.\n>\n> I also tried this, and it seems a lot more promising: no existing\n> regression test cases change. So perhaps we should do the attached\n> or something like it.\n\nThis sounds like a more plausible solution. At least this way there's\nan escape hatch for people who suffer due to this.\n\nDavid\n\n\n", "msg_date": "Sat, 25 Jul 2020 12:37:19 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "so 25. 7. 2020 v 0:34 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> [ redirecting to -hackers ]\n>\n> I wrote:\n> > The core issue here is \"how do we know whether the table is likely to\n> stay\n> > empty?\". I can think of a couple of more or less klugy solutions:\n>\n\nFor these special cases is probably possible to ensure ANALYZE before any\nSELECT. When the table is created, then it is analyzed, and after that it\nis published and used for SELECT. Usually this table is not modified ever.\n\nBecause it is a special case, then it is not necessarily too sophisticated\na solution. But for built in solution it can be designed more goneral\n\n\n\n> > 1. Arrange to send out a relcache inval when adding the first page to\n> > a table, and then remove the planner hack for disbelieving relpages = 0.\n> > I fear this'd be a mess from a system structural standpoint, but it might\n> > work fairly transparently.\n>\n> I experimented with doing this. It's not hard to code, if you don't mind\n> having RelationGetBufferForTuple calling CacheInvalidateRelcache. I'm not\n> sure whether that code path might cause any long-term problems, but it\n> seems to work OK right now. However, this solution causes massive\n> \"failures\" in the regression tests as a result of plans changing. I'm\n> sure that's partly because we use so many small tables in the tests.\n> Nonetheless, it's not promising from the standpoint of not causing\n> unexpected problems in the real world.\n>\n> > 2. 
Establish the convention that vacuuming or analyzing an empty table\n> > is what you do to tell the system that this state is going to persist.\n> > That's more or less what the existing comments in plancat.c envision,\n> > but we never made a definition for how the occurrence of that event\n> > would be recorded in the catalogs, other than setting relpages > 0.\n> > Rather than adding another pg_class column, I'm tempted to say that\n> > vacuum/analyze should set relpages to a minimum of 1, even if the\n> > relation has zero pages.\n>\n> I also tried this, and it seems a lot more promising: no existing\n> regression test cases change. So perhaps we should do the attached\n> or something like it.\n>\n\nI am sending a patch that is years used in GoodData.\n\nI am not sure if the company uses 0 or 1, but I can ask.\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>\n>", "msg_date": "Sat, 25 Jul 2020 05:40:08 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I am sending a patch that is years used in GoodData.\n\nI'm quite unexcited about that. I'd be the first to agree that the\nten-pages estimate is a hack, but it's not an improvement to ask users\nto think of a better value ... especially not as a one-size-fits-\nall-relations GUC setting.\n\nI did have an idea that I think is better than my previous one:\nrather than lying about the value of relpages, let's represent the\ncase where we don't know the tuple density by setting reltuples = -1\ninitially. This leads to a patch that's a good bit more invasive than\nthe quick-hack solution, but I think it's a lot cleaner on the whole.\n\nA possible objection is that this changes the FDW API slightly, as\nGetForeignRelSize callbacks now need to deal with rel->tuples possibly\nbeing -1. We could avoid an API break if we made plancat.c clamp\nthat value to zero; but then FDWs still couldn't tell the difference\nbetween the \"empty\" and \"never analyzed\" cases, and I think this is\njust as much of an issue for them as for the core code.\n\nI'll add this to the upcoming CF.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 23 Aug 2020 17:08:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "ne 23. 8. 2020 v 23:08 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I am sending a patch that is years used in GoodData.\n>\n> I'm quite unexcited about that. I'd be the first to agree that the\n> ten-pages estimate is a hack, but it's not an improvement to ask users\n> to think of a better value ... especially not as a one-size-fits-\n> all-relations GUC setting.\n>\n\nThis patch is just a workaround that works well 10 years (but for one\nspecial use case) - nothing more. Without this patch that application\ncannot work ever.\n\n\n> I did have an idea that I think is better than my previous one:\n> rather than lying about the value of relpages, let's represent the\n> case where we don't know the tuple density by setting reltuples = -1\n> initially. This leads to a patch that's a good bit more invasive than\n> the quick-hack solution, but I think it's a lot cleaner on the whole.\n>\n\n> A possible objection is that this changes the FDW API slightly, as\n> GetForeignRelSize callbacks now need to deal with rel->tuples possibly\n> being -1. 
We could avoid an API break if we made plancat.c clamp\n> that value to zero; but then FDWs still couldn't tell the difference\n> between the \"empty\" and \"never analyzed\" cases, and I think this is\n> just as much of an issue for them as for the core code.\n>\n\n> I'll add this to the upcoming CF.\n>\n\nI'll check it\n\nRegards\n\nPavel\n\n>\n> regards, tom lane\n>\n>\n\nne 23. 8. 2020 v 23:08 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I am sending a patch that is years used in GoodData.\n\nI'm quite unexcited about that.  I'd be the first to agree that the\nten-pages estimate is a hack, but it's not an improvement to ask users\nto think of a better value ... especially not as a one-size-fits-\nall-relations GUC setting.This patch is just a workaround that works well 10 years (but for one special use case) - nothing more. Without this patch that application cannot work ever.\n\nI did have an idea that I think is better than my previous one:\nrather than lying about the value of relpages, let's represent the\ncase where we don't know the tuple density by setting reltuples = -1\ninitially.  This leads to a patch that's a good bit more invasive than\nthe quick-hack solution, but I think it's a lot cleaner on the whole.  \n\nA possible objection is that this changes the FDW API slightly, as\nGetForeignRelSize callbacks now need to deal with rel->tuples possibly\nbeing -1.  We could avoid an API break if we made plancat.c clamp\nthat value to zero; but then FDWs still couldn't tell the difference\nbetween the \"empty\" and \"never analyzed\" cases, and I think this is\njust as much of an issue for them as for the core code. \n\nI'll add this to the upcoming CF.I'll check itRegardsPavel\n\n                        regards, tom lane", "msg_date": "Mon, 24 Aug 2020 21:43:49 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "On Fri, Jul 24, 2020 at 09:14:04PM +0200, Pavel Stehule wrote:\n> p� 24. 7. 2020 v 16:38 odes�latel Christophe Pettus <xof@thebuild.com> napsal:\n> > Since we already special-case parent tables for partition sets, would a\n> > storage parameter that lets you either tell the planner \"no, really, zero\n> > is reasonable here\" or sets a minimum number of rows to plan for be\n> > reasonable? I happened to get bit by this tracking down an issue where\n> > several tables in a large query had zero rows, and the planner's assumption\n> > of a few pages worth caused some sub-optimal plans. The performance hit\n> > wasn't huge, but they were being joined to some *very* large tables, and\n> > the differences added up.\n> \n> I did this patch ten years ago. GoodData application\n> https://www.gooddata.com/ uses Postgres lot, and this application stores\n> some results in tables (as guard against repeated calculations). Lot of\n> these tables have zero or one row.\n> \n> Although we ran an ANALYZE over all tables - the queries on empty tables\n> had very bad plans, and I had to fix it by this patch. 
Another company uses\n> a fake one row in table - so there is no possibility to have a really empty\n> table.\n> \n> It is an issue for special, not typical applications (this situation is\n> typical for some OLAP patterns) - it is not too often - but some clean\n> solution (instead hacking postgres) can be nice.\n\nOn Mon, Aug 24, 2020 at 09:43:49PM +0200, Pavel Stehule wrote:\n> This patch is just a workaround that works well 10 years (but for one\n> special use case) - nothing more. Without this patch that application\n> cannot work ever.\n\nMy own workaround was here:\nhttps://www.postgresql.org/message-id/20200427181034.GA28974@telsasoft.com\n|... 1) create an child table: CREATE TABLE x_child() INHERITS(x)\n|and, 2) change the query to use \"select from ONLY\".\n|\n|(1) allows the planner to believe that the table really is empty, a conclusion\n|it otherwise avoids and (2) avoids decending into the child (for which the\n|planner would likewise avoid the conclusion that it's actually empty).\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 24 Aug 2020 19:06:25 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "po 24. 8. 2020 v 21:43 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> ne 23. 8. 2020 v 23:08 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> > I am sending a patch that is years used in GoodData.\n>>\n>> I'm quite unexcited about that. I'd be the first to agree that the\n>> ten-pages estimate is a hack, but it's not an improvement to ask users\n>> to think of a better value ... especially not as a one-size-fits-\n>> all-relations GUC setting.\n>>\n>\n> This patch is just a workaround that works well 10 years (but for one\n> special use case) - nothing more. Without this patch that application\n> cannot work ever.\n>\n>\n>> I did have an idea that I think is better than my previous one:\n>> rather than lying about the value of relpages, let's represent the\n>> case where we don't know the tuple density by setting reltuples = -1\n>> initially. This leads to a patch that's a good bit more invasive than\n>> the quick-hack solution, but I think it's a lot cleaner on the whole.\n>>\n>\n>> A possible objection is that this changes the FDW API slightly, as\n>> GetForeignRelSize callbacks now need to deal with rel->tuples possibly\n>> being -1. We could avoid an API break if we made plancat.c clamp\n>> that value to zero; but then FDWs still couldn't tell the difference\n>> between the \"empty\" and \"never analyzed\" cases, and I think this is\n>> just as much of an issue for them as for the core code.\n>>\n>\n>> I'll add this to the upcoming CF.\n>>\n>\n> I'll check it\n>\n\nI think it can work. It is a good enough solution for people who need a\ndifferent behaviour with minimal impact on people who don't need a change.\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> Pavel\n>\n>>\n>> regards, tom lane\n>>\n>>\n\npo 24. 8. 2020 v 21:43 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:ne 23. 8. 2020 v 23:08 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I am sending a patch that is years used in GoodData.\n\nI'm quite unexcited about that.  I'd be the first to agree that the\nten-pages estimate is a hack, but it's not an improvement to ask users\nto think of a better value ... 
especially not as a one-size-fits-\nall-relations GUC setting.This patch is just a workaround that works well 10 years (but for one special use case) - nothing more. Without this patch that application cannot work ever.\n\nI did have an idea that I think is better than my previous one:\nrather than lying about the value of relpages, let's represent the\ncase where we don't know the tuple density by setting reltuples = -1\ninitially.  This leads to a patch that's a good bit more invasive than\nthe quick-hack solution, but I think it's a lot cleaner on the whole.  \n\nA possible objection is that this changes the FDW API slightly, as\nGetForeignRelSize callbacks now need to deal with rel->tuples possibly\nbeing -1.  We could avoid an API break if we made plancat.c clamp\nthat value to zero; but then FDWs still couldn't tell the difference\nbetween the \"empty\" and \"never analyzed\" cases, and I think this is\njust as much of an issue for them as for the core code. \n\nI'll add this to the upcoming CF.I'll check itI  think it can work. It is a good enough solution for people who need a different behaviour with minimal impact on people who don't need a change.RegardsPavel RegardsPavel\n\n                        regards, tom lane", "msg_date": "Tue, 25 Aug 2020 09:32:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "út 25. 8. 2020 v 9:32 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> po 24. 8. 2020 v 21:43 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>>\n>>\n>> ne 23. 8. 2020 v 23:08 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>>\n>>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>>> > I am sending a patch that is years used in GoodData.\n>>>\n>>> I'm quite unexcited about that. I'd be the first to agree that the\n>>> ten-pages estimate is a hack, but it's not an improvement to ask users\n>>> to think of a better value ... especially not as a one-size-fits-\n>>> all-relations GUC setting.\n>>>\n>>\n>> This patch is just a workaround that works well 10 years (but for one\n>> special use case) - nothing more. Without this patch that application\n>> cannot work ever.\n>>\n>>\n>>> I did have an idea that I think is better than my previous one:\n>>> rather than lying about the value of relpages, let's represent the\n>>> case where we don't know the tuple density by setting reltuples = -1\n>>> initially. This leads to a patch that's a good bit more invasive than\n>>> the quick-hack solution, but I think it's a lot cleaner on the whole.\n>>>\n>>\n>>> A possible objection is that this changes the FDW API slightly, as\n>>> GetForeignRelSize callbacks now need to deal with rel->tuples possibly\n>>> being -1. We could avoid an API break if we made plancat.c clamp\n>>> that value to zero; but then FDWs still couldn't tell the difference\n>>> between the \"empty\" and \"never analyzed\" cases, and I think this is\n>>> just as much of an issue for them as for the core code.\n>>>\n>>\n>>> I'll add this to the upcoming CF.\n>>>\n>>\n>> I'll check it\n>>\n>\n> I think it can work. It is a good enough solution for people who need a\n> different behaviour with minimal impact on people who don't need a change.\n>\n\nall tests passed\n\nI'll mark this patch as ready for commit\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>>\n>>> regards, tom lane\n>>>\n>>>\n\nút 25. 8. 
2020 v 9:32 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:po 24. 8. 2020 v 21:43 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:ne 23. 8. 2020 v 23:08 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I am sending a patch that is years used in GoodData.\n\nI'm quite unexcited about that.  I'd be the first to agree that the\nten-pages estimate is a hack, but it's not an improvement to ask users\nto think of a better value ... especially not as a one-size-fits-\nall-relations GUC setting.This patch is just a workaround that works well 10 years (but for one special use case) - nothing more. Without this patch that application cannot work ever.\n\nI did have an idea that I think is better than my previous one:\nrather than lying about the value of relpages, let's represent the\ncase where we don't know the tuple density by setting reltuples = -1\ninitially.  This leads to a patch that's a good bit more invasive than\nthe quick-hack solution, but I think it's a lot cleaner on the whole.  \n\nA possible objection is that this changes the FDW API slightly, as\nGetForeignRelSize callbacks now need to deal with rel->tuples possibly\nbeing -1.  We could avoid an API break if we made plancat.c clamp\nthat value to zero; but then FDWs still couldn't tell the difference\nbetween the \"empty\" and \"never analyzed\" cases, and I think this is\njust as much of an issue for them as for the core code. \n\nI'll add this to the upcoming CF.I'll check itI  think it can work. It is a good enough solution for people who need a different behaviour with minimal impact on people who don't need a change.all tests passedI'll mark this patch as ready for commitRegardsPavelRegardsPavel RegardsPavel\n\n                        regards, tom lane", "msg_date": "Tue, 25 Aug 2020 09:57:25 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I'll mark this patch as ready for commit\n\nPushed, thanks for looking.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 30 Aug 2020 12:23:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Row estimates for empty tables" }, { "msg_contents": "ne 30. 8. 2020 v 18:23 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I'll mark this patch as ready for commit\n>\n> Pushed, thanks for looking.\n>\n\nThank you\n\nPavel\n\n>\n> regards, tom lane\n>\n\nne 30. 8. 2020 v 18:23 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I'll mark this patch as ready for commit\n\nPushed, thanks for looking.Thank you Pavel\n\n                        regards, tom lane", "msg_date": "Sun, 30 Aug 2020 19:13:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Row estimates for empty tables" } ]
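A note on the convention the thread above converged on: pg_class.reltuples becomes a tri-state value, where -1 means the table has never been vacuumed or analyzed, 0 means a VACUUM or ANALYZE actually saw it empty, and a positive number is the measured estimate. The standalone sketch below only illustrates that reading of the field; the function name, the ten-pages fallback and the one-row clamp are written out here purely for illustration and are not the actual plancat.c code.

    #include <stdio.h>

    /*
     * Illustration of the reltuples convention discussed in the thread:
     *   reltuples == -1  -> never vacuumed/analyzed, row count unknown
     *   reltuples ==  0  -> a VACUUM/ANALYZE has seen the table empty
     *   reltuples  >  0  -> measured estimate from the last VACUUM/ANALYZE
     * The fallback numbers are made up for this example; the real planner
     * derives its guess from relpages and the estimated tuple width.
     */
    static double
    guess_row_count(double reltuples, unsigned relpages, double tuples_per_page)
    {
        if (reltuples < 0)      /* never vacuumed/analyzed: default-size guess */
            return 10 * tuples_per_page;
        if (relpages == 0)      /* believed empty */
            return 1;           /* the planner still clamps to one row */
        return reltuples;       /* trust the recorded estimate */
    }

    int
    main(void)
    {
        printf("never analyzed:       %.0f rows\n", guess_row_count(-1, 0, 100));
        printf("vacuumed while empty: %.0f rows\n", guess_row_count(0, 0, 100));
        printf("analyzed, 42 rows:    %.0f rows\n", guess_row_count(42, 1, 100));
        return 0;
    }

Foreign data wrappers face the same question: after this change the rel->tuples value seen by GetForeignRelSize can legitimately be -1, so a wrapper that previously assumed a non-negative count needs a fallback branch along these lines.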
[ { "msg_contents": "Hi, hackers\n\nThe source looks like:\n\n\tcase ECPGt_bytea:\n\t{\n\t\tstruct ECPGgeneric_varchar *variable =\n\t\t(struct ECPGgeneric_varchar *) (var->value);\n\n\t\t......\n\t}\n\nI think the developer intend to use struct ECPGgeneric_bytea instead of struct ECPGgeneric_varchar\n\nIs this thoughts right?\n\nI have wrote a patch to fix this typo", "msg_date": "Fri, 24 Jul 2020 08:05:23 +0000", "msg_from": "\"Wang, Shenhao\" <wangsh.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "handle a ECPG_bytea typo" }, { "msg_contents": "On Fri, Jul 24, 2020 at 1:35 PM Wang, Shenhao\n<wangsh.fnst@cn.fujitsu.com> wrote:\n>\n> Hi, hackers\n>\n> The source looks like:\n>\n> case ECPGt_bytea:\n> {\n> struct ECPGgeneric_varchar *variable =\n> (struct ECPGgeneric_varchar *) (var->value);\n>\n> ......\n> }\n>\n> I think the developer intend to use struct ECPGgeneric_bytea instead of struct ECPGgeneric_varchar\n>\n> Is this thoughts right?\n>\n> I have wrote a patch to fix this typo\n\nI felt the changes look correct. The reason it might be working\nearlier is because the structure members are the same for both the\ndata structures ECPGgeneric_bytea & ECPGgeneric_varchar.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 25 Jul 2020 07:22:15 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: handle a ECPG_bytea typo" }, { "msg_contents": "On Sat, Jul 25, 2020 at 07:22:15AM +0530, vignesh C wrote:\n> I felt the changes look correct. The reason it might be working\n> earlier is because the structure members are the same for both the\n> data structures ECPGgeneric_bytea & ECPGgeneric_varchar.\n\nECPGset_noind_null() and ECPGis_noind_null() in misc.c show that\nECPGgeneric_bytea is attached to ECPGt_bytea. The two structures may\nbe the same now, but if a bug fix or a code change involves a change\nin the structure definition we could run into problems. So let's fix\nand back-patch this change. I am not spotting other areas impacted,\nand I'll try to take care at the beginning of next week.\n--\nMichael", "msg_date": "Sat, 25 Jul 2020 18:17:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: handle a ECPG_bytea typo" }, { "msg_contents": "On Sat, Jul 25, 2020 at 06:17:42PM +0900, Michael Paquier wrote:\n> ECPGset_noind_null() and ECPGis_noind_null() in misc.c show that\n> ECPGgeneric_bytea is attached to ECPGt_bytea. The two structures may\n> be the same now, but if a bug fix or a code change involves a change\n> in the structure definition we could run into problems. So let's fix\n> and back-patch this change. I am not spotting other areas impacted,\n> and I'll try to take care at the beginning of next week.\n\nOkay, fixed as e971357. The issue came from 050710b, so this fix was\nonly needed in 12~.\n--\nMichael", "msg_date": "Mon, 27 Jul 2020 10:31:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: handle a ECPG_bytea typo" } ]
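Why the mis-cast fixed above was harmless in practice but still worth correcting: the two ecpglib structs happen to share an identical layout, so reading a bytea host variable through the varchar struct touched the right offsets anyway. The self-contained sketch below demonstrates the point; the member names follow the usual ECPG len/arr pattern and are reproduced from memory, so treat the definitions as approximations rather than copies of the real headers (which use FLEXIBLE_ARRAY_MEMBER for the data array).

    #include <stdio.h>
    #include <stddef.h>

    /* Approximate shapes of the two structs discussed in the thread. */
    struct ECPGgeneric_varchar
    {
        int  len;
        char arr[1];    /* FLEXIBLE_ARRAY_MEMBER in the real ecpglib headers */
    };

    struct ECPGgeneric_bytea
    {
        int  len;
        char arr[1];    /* FLEXIBLE_ARRAY_MEMBER in the real ecpglib headers */
    };

    int
    main(void)
    {
        /*
         * Because the members line up exactly, casting a bytea variable to
         * ECPGgeneric_varchar reads the right memory today; the fix matters
         * because any future divergence of the two structs would otherwise
         * break the ECPGt_bytea path silently.
         */
        printf("varchar: len at %zu, arr at %zu\n",
               offsetof(struct ECPGgeneric_varchar, len),
               offsetof(struct ECPGgeneric_varchar, arr));
        printf("bytea:   len at %zu, arr at %zu\n",
               offsetof(struct ECPGgeneric_bytea, len),
               offsetof(struct ECPGgeneric_bytea, arr));
        return 0;
    }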
[ { "msg_contents": "For logical replication there is no need to implement this, but others are\nusing the pgoutput plugin for Change Data Capture. The reason they are\nusing pgoutput is because it is guaranteed to be available as it is in core\npostgres.\n\nImplementing LogicalDecodeMessageCB provides some synchronization facility\nthat is not easily replicated.\n\nThoughts ?\n\nDave Cramer\n\nFor logical replication there is no need to implement this, but others are using the pgoutput plugin for Change Data Capture. The reason they are using pgoutput is because it is guaranteed to be available as it is in core postgres. Implementing LogicalDecodeMessageCB provides some synchronization facility that is not easily replicated.Thoughts ?Dave Cramer", "msg_date": "Fri, 24 Jul 2020 11:33:52 -0400", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Any objections to implementing LogicalDecodeMessageCB for pgoutput?" }, { "msg_contents": "Hi,\n\nOn 2020-07-24 11:33:52 -0400, Dave Cramer wrote:\n> For logical replication there is no need to implement this, but others are\n> using the pgoutput plugin for Change Data Capture. The reason they are\n> using pgoutput is because it is guaranteed to be available as it is in core\n> postgres.\n> \n> Implementing LogicalDecodeMessageCB provides some synchronization facility\n> that is not easily replicated.\n\nIt's definitely useful. Probably needs to be parameter that signals\nwhether they should be sent out?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 24 Jul 2020 09:16:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Any objections to implementing LogicalDecodeMessageCB for\n pgoutput?" }, { "msg_contents": "On Wed, Jul 29, 2020 at 9:41 PM Dave Cramer <davecramer@gmail.com> wrote:\n\n> For logical replication there is no need to implement this, but others are\n> using the pgoutput plugin for Change Data Capture. The reason they are\n> using pgoutput is because it is guaranteed to be available as it is in core\n> postgres.\n>\n> Implementing LogicalDecodeMessageCB provides some synchronization facility\n> that is not easily replicated.\n>\n> Thoughts ?\n>\n\nAttached is a draft patch that adds this functionality into the pgoutput\nplugin. A slot consumer can pass 'messages' as an option to include\nlogical messages from pg_logical_emit_message in the replication flow.\n\nFWIW, we have been using pg_logical_emit_message to send application-level\nevents alongside our change-data-capture for about two years, and we would\nmove this part of our stack to pgoutput if message support was available.\n\nLooking forward to discussion and feedback.\n\nCheers,\nDave", "msg_date": "Wed, 29 Jul 2020 22:26:04 -0500", "msg_from": "David Pirotte <dpirotte@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Any objections to implementing LogicalDecodeMessageCB for\n pgoutput?" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHi\r\n\r\nI have tried the patch and it functions as described. The attached tap test case is comprehensive and is passing. However, the patch does not apply well on the current master; I had to checkout to a much earlier commit to be able to patch correctly. 
The patch will need to be rebased to the current master.\r\n\r\nThanks\r\n\r\nCary Huang\r\n-------------\r\nHighGo Software Inc. (Canada)\r\ncary.huang@highgo.ca\r\nwww.highgo.ca", "msg_date": "Fri, 04 Sep 2020 20:54:46 +0000", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Any objections to implementing LogicalDecodeMessageCB for\n pgoutput?" }, { "msg_contents": "Hi,\n\nOn 2020-07-29 22:26:04 -0500, David Pirotte wrote:\n> FWIW, we have been using pg_logical_emit_message to send application-level\n> events alongside our change-data-capture for about two years, and we would\n> move this part of our stack to pgoutput if message support was available.\n\nYea, it's really useful for this kind of thing.\n\n\n> @@ -119,14 +124,16 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)\n> \n> static void\n> parse_output_parameters(List *options, uint32 *protocol_version,\n> -\t\t\t\t\t\tList **publication_names, bool *binary)\n> +\t\t\t\t\t\tList **publication_names, bool *binary, bool *messages)\n\nI think it might be time to add a PgOutputParameters struct, instead of\nadding more and more output parameters to\nparse_output_parameters. Alternatively just passing PGOutputData owuld\nmake sense.\n\n\n> diff --git a/src/test/subscription/t/015_messages.pl b/src/test/subscription/t/015_messages.pl\n> new file mode 100644\n> index 0000000000..4709e69f4e\n> --- /dev/null\n> +++ b/src/test/subscription/t/015_messages.pl\n\nA test verifying that a non-transactional message in later aborted\ntransaction is handled correctly would be good.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 8 Sep 2020 12:18:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Any objections to implementing LogicalDecodeMessageCB for\n pgoutput?" }, { "msg_contents": "On Tue, Sep 08, 2020 at 12:18:23PM -0700, Andres Freund wrote:\n> A test verifying that a non-transactional message in later aborted\n> transaction is handled correctly would be good.\n\nOn top of that, the patch needs a rebase as it visibly fails to apply,\nper the CF bot.\n--\nMichael", "msg_date": "Thu, 24 Sep 2020 13:22:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Any objections to implementing LogicalDecodeMessageCB for\n pgoutput?" }, { "msg_contents": "David,\n\nOn Thu, 24 Sep 2020 at 00:22, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Sep 08, 2020 at 12:18:23PM -0700, Andres Freund wrote:\n> > A test verifying that a non-transactional message in later aborted\n> > transaction is handled correctly would be good.\n>\n> On top of that, the patch needs a rebase as it visibly fails to apply,\n> per the CF bot.\n> --\n> Michael\n>\n\nWhere are you with this? Are you able to work on it ?\nDave Cramer\n\nDavid,On Thu, 24 Sep 2020 at 00:22, Michael Paquier <michael@paquier.xyz> wrote:On Tue, Sep 08, 2020 at 12:18:23PM -0700, Andres Freund wrote:\n> A test verifying that a non-transactional message in later aborted\n> transaction is handled correctly would be good.\n\nOn top of that, the patch needs a rebase as it visibly fails to apply,\nper the CF bot.\n--\nMichaelWhere are you with this? Are you able to work on it ?Dave Cramer", "msg_date": "Tue, 3 Nov 2020 08:19:18 -0500", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Any objections to implementing LogicalDecodeMessageCB for\n pgoutput?" 
}, { "msg_contents": "On Tue, Nov 3, 2020 at 7:19 AM Dave Cramer <davecramer@gmail.com> wrote:\n\n> David,\n>\n> On Thu, 24 Sep 2020 at 00:22, Michael Paquier <michael@paquier.xyz> wrote:\n>\n>> On Tue, Sep 08, 2020 at 12:18:23PM -0700, Andres Freund wrote:\n>> > A test verifying that a non-transactional message in later aborted\n>> > transaction is handled correctly would be good.\n>>\n>> On top of that, the patch needs a rebase as it visibly fails to apply,\n>> per the CF bot.\n>> --\n>> Michael\n>>\n>\n> Where are you with this? Are you able to work on it ?\n> Dave Cramer\n>\n\nApologies for the delay, here.\n\nI've attached v2 of this patch which applies cleanly to master. The patch\nalso now includes a test demonstrating that pg_logical_emit_message\ncorrectly sends non-transactional messages when called inside a transaction\nthat is rolled back. (Thank you, Andres, for this suggestion.) The only\nother change is that I added this new message type into the\nLogicalRepMsgType enum added earlier this week.\n\nLet me know what you think.\n\nCheers,\nDave", "msg_date": "Wed, 4 Nov 2020 21:46:13 -0600", "msg_from": "David Pirotte <dpirotte@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Any objections to implementing LogicalDecodeMessageCB for\n pgoutput?" }, { "msg_contents": "On Thu, Nov 5, 2020 at 9:16 AM David Pirotte <dpirotte@gmail.com> wrote:\n>\n> On Tue, Nov 3, 2020 at 7:19 AM Dave Cramer <davecramer@gmail.com> wrote:\n>>\n>> David,\n>>\n>> On Thu, 24 Sep 2020 at 00:22, Michael Paquier <michael@paquier.xyz> wrote:\n>>>\n>>> On Tue, Sep 08, 2020 at 12:18:23PM -0700, Andres Freund wrote:\n>>> > A test verifying that a non-transactional message in later aborted\n>>> > transaction is handled correctly would be good.\n>>>\n>>> On top of that, the patch needs a rebase as it visibly fails to apply,\n>>> per the CF bot.\n>>> --\n>>> Michael\n>>\n>>\n>> Where are you with this? Are you able to work on it ?\n>> Dave Cramer\n>\n>\n> Apologies for the delay, here.\n>\n> I've attached v2 of this patch which applies cleanly to master. The patch also now includes a test demonstrating that pg_logical_emit_message correctly sends non-transactional messages when called inside a transaction that is rolled back. (Thank you, Andres, for this suggestion.) The only other change is that I added this new message type into the LogicalRepMsgType enum added earlier this week.\n>\n> Let me know what you think.\n\nThis feature looks useful. Here are some comments.\n\n+/*\n+ * Write MESSAGE to stream\n+ */\n+void\n+logicalrep_write_message(StringInfo out, ReorderBufferTXN *txn, XLogRecPtr lsn,\n+ bool transactional, const char *prefix, Size sz,\n+ const char *message)\n+{\n+ uint8 flags = 0;\n+\n+ pq_sendbyte(out, LOGICAL_REP_MSG_MESSAGE);\n+\n\nSimilar to the UPDATE/DELETE/INSERT records decoded when streaming is being\nused, we need to add transaction id for transactional messages. May be we add\nthat even in case of non-streaming case and use it to decide whether it's a\ntransactional message or not. That might save us a byte when we are adding a\ntransaction id.\n\n+ /* encode and send message flags */\n+ if (transactional)\n+ flags |= MESSAGE_TRANSACTIONAL;\n+\n+ pq_sendint8(out, flags);\n\nIs 8 bits enough considering future improvements? 
What if we need to use more\nthan 8 bit flags?\n\n@@ -1936,6 +1936,9 @@ apply_dispatch(StringInfo s)\n apply_handle_origin(s);\n return;\n\n+ case LOGICAL_REP_MSG_MESSAGE:\n\nShould we add the logical message to the WAL downstream so that it flows\nfurther down to a cascaded logical replica. Should that be controlled\nby an option?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 6 Nov 2020 18:35:43 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Any objections to implementing LogicalDecodeMessageCB for\n pgoutput?" }, { "msg_contents": "On Fri, Nov 6, 2020 at 7:05 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> +/*\n> + * Write MESSAGE to stream\n> + */\n> +void\n> +logicalrep_write_message(StringInfo out, ReorderBufferTXN *txn,\n> XLogRecPtr lsn,\n> + bool transactional, const char *prefix, Size sz,\n> + const char *message)\n> +{\n> + uint8 flags = 0;\n> +\n> + pq_sendbyte(out, LOGICAL_REP_MSG_MESSAGE);\n> +\n>\n> Similar to the UPDATE/DELETE/INSERT records decoded when streaming is being\n> used, we need to add transaction id for transactional messages. May be we\n> add\n> that even in case of non-streaming case and use it to decide whether it's a\n> transactional message or not. That might save us a byte when we are adding\n> a\n> transaction id.\n>\n\nMy preference is to add in the xid when streaming is enabled. (1) It is a\nmore consistent implementation with the other message types, and (2) it\nsaves 3 bytes when streaming is disabled. I've attached an updated patch.\nIt is not a strong preference, though, if you suggest a different approach.\n\n\n> + /* encode and send message flags */\n> + if (transactional)\n> + flags |= MESSAGE_TRANSACTIONAL;\n> +\n> + pq_sendint8(out, flags);\n>\n> Is 8 bits enough considering future improvements? What if we need to use\n> more\n> than 8 bit flags?\n>\n\n8 possible flags already sounds like a lot, here, so I suspect that a byte\nwill be sufficient for the foreseeable future. If we needed to go beyond\nthat, it'd be a protocol version bump. (Well, it might first warrant\nreflection as to why we had so many flags...)\n\n\n> @@ -1936,6 +1936,9 @@ apply_dispatch(StringInfo s)\n> apply_handle_origin(s);\n> return;\n>\n> + case LOGICAL_REP_MSG_MESSAGE:\n>\n> Should we add the logical message to the WAL downstream so that it flows\n> further down to a cascaded logical replica. Should that be controlled\n> by an option?\n>\n\nHmm, I can't think of a use case for this, but perhaps someone could. Do\nyou, or does anyone, have something in mind? I think we provide a lot of\nvalue with logical messages in pgoutput without supporting consumption from\na downstream replica, so perhaps this is better considered separately.\n\nIf we want this, I think we would add a \"messages\" option on the\nsubscription. If present, the subscriber will receive messages and pass\nthem to any downstream subscribers. I started working on this and it does\nexpand the change's footprint. As is, a developer would consume messages by\nconnecting to a pgoutput slot on the message's origin. (e.g. via Debezium\nor a custom client) The subscription and logical worker infrastructure\ndon't know about messages, but they would need to in order to support\nconsuming an origin's messages on a downstream logical replica. 
In\nany case, I'll keep working on it so we can see what it looks like.\n\nCheers,\nDave", "msg_date": "Wed, 18 Nov 2020 00:04:34 -0600", "msg_from": "David Pirotte <dpirotte@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Any objections to implementing LogicalDecodeMessageCB for\n pgoutput?" }, { "msg_contents": "On Wed, 18 Nov 2020 at 03:04, David Pirotte <dpirotte@gmail.com> wrote:\n\n> On Fri, Nov 6, 2020 at 7:05 AM Ashutosh Bapat <\n> ashutosh.bapat.oss@gmail.com> wrote:\n>\n>> +/*\n>> + * Write MESSAGE to stream\n>> + */\n>> +void\n>> +logicalrep_write_message(StringInfo out, ReorderBufferTXN *txn,\n>> XLogRecPtr lsn,\n>> + bool transactional, const char *prefix, Size sz,\n>> + const char *message)\n>> +{\n>> + uint8 flags = 0;\n>> +\n>> + pq_sendbyte(out, LOGICAL_REP_MSG_MESSAGE);\n>> +\n>>\n>> Similar to the UPDATE/DELETE/INSERT records decoded when streaming is\n>> being\n>> used, we need to add transaction id for transactional messages. May be we\n>> add\n>> that even in case of non-streaming case and use it to decide whether it's\n>> a\n>> transactional message or not. That might save us a byte when we are\n>> adding a\n>> transaction id.\n>>\n>\n> I also reviewed your patch. This feature would be really useful for\nreplication\nscenarios. Supporting this feature means that you don't need to use a table\nto\npass messages from one node to another one. Here are a few comments/ideas.\n\n@@ -1936,6 +1936,9 @@ apply_dispatch(StringInfo s)\n apply_handle_origin(s);\n return;\n\n+ case LOGICAL_REP_MSG_MESSAGE:\n+ return;\n+\n\nI added a comment explaining that this message is not used by logical\nreplication but it could possibly be useful for other applications using\npgoutput. See patch 0003.\n\nAndres mentioned in this thread [1] that we could simplify the\nparse_output_parameters. I refactored this function to pass only\nPGOutputData\nto it and also move enable_streaming to this struct. I use a similar\napproach\nin wal2json; it is easier to get the options since it is available in the\nlogical decoding context. See patch 0004.\n\n\n> My preference is to add in the xid when streaming is enabled. (1) It is a\n> more consistent implementation with the other message types, and (2) it\n> saves 3 bytes when streaming is disabled. I've attached an updated patch.\n> It is not a strong preference, though, if you suggest a different approach.\n>\n>\nI agree with this approach. xid is available in the BEGIN message if the\nMESSAGE is transactional. For non-transactional messages, xid is not\navailable.\nYour implementation is not consistent with the other pgoutput_XXX functions\nthat check in_streaming in the pgoutput_XXX and pass parameters to other\nfunctions that require it. See patch 005.\n\nThe last patch 0006 overhauls your tests. I added/changed some comments,\nreplaced identifiers with uppercase letters, used 'pgoutput' as prefix,\nchecked\nthe prefix, and avoided a checkpoint during the test. There are possibly\nother\nimprovements that I didn't mention here. Maybe you can use\nencode(substr(data,\n1, 1), 'escape') instead of comparing the ASCII code (77).\n\n\n> Should we add the logical message to the WAL downstream so that it flows\n>>\n> further down to a cascaded logical replica. Should that be controlled\n>> by an option?\n>>\n>\n> Hmm, I can't think of a use case for this, but perhaps someone could. Do\n> you, or does anyone, have something in mind? 
I think we provide a lot of\n> value with logical messages in pgoutput without supporting consumption from\n> a downstream replica, so perhaps this is better considered separately.\n>\n> If we want this, I think we would add a \"messages\" option on the\n> subscription. If present, the subscriber will receive messages and pass\n> them to any downstream subscribers. I started working on this and it does\n> expand the change's footprint. As is, a developer would consume messages by\n> connecting to a pgoutput slot on the message's origin. (e.g. via Debezium\n> or a custom client) The subscription and logical worker infrastructure\n> don't know about messages, but they would need to in order to support\n> consuming an origin's messages on a downstream logical replica. In\n> any case, I'll keep working on it so we can see what it looks like.\n>\n> The decision to send received messages to downstream nodes should be made\nby\nthe subscriber. If the subscriber wants to replicate messages to downstream\nnodes, the worker should call LogLogicalMessage.\n\nThis does not belong to this patch but when/if this patch is committed, I\nwill\nsubmit a patch to filter messages by prefix. wal2json has a similar\n(filter-msg-prefixes / add-msg-prefixes) feature and it is useful for cases\nwhere you are handling multiple output plugins like wal2json and pgoutput.\nThe\nidea is to avoid sending useless messages to some node that (i) don't know\nhow\nto process it and (ii) has no interest in it.\n\nPS> I'm attaching David's patches (0001 and 0002) again to keep cfbot happy.\n\n[1]\nhttps://www.postgresql.org/message-id/20200908191823.pmsoobzearkrmtg4%40alap3.anarazel.de\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 25 Nov 2020 00:28:30 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Any objections to implementing LogicalDecodeMessageCB for\n pgoutput?" }, { "msg_contents": "Hi David,\n\nOn 11/24/20 10:28 PM, Euler Taveira wrote:\n> \n> I also reviewed your patch. This feature would be really useful for \n> replication\n> scenarios. Supporting this feature means that you don't need to use a \n> table to\n> pass messages from one node to another one. Here are a few comments/ideas.\n\nDo you know when you'll have a chance to look at Euler's suggestions? \nAlso, have Andres' suggestions/concerns upthread been addressed?\n\nMarked Waiting on Author.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 9 Mar 2021 09:28:45 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Any objections to implementing LogicalDecodeMessageCB for\n pgoutput?" }, { "msg_contents": "On Wed, Nov 25, 2020 at 8:58 AM Euler Taveira\n<euler.taveira@2ndquadrant.com> wrote:\n>\n> On Wed, 18 Nov 2020 at 03:04, David Pirotte <dpirotte@gmail.com> wrote:\n>>\n>> On Fri, Nov 6, 2020 at 7:05 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n>>>\n>>> +/*\n>>> + * Write MESSAGE to stream\n>>> + */\n>>> +void\n>>> +logicalrep_write_message(StringInfo out, ReorderBufferTXN *txn, XLogRecPtr lsn,\n>>> + bool transactional, const char *prefix, Size sz,\n>>> + const char *message)\n>>> +{\n>>> + uint8 flags = 0;\n>>> +\n>>> + pq_sendbyte(out, LOGICAL_REP_MSG_MESSAGE);\n>>> +\n>>>\n>>> Similar to the UPDATE/DELETE/INSERT records decoded when streaming is being\n>>> used, we need to add transaction id for transactional messages. 
May be we add\n>>> that even in case of non-streaming case and use it to decide whether it's a\n>>> transactional message or not. That might save us a byte when we are adding a\n>>> transaction id.\n>>\n>>\n> I also reviewed your patch. This feature would be really useful for replication\n> scenarios. Supporting this feature means that you don't need to use a table to\n> pass messages from one node to another one. Here are a few comments/ideas.\n>\n\nYour ideas/suggestions look good to me. Don't we need to provide a\nread function corresponding to logicalrep_write_message? We have it\nfor other write functions. Can you please combine all of your changes\ninto one patch?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 1 Apr 2021 15:49:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Any objections to implementing LogicalDecodeMessageCB for\n pgoutput?" }, { "msg_contents": "On Thu, Apr 1, 2021, at 7:19 AM, Amit Kapila wrote:\n> Your ideas/suggestions look good to me. Don't we need to provide a\n> read function corresponding to logicalrep_write_message? We have it\n> for other write functions. Can you please combine all of your changes\n> into one patch?\nThanks for taking a look at this patch. I didn't consider a\nlogicalrep_read_message function because the protocol doesn't support it yet.\n\n/*\n* Logical replication does not use generic logical messages yet.\n* Although, it could be used by other applications that use this\n* output plugin.\n*/\n\nSomeone that is inspecting the code in the future could possibly check this\ndiscussion to understand why this function isn't available.\n\nThis new patch set version has 2 patches that is because there are 2 separate\nchanges: parse_output_parameters() refactor and logical decoding message\nsupport.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Fri, 02 Apr 2021 20:55:52 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_Any_objections_to_implementing_LogicalDecodeMessageCB_for_?=\n =?UTF-8?Q?pgoutput=3F?=" }, { "msg_contents": "On Sat, Apr 3, 2021 at 5:26 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Thu, Apr 1, 2021, at 7:19 AM, Amit Kapila wrote:\n>\n> This new patch set version has 2 patches that is because there are 2 separate\n> changes: parse_output_parameters() refactor and logical decoding message\n> support.\n>\n\nI have made few minor changes in the attached. (a) Initialize the\nstreaming message callback API, (b) update docs to reflect that XID\ncan be sent for streaming of in-progress transactions, I see that the\nsame information needs to be updated for a few other protocol message\nbut we can do that as a separate patch (c) slightly tweaked the commit\nmessages\n\nLet me know what you think? I am planning to push this tomorrow unless\nyou or someone else has any comments.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 5 Apr 2021 12:36:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Any objections to implementing LogicalDecodeMessageCB for\n pgoutput?" }, { "msg_contents": "On Mon, Apr 5, 2021, at 4:06 AM, Amit Kapila wrote:\n> I have made few minor changes in the attached. 
(a) Initialize the\n> streaming message callback API, (b) update docs to reflect that XID\n> can be sent for streaming of in-progress transactions, I see that the\n> same information needs to be updated for a few other protocol message\n> but we can do that as a separate patch (c) slightly tweaked the commit\n> messages\nGood catch. I completely forgot the streaming of in progress transactions. I\nagree that the documentation for transaction should be added as a separate\npatch since the scope is beyond this feature.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Mon, Apr 5, 2021, at 4:06 AM, Amit Kapila wrote:I have made few minor changes in the attached. (a) Initialize thestreaming message callback API, (b) update docs to reflect that XIDcan be sent for streaming of in-progress transactions, I see that thesame information needs to be updated for a few other protocol messagebut we can do that as a separate patch (c) slightly tweaked the commitmessagesGood catch. I completely forgot the streaming of in progress transactions. Iagree that the documentation for transaction should be added as a separatepatch since the scope is beyond this feature.--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Mon, 05 Apr 2021 09:15:33 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_Any_objections_to_implementing_LogicalDecodeMessageCB_for_?=\n =?UTF-8?Q?pgoutput=3F?=" }, { "msg_contents": "On Mon, Apr 5, 2021 at 5:45 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Mon, Apr 5, 2021, at 4:06 AM, Amit Kapila wrote:\n>\n> I have made few minor changes in the attached. (a) Initialize the\n> streaming message callback API, (b) update docs to reflect that XID\n> can be sent for streaming of in-progress transactions, I see that the\n> same information needs to be updated for a few other protocol message\n> but we can do that as a separate patch (c) slightly tweaked the commit\n> messages\n>\n> Good catch. I completely forgot the streaming of in progress transactions. I\n> agree that the documentation for transaction should be added as a separate\n> patch since the scope is beyond this feature.\n>\n\nI have pushed this work and updated the CF entry accordingly.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 7 Apr 2021 10:50:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Any objections to implementing LogicalDecodeMessageCB for\n pgoutput?" }, { "msg_contents": "On Wed, Apr 7, 2021, at 2:20 AM, Amit Kapila wrote:\n> I have pushed this work and updated the CF entry accordingly.\nGreat. Thank you.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Apr 7, 2021, at 2:20 AM, Amit Kapila wrote:I have pushed this work and updated the CF entry accordingly.Great. Thank you.--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Wed, 07 Apr 2021 09:09:41 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_Any_objections_to_implementing_LogicalDecodeMessageCB_for_?=\n =?UTF-8?Q?pgoutput=3F?=" } ]
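For anyone building a consumer or a test plugin around the feature committed above, the hook at the centre of the thread is the generic logical decoding message callback. The fragment below shows the LogicalDecodeMessageCB signature and how an output plugin registers it; the callback body (emitting the prefix and size as a text line) is a toy illustration rather than the pgoutput_message() that was committed, and the begin/change/commit callbacks a real plugin must also supply are elided.

    #include "postgres.h"
    #include "fmgr.h"
    #include "lib/stringinfo.h"
    #include "replication/logical.h"
    #include "replication/output_plugin.h"

    PG_MODULE_MAGIC;

    /*
     * Minimal example of a LogicalDecodeMessageCB implementation for a toy
     * output plugin: it just writes one text line per logical message.
     */
    static void
    demo_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
                 XLogRecPtr message_lsn, bool transactional,
                 const char *prefix, Size sz, const char *message)
    {
        OutputPluginPrepareWrite(ctx, true);
        appendStringInfo(ctx->out, "message: transactional=%d prefix=%s size=%zu",
                         (int) transactional, prefix, (size_t) sz);
        OutputPluginWrite(ctx, true);
    }

    void
    _PG_output_plugin_init(OutputPluginCallbacks *cb)
    {
        /* startup/begin/change/commit callbacks omitted for brevity */
        cb->message_cb = demo_message;
    }

On the SQL side such messages are produced with pg_logical_emit_message(transactional, prefix, content); with the committed pgoutput change they reach slot consumers that pass the messages option, which is what the TAP test added by the patch exercises.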
[ { "msg_contents": "Greetings,\n\nI'm looking into an issue that we're seeing on the PG archives server\nwith runaway queries that don't seem to ever want to end- and ignore\nsignals.\n\nThis is PG11, 11.8-1.pgdg100+1 specifically on Debian/buster and what\nwe're seeing is the loop in hlCover() (wparser_def.c:2071 to 2093) is\nlasting an awful long time without any CFI call. It's possible the CFI\ncall should actually go elsewhere, but the complete lack of any CFI in\nwparser_def.c or tsvector_op.c seems a bit concerning.\n\nI'm suspicious there's something else going on here that's causing this\nto take a long time but I don't have any smoking gun as yet.\n\nThanks,\n\nStephen", "msg_date": "Fri, 24 Jul 2020 12:05:35 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": true, "msg_subject": "Missing CFI in hlCover()?" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> I'm looking into an issue that we're seeing on the PG archives server\n> with runaway queries that don't seem to ever want to end- and ignore\n> signals.\n\n> This is PG11, 11.8-1.pgdg100+1 specifically on Debian/buster and what\n> we're seeing is the loop in hlCover() (wparser_def.c:2071 to 2093) is\n> lasting an awful long time without any CFI call. It's possible the CFI\n> call should actually go elsewhere, but the complete lack of any CFI in\n> wparser_def.c or tsvector_op.c seems a bit concerning.\n\n> I'm suspicious there's something else going on here that's causing this\n> to take a long time but I don't have any smoking gun as yet.\n\nHm. I'd vote for a CFI within the recursion in TS_execute(), if there's\nnot one there yet. Maybe hlFirstIndex needs one too --- if there are\na lot of words in the query, maybe that could be slow? Did you pin the\nblame down any more precisely than hlCover?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Jul 2020 12:21:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing CFI in hlCover()?" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > I'm looking into an issue that we're seeing on the PG archives server\n> > with runaway queries that don't seem to ever want to end- and ignore\n> > signals.\n> \n> > This is PG11, 11.8-1.pgdg100+1 specifically on Debian/buster and what\n> > we're seeing is the loop in hlCover() (wparser_def.c:2071 to 2093) is\n> > lasting an awful long time without any CFI call. It's possible the CFI\n> > call should actually go elsewhere, but the complete lack of any CFI in\n> > wparser_def.c or tsvector_op.c seems a bit concerning.\n> \n> > I'm suspicious there's something else going on here that's causing this\n> > to take a long time but I don't have any smoking gun as yet.\n> \n> Hm. I'd vote for a CFI within the recursion in TS_execute(), if there's\n> not one there yet. Maybe hlFirstIndex needs one too --- if there are\n> a lot of words in the query, maybe that could be slow? Did you pin the\n> blame down any more precisely than hlCover?\n\nI've definitely seen hlFirstIndex take a few seconds to run (while\nrunning this under gdb and stepping through), so that could be a good\nchoice to place one (perhaps even additionally to this...). 
I have to\nadmit to wondering if we shouldn't consider having one in\ncheck_stack_depth() to try and reduce the risk of us forgetting to have\none in sensible places, though I've not really looked at all the callers\nand that might not be reasonable in some cases (though I wonder if maybe\nwe consider having a 'default' version that has a CFI, and an alternate\nthat doesn't...).\n\nThe depth of recursion for TS_execute_recurse() is actually not all that\nbad either though (only 6 or so, as the query string here is:\n\"ERROR: The required file is not available\"), so maybe that isn't really\nthe right thing to be thinking here.\n\nDown in checkcondition_HL(), checkval->len is 213601, and it seems to\npretty much always end up with a result of TS_NO, but doesn't seem to\ntake all that long to arrive at that.\n\nOver in hlFirstIndex():\n\nhlFirstIndex (prs=0x7ffc2d4b16c0, prs=0x7ffc2d4b16c0, pos=219518, query=0x559619f81528) at ./build/../src/backend/tsearch/wparser_def.c:2013\n2013 hlFirstIndex(HeadlineParsedText *prs, TSQuery query, int pos)\n(gdb) n\n2026 if (item->type == QI_VAL &&\n(gdb) \n2029 item++;\n(gdb) p pos\n$72 = 219518\n(gdb) p prs->curwords\n$73 = 583766\n(gdb) \n\nHere's a full backtrace down to the checkcondition_HL():\n\n(gdb) i s\n#0 checkcondition_HL (opaque=0x7ffc2d4b11f0, val=0x559619f815c0, data=0x0) at ./build/../src/backend/tsearch/wparser_def.c:1981\n#1 0x0000559617eced2b in TS_execute_recurse (curitem=0x559619f815c0, arg=arg@entry=0x7ffc2d4b11f0, flags=flags@entry=0, chkcond=chkcond@entry=0x559617df0120 <checkcondition_HL>)\n at ./build/../src/backend/utils/adt/tsvector_op.c:1872 #2 0x0000559617ecedd1 in TS_execute_recurse (curitem=0x559619f815a8, arg=arg@entry=0x7ffc2d4b11f0, flags=flags@entry=0, chkcond=chkcond@entry=0x559617df0120 <checkcondition_HL>)\n at ./build/../src/backend/utils/adt/tsvector_op.c:1892\n#3 0x0000559617ecedd1 in TS_execute_recurse (curitem=0x559619f81590, arg=arg@entry=0x7ffc2d4b11f0, flags=flags@entry=0, chkcond=chkcond@entry=0x559617df0120 <checkcondition_HL>) at ./build/../src/backend/utils/adt/tsvector_op.c:1892\n#4 0x0000559617ecedd1 in TS_execute_recurse (curitem=0x559619f81578, arg=arg@entry=0x7ffc2d4b11f0, flags=flags@entry=0, chkcond=chkcond@entry=0x559617df0120 <checkcondition_HL>)\n at ./build/../src/backend/utils/adt/tsvector_op.c:1892\n#5 0x0000559617ecedd1 in TS_execute_recurse (curitem=0x559619f81560, arg=arg@entry=0x7ffc2d4b11f0, flags=flags@entry=0, chkcond=chkcond@entry=0x559617df0120 <checkcondition_HL>) at ./build/../src/backend/utils/adt/tsvector_op.c:1892\n#6 0x0000559617ecedd1 in TS_execute_recurse (curitem=0x559619f81548, arg=arg@entry=0x7ffc2d4b11f0, flags=flags@entry=0, chkcond=chkcond@entry=0x559617df0120 <checkcondition_HL>)\n at ./build/../src/backend/utils/adt/tsvector_op.c:1892\n#7 0x0000559617ecedd1 in TS_execute_recurse (curitem=curitem@entry=0x559619f81530, arg=arg@entry=0x7ffc2d4b11f0, flags=flags@entry=0, chkcond=chkcond@entry=0x559617df0120 <checkcondition_HL>)\n at ./build/../src/backend/utils/adt/tsvector_op.c:1892\n#8 0x0000559617ed26d9 in TS_execute (curitem=curitem@entry=0x559619f81530, arg=arg@entry=0x7ffc2d4b11f0, flags=flags@entry=0, chkcond=chkcond@entry=0x559617df0120 <checkcondition_HL>)\n at ./build/../src/backend/utils/adt/tsvector_op.c:1854\n#9 0x0000559617df041e in hlCover (prs=prs@entry=0x7ffc2d4b16c0, query=query@entry=0x559619f81528, p=p@entry=0x7ffc2d4b12a0, q=q@entry=0x7ffc2d4b12a4) at ./build/../src/backend/tsearch/wparser_def.c:2075 #10 0x0000559617df1a2d in mark_hl_words 
(max_words=35, min_words=15, shortword=3, highlightall=<optimized out>, query=<optimized out>, prs=0x7ffc2d4b16c0) at ./build/../src/backend/tsearch/wparser_def.c:2393\n#11 prsd_headline (fcinfo=<optimized out>) at ./build/../src/backend/tsearch/wparser_def.c:2614\n#12 0x0000559617f0cdab in FunctionCall3Coll (flinfo=flinfo@entry=0x559619fe1d90, collation=collation@entry=0, arg1=arg1@entry=140721068381888, arg2=<optimized out>, arg3=arg3@entry=94103169144104)\n at ./build/../src/backend/utils/fmgr/fmgr.c:1170\n#13 0x0000559617def48b in ts_headline_byid_opt (fcinfo=fcinfo@entry=0x7ffc2d4b1740) at ./build/../src/backend/tsearch/wparser.c:336\n#14 0x0000559617f0c484 in DirectFunctionCall4Coll (func=0x559617def380 <ts_headline_byid_opt>, collation=<optimized out>, arg1=<optimized out>, arg2=<optimized out>, arg3=<optimized out>, arg4=<optimized out>)\n at ./build/../src/backend/utils/fmgr/fmgr.c:877\n#15 0x0000559617c8ad41 in ExecInterpExpr (state=0x559619f9b170, econtext=0x559619fd2ae0, isnull=<optimized out>) at ./build/../src/backend/executor/execExprInterp.c:678\n#16 0x0000559617cb51da in ExecEvalExprSwitchContext (isNull=0x7ffc2d4b1b97, econtext=0x559619fd2ae0, state=0x559619f9b170) at ./build/../src/include/executor/executor.h:313\n#17 ExecProject (projInfo=0x559619f9b168) at ./build/../src/include/executor/executor.h:347\n#18 ExecResult (pstate=<optimized out>) at ./build/../src/backend/executor/nodeResult.c:136\n#19 0x0000559617c955d9 in ExecProcNodeInstr (node=0x559619fd29d0) at ./build/../src/backend/executor/execProcnode.c:461\n#20 0x0000559617cadc08 in ExecProcNode (node=0x559619fd29d0) at ./build/../src/include/executor/executor.h:247\n#21 ExecLimit (pstate=0x559619fd27e0) at ./build/../src/backend/executor/nodeLimit.c:149\n#22 0x0000559617c955d9 in ExecProcNodeInstr (node=0x559619fd27e0) at ./build/../src/backend/executor/execProcnode.c:461\n#23 0x0000559617c8e37b in ExecProcNode (node=0x559619fd27e0) at ./build/../src/include/executor/executor.h:247\n#24 ExecutePlan (execute_once=<optimized out>, dest=0x559618187460 <donothingDR>, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>,\n planstate=0x559619fd27e0, estate=0x559619fd25d0) at ./build/../src/backend/executor/execMain.c:1723\n#25 standard_ExecutorRun (queryDesc=0x559619f92af8, direction=<optimized out>, count=0, execute_once=<optimized out>) at ./build/../src/backend/executor/execMain.c:364\n#26 0x00007f607e5e5045 in pgss_ExecutorRun (queryDesc=0x559619f92af8, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at ./build/../contrib/pg_stat_statements/pg_stat_statements.c:892\n#27 0x0000559617c251d6 in ExplainOnePlan (plannedstmt=plannedstmt@entry=0x559619f937e8, into=into@entry=0x0, es=es@entry=0x559619fda528,\n queryString=queryString@entry=0x559619ea3e80 \"explain (analyze, buffers) SELECT messageid, date, subject, _from, ts_rank_cd(fti, plainto_tsquery('public.pg', 'ERROR: The required file is not available')), ts\n_headline(bodytxt, plainto_tsquery('pub\"..., params=params@entry=0x0, queryEnv=<optimized out>, planduration=0x7ffc2d4b1e70) at ./build/../src/backend/commands/explain.c:536\n#28 0x0000559617c254ee in ExplainOneQuery (queryEnv=<optimized out>, params=0x0,\n queryString=0x559619ea3e80 \"explain (analyze, buffers) SELECT messageid, date, subject, _from, ts_rank_cd(fti, plainto_tsquery('public.pg', 'ERROR: The required file is not available')), ts_headline(bodytxt,\n plainto_tsquery('pub\"..., es=0x559619fda528, 
into=0x0, cursorOptions=<optimized out>, query=<optimized out>) at ./build/../src/backend/commands/explain.c:372\n#29 ExplainOneQuery (query=<optimized out>, cursorOptions=<optimized out>, into=0x0, es=0x559619fda528,\n queryString=0x559619ea3e80 \"explain (analyze, buffers) SELECT messageid, date, subject, _from, ts_rank_cd(fti, plainto_tsquery('public.pg', 'ERROR: The required file is not available')), ts_headline(bodytxt,\n plainto_tsquery('pub\"..., params=0x0, queryEnv=0x0) at ./build/../src/backend/commands/explain.c:340\n#30 0x0000559617c25a2d in ExplainQuery (pstate=pstate@entry=0x559619f8c0e8, stmt=stmt@entry=0x55961a045650,\n queryString=queryString@entry=0x559619ea3e80 \"explain (analyze, buffers) SELECT messageid, date, subject, _from, ts_rank_cd(fti, plainto_tsquery('public.pg', 'ERROR: The required file is not available')), ts\n_headline(bodytxt, plainto_tsquery('pub\"..., params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, dest=dest@entry=0x559619f8c058) at ./build/../src/backend/commands/explain.c:255\n#31 0x0000559617deb94b in standard_ProcessUtility (pstmt=pstmt@entry=0x55961a045700,\n queryString=queryString@entry=0x559619ea3e80 \"explain (analyze, buffers) SELECT messageid, date, subject, _from, ts_rank_cd(fti, plainto_tsquery('public.pg', 'ERROR: The required file is not available')), ts\n_headline(bodytxt, plainto_tsquery('pub\"..., context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, dest=dest@entry=0x559619f8c058,\n completionTag=0x7ffc2d4b22b0 \"\") at ./build/../src/backend/tcop/utility.c:675\n#32 0x00007f607e5e7347 in pgss_ProcessUtility (pstmt=0x55961a045700,\n queryString=0x559619ea3e80 \"explain (analyze, buffers) SELECT messageid, date, subject, _from, ts_rank_cd(fti, plainto_tsquery('public.pg', 'ERROR: The required file is not available')), ts_headline(bodytxt,\n plainto_tsquery('pub\"..., context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x559619f8c058, completionTag=0x7ffc2d4b22b0 \"\") at ./build/../contrib/pg_stat_statements/pg_stat_statements.c:1005\n#33 0x0000559617de83d9 in PortalRunUtility (portal=0x559619f33de0, pstmt=0x55961a045700, isTopLevel=<optimized out>, setHoldSnapshot=<optimized out>, dest=0x559619f8c058, completionTag=0x7ffc2d4b22b0 \"\")\n at ./build/../src/backend/tcop/pquery.c:1178\n#34 0x0000559617de91ea in FillPortalStore (portal=0x559619f33de0, isTopLevel=<optimized out>) at ./build/../src/include/nodes/pg_list.h:79\n#35 0x0000559617de9dc7 in PortalRun (portal=portal@entry=0x559619f33de0, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x559619fd7c08,\n altdest=altdest@entry=0x559619fd7c08, completionTag=0x7ffc2d4b24f0 \"\") at ./build/../src/backend/tcop/pquery.c:768\n#36 0x0000559617de5a3e in exec_simple_query (\n query_string=0x559619ea3e80 \"explain (analyze, buffers) SELECT messageid, date, subject, _from, ts_rank_cd(fti, plainto_tsquery('public.pg', 'ERROR: The required file is not available')), ts_headline(bodytxt\n, plainto_tsquery('pub\"...) 
at ./build/../src/backend/tcop/postgres.c:1145\n#37 0x0000559617de72a6 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x559619efa418, dbname=<optimized out>, username=<optimized out>) at ./build/../src/backend/tcop/postgres.c:4193\n#38 0x0000559617d709e2 in BackendRun (port=0x559619eee4c0) at ./build/../src/backend/postmaster/postmaster.c:4364\n#39 BackendStartup (port=0x559619eee4c0) at ./build/../src/backend/postmaster/postmaster.c:4036\n#40 ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1707\n#41 0x0000559617d71886 in PostmasterMain (argc=5, argv=0x559619e9e930) at ./build/../src/backend/postmaster/postmaster.c:1380\n#42 0x0000559617aecdc9 in main (argc=5, argv=0x559619e9e930) at ./build/../src/backend/main/main.c:228\n\nand down to the hlFirstIndex():\n\n#0 hlFirstIndex (prs=0x7ffc2d4b16c0, prs=0x7ffc2d4b16c0, pos=219518, query=0x559619f81528) at ./build/../src/backend/tsearch/wparser_def.c:2029\n#1 hlCover (prs=prs@entry=0x7ffc2d4b16c0, query=query@entry=0x559619f81528, p=p@entry=0x7ffc2d4b12a0, q=q@entry=0x7ffc2d4b12a4) at ./build/../src/backend/tsearch/wparser_def.c:2083\n#2 0x0000559617df1a2d in mark_hl_words (max_words=35, min_words=15, shortword=3, highlightall=<optimized out>, query=<optimized out>, prs=0x7ffc2d4b16c0) at ./build/../src/backend/tsearch/wparser_def.c:2393\n#3 prsd_headline (fcinfo=<optimized out>) at ./build/../src/backend/tsearch/wparser_def.c:2614\n#4 0x0000559617f0cdab in FunctionCall3Coll (flinfo=flinfo@entry=0x559619fe1d90, collation=collation@entry=0, arg1=arg1@entry=140721068381888, arg2=<optimized out>, arg3=arg3@entry=94103169144104)\n at ./build/../src/backend/utils/fmgr/fmgr.c:1170\n#5 0x0000559617def48b in ts_headline_byid_opt (fcinfo=fcinfo@entry=0x7ffc2d4b1740) at ./build/../src/backend/tsearch/wparser.c:336\n#6 0x0000559617f0c484 in DirectFunctionCall4Coll (func=0x559617def380 <ts_headline_byid_opt>, collation=<optimized out>, arg1=<optimized out>, arg2=<optimized out>, arg3=<optimized out>, arg4=<optimized out>)\n at ./build/../src/backend/utils/fmgr/fmgr.c:877\n#7 0x0000559617c8ad41 in ExecInterpExpr (state=0x559619f9b170, econtext=0x559619fd2ae0, isnull=<optimized out>) at ./build/../src/backend/executor/execExprInterp.c:678\n#8 0x0000559617cb51da in ExecEvalExprSwitchContext (isNull=0x7ffc2d4b1b97, econtext=0x559619fd2ae0, state=0x559619f9b170) at ./build/../src/include/executor/executor.h:313\n#9 ExecProject (projInfo=0x559619f9b168) at ./build/../src/include/executor/executor.h:347\n#10 ExecResult (pstate=<optimized out>) at ./build/../src/backend/executor/nodeResult.c:136\n#11 0x0000559617c955d9 in ExecProcNodeInstr (node=0x559619fd29d0) at ./build/../src/backend/executor/execProcnode.c:461\n#12 0x0000559617cadc08 in ExecProcNode (node=0x559619fd29d0) at ./build/../src/include/executor/executor.h:247\n#13 ExecLimit (pstate=0x559619fd27e0) at ./build/../src/backend/executor/nodeLimit.c:149\n#14 0x0000559617c955d9 in ExecProcNodeInstr (node=0x559619fd27e0) at ./build/../src/backend/executor/execProcnode.c:461\n#15 0x0000559617c8e37b in ExecProcNode (node=0x559619fd27e0) at ./build/../src/include/executor/executor.h:247\n#16 ExecutePlan (execute_once=<optimized out>, dest=0x559618187460 <donothingDR>, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>,\n planstate=0x559619fd27e0, estate=0x559619fd25d0) at ./build/../src/backend/executor/execMain.c:1723\n#17 standard_ExecutorRun (queryDesc=0x559619f92af8, direction=<optimized out>, count=0, 
execute_once=<optimized out>) at ./build/../src/backend/executor/execMain.c:364\n#18 0x00007f607e5e5045 in pgss_ExecutorRun (queryDesc=0x559619f92af8, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at ./build/../contrib/pg_stat_statements/pg_stat_statements.c:892\n#19 0x0000559617c251d6 in ExplainOnePlan (plannedstmt=plannedstmt@entry=0x559619f937e8, into=into@entry=0x0, es=es@entry=0x559619fda528,\n queryString=queryString@entry=0x559619ea3e80 \"explain (analyze, buffers) SELECT messageid, date, subject, _from, ts_rank_cd(fti, plainto_tsquery('public.pg', 'ERROR: The required file is not available')), ts\n_headline(bodytxt, plainto_tsquery('pub\"..., params=params@entry=0x0, queryEnv=<optimized out>, planduration=0x7ffc2d4b1e70) at ./build/../src/backend/commands/explain.c:536\n#20 0x0000559617c254ee in ExplainOneQuery (queryEnv=<optimized out>, params=0x0,\n queryString=0x559619ea3e80 \"explain (analyze, buffers) SELECT messageid, date, subject, _from, ts_rank_cd(fti, plainto_tsquery('public.pg', 'ERROR: The required file is not available')), ts_headline(bodytxt,\n plainto_tsquery('pub\"..., es=0x559619fda528, into=0x0, cursorOptions=<optimized out>, query=<optimized out>) at ./build/../src/backend/commands/explain.c:372\n#21 ExplainOneQuery (query=<optimized out>, cursorOptions=<optimized out>, into=0x0, es=0x559619fda528,\n queryString=0x559619ea3e80 \"explain (analyze, buffers) SELECT messageid, date, subject, _from, ts_rank_cd(fti, plainto_tsquery('public.pg', 'ERROR: The required file is not available')), ts_headline(bodytxt,\n plainto_tsquery('pub\"..., params=0x0, queryEnv=0x0) at ./build/../src/backend/commands/explain.c:340\n#22 0x0000559617c25a2d in ExplainQuery (pstate=pstate@entry=0x559619f8c0e8, stmt=stmt@entry=0x55961a045650,\n queryString=queryString@entry=0x559619ea3e80 \"explain (analyze, buffers) SELECT messageid, date, subject, _from, ts_rank_cd(fti, plainto_tsquery('public.pg', 'ERROR: The required file is not available')), ts\n_headline(bodytxt, plainto_tsquery('pub\"..., params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, dest=dest@entry=0x559619f8c058) at ./build/../src/backend/commands/explain.c:255\n#23 0x0000559617deb94b in standard_ProcessUtility (pstmt=pstmt@entry=0x55961a045700,\n queryString=queryString@entry=0x559619ea3e80 \"explain (analyze, buffers) SELECT messageid, date, subject, _from, ts_rank_cd(fti, plainto_tsquery('public.pg', 'ERROR: The required file is not available')), ts\n_headline(bodytxt, plainto_tsquery('pub\"..., context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, dest=dest@entry=0x559619f8c058,\n completionTag=0x7ffc2d4b22b0 \"\") at ./build/../src/backend/tcop/utility.c:675\n#24 0x00007f607e5e7347 in pgss_ProcessUtility (pstmt=0x55961a045700,\n queryString=0x559619ea3e80 \"explain (analyze, buffers) SELECT messageid, date, subject, _from, ts_rank_cd(fti, plainto_tsquery('public.pg', 'ERROR: The required file is not available')), ts_headline(bodytxt,\n plainto_tsquery('pub\"..., context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x559619f8c058, completionTag=0x7ffc2d4b22b0 \"\") at ./build/../contrib/pg_stat_statements/pg_stat_statements.c:1005\n#25 0x0000559617de83d9 in PortalRunUtility (portal=0x559619f33de0, pstmt=0x55961a045700, isTopLevel=<optimized out>, setHoldSnapshot=<optimized out>, dest=0x559619f8c058, completionTag=0x7ffc2d4b22b0 \"\")\n at ./build/../src/backend/tcop/pquery.c:1178\n#26 0x0000559617de91ea in FillPortalStore 
(portal=0x559619f33de0, isTopLevel=<optimized out>) at ./build/../src/include/nodes/pg_list.h:79\n#27 0x0000559617de9dc7 in PortalRun (portal=portal@entry=0x559619f33de0, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x559619fd7c08,\n altdest=altdest@entry=0x559619fd7c08, completionTag=0x7ffc2d4b24f0 \"\") at ./build/../src/backend/tcop/pquery.c:768\n#28 0x0000559617de5a3e in exec_simple_query (\n query_string=0x559619ea3e80 \"explain (analyze, buffers) SELECT messageid, date, subject, _from, ts_rank_cd(fti, plainto_tsquery('public.pg', 'ERROR: The required file is not available')), ts_headline(bodytxt\n, plainto_tsquery('pub\"...) at ./build/../src/backend/tcop/postgres.c:1145\n#29 0x0000559617de72a6 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x559619efa418, dbname=<optimized out>, username=<optimized out>) at ./build/../src/backend/tcop/postgres.c:4193\n#30 0x0000559617d709e2 in BackendRun (port=0x559619eee4c0) at ./build/../src/backend/postmaster/postmaster.c:4364\n#31 BackendStartup (port=0x559619eee4c0) at ./build/../src/backend/postmaster/postmaster.c:4036\n#32 ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1707\n#33 0x0000559617d71886 in PostmasterMain (argc=5, argv=0x559619e9e930) at ./build/../src/backend/postmaster/postmaster.c:1380\n#34 0x0000559617aecdc9 in main (argc=5, argv=0x559619e9e930) at ./build/../src/backend/main/main.c:228\n\nThanks!\n\nStephen", "msg_date": "Fri, 24 Jul 2020 12:48:05 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": true, "msg_subject": "Re: Missing CFI in hlCover()?" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Hm. I'd vote for a CFI within the recursion in TS_execute(), if there's\n>> not one there yet. Maybe hlFirstIndex needs one too --- if there are\n>> a lot of words in the query, maybe that could be slow? Did you pin the\n>> blame down any more precisely than hlCover?\n\n> I've definitely seen hlFirstIndex take a few seconds to run (while\n> running this under gdb and stepping through), so that could be a good\n> choice to place one (perhaps even additionally to this...). I have to\n> admit to wondering if we shouldn't consider having one in\n> check_stack_depth() to try and reduce the risk of us forgetting to have\n> one in sensible places, though I've not really looked at all the callers\n> and that might not be reasonable in some cases (though I wonder if maybe\n> we consider having a 'default' version that has a CFI, and an alternate\n> that doesn't...).\n\nAdding it to check_stack_depth doesn't really seem like a reasonable\nproposal to me; aside from failing to separate concerns, running a\nlong time is quite distinct from taking a lot of stack.\n\nThe reason I'm eyeing TS_execute is that it involves callbacks to\nfunctions that might be pretty complex in themselves, eg during index\nscans. So that would guard a lot of territory besides hlCover. But\nhlFirstIndex could use a CFI too, if you've seen it take that long.\n(I wonder if we need to try to make it faster. I'd supposed that the\nloop was cheap enough to be a non-problem, but with large enough\ndocuments maybe not? It seems like converting to a hash table could\nbe worthwhile for a large doc.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Jul 2020 14:01:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing CFI in hlCover()?" 
}, { "msg_contents": "I tried to duplicate a multiple-second ts_headline call here, and\nfailed to, so there must be something I'm missing. Can you provide\na concrete example? I'd like to do some analysis with perf.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Jul 2020 14:25:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing CFI in hlCover()?" }, { "msg_contents": "I wrote:\n> (I wonder if we need to try to make it faster. I'd supposed that the\n> loop was cheap enough to be a non-problem, but with large enough\n> documents maybe not? It seems like converting to a hash table could\n> be worthwhile for a large doc.)\n\nOK, I dug into Stephen's test case off-list. While some CFIs would\nbe a good idea, that's just band-aid'ing the symptom. The actual\nproblem is that hlCover() is taking way too much time. The test case\nboils down to \"common_word & rare_word\" matched to a very long document,\nwherein the rare_word appears only near the front. Once we're past\nthat match, hlCover() tries all the remaining matches for common_word,\nof which there are plenty ... and for each one, it scans clear to the\nend of the document, looking vainly for a substring that will satisfy\nthe \"common_word & rare_word\" query. So obviously, this is O(N^2)\nin the length of the document :-(.\n\nI have to suppose that I introduced this problem in c9b0c678d, since\nAFAIR we weren't getting ts_headline-takes-forever complaints before\nthat. It's not immediately obvious why the preceding algorithm doesn't\nhave a similar issue, but then there's not anything at all that was\nobvious about the preceding algorithm.\n\nThe most obvious way to fix the O(N^2) hazard is to put a limit on the\nlength of \"cover\" (matching substring) that we'll consider. For the\nmark_hl_words caller, we won't highlight more than max_words words\nanyway, so it would be reasonable to bound covers to that length or\nsome small multiple of it. The other caller mark_hl_fragments is\nwilling to highlight up to max_fragments of up to max_words each, and\nthere can be some daylight between the fragments, so it's not quite\nclear what the longest reasonable match is. Still, I doubt it's\nuseful to show a headline consisting of a few words from the start of\nthe document and a few words from thousands of words later, so a limit\nof max_fragments times max_words times something would probably be\nreasonable.\n\nWe could hard-code a rule like that, or we could introduce a new\nexplicit parameter for the maximum cover length. The latter would be\nmore flexible, but we need something back-patchable and I'm concerned\nabout the compatibility hazards of adding a new parameter in minor\nreleases. So on the whole I propose hard-wiring a multiplier of,\nsay, 10 for both these cases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Jul 2020 19:46:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing CFI in hlCover()?" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> We could hard-code a rule like that, or we could introduce a new\n> explicit parameter for the maximum cover length. The latter would be\n> more flexible, but we need something back-patchable and I'm concerned\n> about the compatibility hazards of adding a new parameter in minor\n> releases. 
So on the whole I propose hard-wiring a multiplier of,\n> say, 10 for both these cases.\n\nThat sounds alright to me, though I do think we should probably still\ntoss a CFI (or two) in this path somewhere as we don't know how long\nsome of these functions might take...\n\nThanks,\n\nStephen", "msg_date": "Thu, 30 Jul 2020 10:22:33 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": true, "msg_subject": "Re: Missing CFI in hlCover()?" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> We could hard-code a rule like that, or we could introduce a new\n>> explicit parameter for the maximum cover length. The latter would be\n>> more flexible, but we need something back-patchable and I'm concerned\n>> about the compatibility hazards of adding a new parameter in minor\n>> releases. So on the whole I propose hard-wiring a multiplier of,\n>> say, 10 for both these cases.\n\n> That sounds alright to me, though I do think we should probably still\n> toss a CFI (or two) in this path somewhere as we don't know how long\n> some of these functions might take...\n\nYeah, of course. I'm still leaning to doing that in TS_execute_recurse.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 30 Jul 2020 10:37:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing CFI in hlCover()?" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> We could hard-code a rule like that, or we could introduce a new\n> >> explicit parameter for the maximum cover length. The latter would be\n> >> more flexible, but we need something back-patchable and I'm concerned\n> >> about the compatibility hazards of adding a new parameter in minor\n> >> releases. So on the whole I propose hard-wiring a multiplier of,\n> >> say, 10 for both these cases.\n> \n> > That sounds alright to me, though I do think we should probably still\n> > toss a CFI (or two) in this path somewhere as we don't know how long\n> > some of these functions might take...\n> \n> Yeah, of course. I'm still leaning to doing that in TS_execute_recurse.\n\nWorks for me.\n\nThanks!\n\nStephen", "msg_date": "Thu, 30 Jul 2020 10:37:56 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": true, "msg_subject": "Re: Missing CFI in hlCover()?" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Stephen Frost <sfrost@snowman.net> writes:\n>>> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>>>> We could hard-code a rule like that, or we could introduce a new\n>>>> explicit parameter for the maximum cover length. The latter would be\n>>>> more flexible, but we need something back-patchable and I'm concerned\n>>>> about the compatibility hazards of adding a new parameter in minor\n>>>> releases. So on the whole I propose hard-wiring a multiplier of,\n>>>> say, 10 for both these cases.\n\n>>> That sounds alright to me, though I do think we should probably still\n>>> toss a CFI (or two) in this path somewhere as we don't know how long\n>>> some of these functions might take...\n\n>> Yeah, of course. 
I'm still leaning to doing that in TS_execute_recurse.\n\n> Works for me.\n\nHere's a proposed patch along that line.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 30 Jul 2020 17:42:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing CFI in hlCover()?" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> Stephen Frost <sfrost@snowman.net> writes:\n> >>> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >>>> We could hard-code a rule like that, or we could introduce a new\n> >>>> explicit parameter for the maximum cover length. The latter would be\n> >>>> more flexible, but we need something back-patchable and I'm concerned\n> >>>> about the compatibility hazards of adding a new parameter in minor\n> >>>> releases. So on the whole I propose hard-wiring a multiplier of,\n> >>>> say, 10 for both these cases.\n> \n> >>> That sounds alright to me, though I do think we should probably still\n> >>> toss a CFI (or two) in this path somewhere as we don't know how long\n> >>> some of these functions might take...\n> \n> >> Yeah, of course. I'm still leaning to doing that in TS_execute_recurse.\n> \n> > Works for me.\n> \n> Here's a proposed patch along that line.\n\nI've back-patched this to 11 (which was just a bit of fuzz) and tested\nit out with a couple of different queries that were causing issues\npreviously on the archive server, and they finish in a much more\nreasonable time and react faster to cancel requests/signals.\n\nIf you'd like to play with it more, the PG11 installed on ark2 now has\nthis patch built into it.\n\nSo, looks good to me, and would certainly be nice to get this into the\nnext set of releases, so the archive server doesn't get stuck anymore.\n:)\n\nThanks!\n\nStephen", "msg_date": "Thu, 30 Jul 2020 19:50:34 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": true, "msg_subject": "Re: Missing CFI in hlCover()?" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Here's a proposed patch along that line.\n\n> I've back-patched this to 11 (which was just a bit of fuzz) and tested\n> it out with a couple of different queries that were causing issues\n> previously on the archive server, and they finish in a much more\n> reasonable time and react faster to cancel requests/signals.\n\nYeah, I'd tried this locally using the data from the one test case you\nshowed me, and it seemed to fix that.\n\n> So, looks good to me, and would certainly be nice to get this into the\n> next set of releases, so the archive server doesn't get stuck anymore.\n\nI'll push this tomorrow if nobody has objected to it.\n\nBTW, I had noticed last night that hlFirstIndex is being unreasonably\nstupid. Many of the \"words\" have null item pointers and hence can't\npossibly match any query item (I think because we have \"words\" for\ninter-word spaces/punctuation as well as the actual words). Checking\nthat, as in the attached v2 patch, makes things a bit faster yet.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 30 Jul 2020 21:25:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing CFI in hlCover()?" 
}, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> Here's a proposed patch along that line.\n> \n> > I've back-patched this to 11 (which was just a bit of fuzz) and tested\n> > it out with a couple of different queries that were causing issues\n> > previously on the archive server, and they finish in a much more\n> > reasonable time and react faster to cancel requests/signals.\n> \n> Yeah, I'd tried this locally using the data from the one test case you\n> showed me, and it seemed to fix that.\n\nGood stuff.\n\n> > So, looks good to me, and would certainly be nice to get this into the\n> > next set of releases, so the archive server doesn't get stuck anymore.\n> \n> I'll push this tomorrow if nobody has objected to it.\n\nSounds good.\n\n> BTW, I had noticed last night that hlFirstIndex is being unreasonably\n> stupid. Many of the \"words\" have null item pointers and hence can't\n> possibly match any query item (I think because we have \"words\" for\n> inter-word spaces/punctuation as well as the actual words). Checking\n> that, as in the attached v2 patch, makes things a bit faster yet.\n\nNice, looks good to me.\n\nThanks!\n\nStephen", "msg_date": "Fri, 31 Jul 2020 08:36:25 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": true, "msg_subject": "Re: Missing CFI in hlCover()?" } ]
[ { "msg_contents": "I went through the system's built-in implicit coercions to see\nwhich ones are unconditionally successful. These could all be\nmarked leakproof, as per attached patch. This came up in the\ncontext of the nearby discussion about CASE, but it seems like\nan independent improvement. If you have a function f(int8)\nthat is leakproof, you don't want it to effectively become\nnon-leakproof when you apply it to an int4 or int2 column.\n\nOne that I didn't mark leakproof is rtrim1(), which is the\ninfrastructure for char(n) to text coercion. It looks like it\nactually does qualify right now, but the code is long enough and\ncomplex enough that I think such a marking would be a bit unsafe.\n\nAny objections?\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 24 Jul 2020 12:17:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Mark unconditionally-safe implicit coercions as leakproof" }, { "msg_contents": "On Fri, Jul 24, 2020 at 12:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I went through the system's built-in implicit coercions to see\n> which ones are unconditionally successful. These could all be\n> marked leakproof, as per attached patch. This came up in the\n> context of the nearby discussion about CASE, but it seems like\n> an independent improvement. If you have a function f(int8)\n> that is leakproof, you don't want it to effectively become\n> non-leakproof when you apply it to an int4 or int2 column.\n>\n> One that I didn't mark leakproof is rtrim1(), which is the\n> infrastructure for char(n) to text coercion. It looks like it\n> actually does qualify right now, but the code is long enough and\n> complex enough that I think such a marking would be a bit unsafe.\n>\n> Any objections?\n\nIMHO, this is a nice improvement.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 24 Jul 2020 12:32:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Mark unconditionally-safe implicit coercions as leakproof" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jul 24, 2020 at 12:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I went through the system's built-in implicit coercions to see\n>> which ones are unconditionally successful. These could all be\n>> marked leakproof, as per attached patch.\n\n> IMHO, this is a nice improvement.\n\nThanks; pushed. On second reading I found that there are a few\nnon-implicit coercions that could usefully be marked leakproof\nas well --- notably float4_numeric and float8_numeric, which should\nbe error-free now that infinities can be converted.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 25 Jul 2020 12:57:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Mark unconditionally-safe implicit coercions as leakproof" } ]
[ { "msg_contents": "Latest Postgres\nWindows 64 bits\nmsvc 2019 64 bits\n\nPatches applied v12-0001 to v12-0007:\n\n C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,28): warning C4013:\n'GetOldestXmin' indefinido; assumindo extern retornando int\n[C:\\dll\\postgres\nC:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,29): warning\nC4013: 'GetOldestXmin' indefinido; assumindo extern retornando int\n[C:\\dll\\postgres\\pg_visibility.\nvcxproj]\n C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,56): error C2065:\n'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n[C:\\dll\\postgres\\pgstattuple.vcxproj]\n C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,58): error\nC2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n[C:\\dll\\postgres\\pg_visibility.vcxproj]\n C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(686,70): error\nC2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n[C:\\dll\\postgres\\pg_visibility.vcxproj]\n\nregards,\nRanier Vilela\n\nLatest PostgresWindows 64 bitsmsvc 2019 64 bitsPatches applied v12-0001 to v12-0007: C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,28): warning C4013: 'GetOldestXmin' indefinido; assumindo extern retornando int [C:\\dll\\postgres  C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,29): warning C4013: 'GetOldestXmin' indefinido; assumindo extern retornando int [C:\\dll\\postgres\\pg_visibility.vcxproj] C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,56): error C2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado [C:\\dll\\postgres\\pgstattuple.vcxproj]  C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,58): error C2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado [C:\\dll\\postgres\\pg_visibility.vcxproj]  C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(686,70): error C2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado [C:\\dll\\postgres\\pg_visibility.vcxproj]regards,Ranier Vilela", "msg_date": "Fri, 24 Jul 2020 14:05:04 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 2020-07-24 14:05:04 -0300, Ranier Vilela wrote:\n> Latest Postgres\n> Windows 64 bits\n> msvc 2019 64 bits\n> \n> Patches applied v12-0001 to v12-0007:\n> \n> C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,28): warning C4013:\n> 'GetOldestXmin' indefinido; assumindo extern retornando int\n> [C:\\dll\\postgres\n> C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,29): warning\n> C4013: 'GetOldestXmin' indefinido; assumindo extern retornando int\n> [C:\\dll\\postgres\\pg_visibility.\n> vcxproj]\n> C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,56): error C2065:\n> 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> [C:\\dll\\postgres\\pgstattuple.vcxproj]\n> C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,58): error\n> C2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> [C:\\dll\\postgres\\pg_visibility.vcxproj]\n> C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(686,70): error\n> C2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> [C:\\dll\\postgres\\pg_visibility.vcxproj]\n\nI don't know that's about - there's no call to GetOldestXmin() in\npgstatapprox and pg_visibility after patch 0002? 
And similarly, the\nPROCARRAY_* references are also removed in the same patch?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 24 Jul 2020 10:16:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Em sex., 24 de jul. de 2020 às 14:16, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> On 2020-07-24 14:05:04 -0300, Ranier Vilela wrote:\n> > Latest Postgres\n> > Windows 64 bits\n> > msvc 2019 64 bits\n> >\n> > Patches applied v12-0001 to v12-0007:\n> >\n> > C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,28): warning\n> C4013:\n> > 'GetOldestXmin' indefinido; assumindo extern retornando int\n> > [C:\\dll\\postgres\n> > C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,29): warning\n> > C4013: 'GetOldestXmin' indefinido; assumindo extern retornando int\n> > [C:\\dll\\postgres\\pg_visibility.\n> > vcxproj]\n> > C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,56): error C2065:\n> > 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> > [C:\\dll\\postgres\\pgstattuple.vcxproj]\n> > C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,58): error\n> > C2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> > [C:\\dll\\postgres\\pg_visibility.vcxproj]\n> > C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(686,70): error\n> > C2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> > [C:\\dll\\postgres\\pg_visibility.vcxproj]\n>\n> I don't know that's about - there's no call to GetOldestXmin() in\n> pgstatapprox and pg_visibility after patch 0002? And similarly, the\n> PROCARRAY_* references are also removed in the same patch?\n>\nMaybe need to remove them from these places, not?\nC:\\dll\\postgres\\contrib>grep -d GetOldestXmin *.c\nFile pgstattuple\\pgstatapprox.c:\n OldestXmin = GetOldestXmin(rel, PROCARRAY_FLAGS_VACUUM);\nFile pg_visibility\\pg_visibility.c:\n OldestXmin = GetOldestXmin(NULL, PROCARRAY_FLAGS_VACUUM);\n * deadlocks, because surely\nGetOldestXmin() should never take\n RecomputedOldestXmin = GetOldestXmin(NULL,\nPROCARRAY_FLAGS_VACUUM);\n\nregards,\nRanier Vilela\n\nEm sex., 24 de jul. de 2020 às 14:16, Andres Freund <andres@anarazel.de> escreveu:On 2020-07-24 14:05:04 -0300, Ranier Vilela wrote:\n> Latest Postgres\n> Windows 64 bits\n> msvc 2019 64 bits\n> \n> Patches applied v12-0001 to v12-0007:\n> \n>  C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,28): warning C4013:\n> 'GetOldestXmin' indefinido; assumindo extern retornando int\n> [C:\\dll\\postgres\n> C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,29): warning\n> C4013: 'GetOldestXmin' indefinido; assumindo extern retornando int\n> [C:\\dll\\postgres\\pg_visibility.\n> vcxproj]\n>  C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,56): error C2065:\n> 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> [C:\\dll\\postgres\\pgstattuple.vcxproj]\n>   C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,58): error\n> C2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> [C:\\dll\\postgres\\pg_visibility.vcxproj]\n>   C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(686,70): error\n> C2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> [C:\\dll\\postgres\\pg_visibility.vcxproj]\n\nI don't know that's about - there's no call to GetOldestXmin() in\npgstatapprox and pg_visibility after patch 0002? 
And similarly, the\nPROCARRAY_* references are also removed in the same patch?Maybe need to remove them from these places, not? C:\\dll\\postgres\\contrib>grep -d GetOldestXmin *.cFile pgstattuple\\pgstatapprox.c:        OldestXmin = GetOldestXmin(rel, PROCARRAY_FLAGS_VACUUM);File pg_visibility\\pg_visibility.c:                OldestXmin = GetOldestXmin(NULL, PROCARRAY_FLAGS_VACUUM);                                 * deadlocks, because surely GetOldestXmin() should never take                                RecomputedOldestXmin = GetOldestXmin(NULL, PROCARRAY_FLAGS_VACUUM);regards,Ranier Vilela", "msg_date": "Fri, 24 Jul 2020 18:15:15 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 2020-07-24 18:15:15 -0300, Ranier Vilela wrote:\n> Em sex., 24 de jul. de 2020 �s 14:16, Andres Freund <andres@anarazel.de>\n> escreveu:\n> \n> > On 2020-07-24 14:05:04 -0300, Ranier Vilela wrote:\n> > > Latest Postgres\n> > > Windows 64 bits\n> > > msvc 2019 64 bits\n> > >\n> > > Patches applied v12-0001 to v12-0007:\n> > >\n> > > C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,28): warning\n> > C4013:\n> > > 'GetOldestXmin' indefinido; assumindo extern retornando int\n> > > [C:\\dll\\postgres\n> > > C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,29): warning\n> > > C4013: 'GetOldestXmin' indefinido; assumindo extern retornando int\n> > > [C:\\dll\\postgres\\pg_visibility.\n> > > vcxproj]\n> > > C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,56): error C2065:\n> > > 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> > > [C:\\dll\\postgres\\pgstattuple.vcxproj]\n> > > C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,58): error\n> > > C2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> > > [C:\\dll\\postgres\\pg_visibility.vcxproj]\n> > > C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(686,70): error\n> > > C2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> > > [C:\\dll\\postgres\\pg_visibility.vcxproj]\n> >\n> > I don't know that's about - there's no call to GetOldestXmin() in\n> > pgstatapprox and pg_visibility after patch 0002? And similarly, the\n> > PROCARRAY_* references are also removed in the same patch?\n> >\n> Maybe need to remove them from these places, not?\n> C:\\dll\\postgres\\contrib>grep -d GetOldestXmin *.c\n> File pgstattuple\\pgstatapprox.c:\n> OldestXmin = GetOldestXmin(rel, PROCARRAY_FLAGS_VACUUM);\n> File pg_visibility\\pg_visibility.c:\n> OldestXmin = GetOldestXmin(NULL, PROCARRAY_FLAGS_VACUUM);\n> * deadlocks, because surely\n> GetOldestXmin() should never take\n> RecomputedOldestXmin = GetOldestXmin(NULL,\n> PROCARRAY_FLAGS_VACUUM);\n\nThe 0002 patch changed those files:\n\ndiff --git a/contrib/pg_visibility/pg_visibility.c b/contrib/pg_visibility/pg_visibility.c\nindex 68d580ed1e0..37206c50a21 100644\n--- a/contrib/pg_visibility/pg_visibility.c\n+++ b/contrib/pg_visibility/pg_visibility.c\n@@ -563,17 +563,14 @@ collect_corrupt_items(Oid relid, bool all_visible, bool all_frozen)\n \tBufferAccessStrategy bstrategy = GetAccessStrategy(BAS_BULKREAD);\n \tTransactionId OldestXmin = InvalidTransactionId;\n \n-\tif (all_visible)\n-\t{\n-\t\t/* Don't pass rel; that will fail in recovery. 
*/\n-\t\tOldestXmin = GetOldestXmin(NULL, PROCARRAY_FLAGS_VACUUM);\n-\t}\n-\n \trel = relation_open(relid, AccessShareLock);\n \n \t/* Only some relkinds have a visibility map */\n \tcheck_relation_relkind(rel);\n \n+\tif (all_visible)\n+\t\tOldestXmin = GetOldestNonRemovableTransactionId(rel);\n+\n \tnblocks = RelationGetNumberOfBlocks(rel);\n \n \t/*\n@@ -679,11 +676,12 @@ collect_corrupt_items(Oid relid, bool all_visible, bool all_frozen)\n \t\t\t\t * From a concurrency point of view, it sort of sucks to\n \t\t\t\t * retake ProcArrayLock here while we're holding the buffer\n \t\t\t\t * exclusively locked, but it should be safe against\n-\t\t\t\t * deadlocks, because surely GetOldestXmin() should never take\n-\t\t\t\t * a buffer lock. And this shouldn't happen often, so it's\n-\t\t\t\t * worth being careful so as to avoid false positives.\n+\t\t\t\t * deadlocks, because surely GetOldestNonRemovableTransactionId()\n+\t\t\t\t * should never take a buffer lock. And this shouldn't happen\n+\t\t\t\t * often, so it's worth being careful so as to avoid false\n+\t\t\t\t * positives.\n \t\t\t\t */\n-\t\t\t\tRecomputedOldestXmin = GetOldestXmin(NULL, PROCARRAY_FLAGS_VACUUM);\n+\t\t\t\tRecomputedOldestXmin = GetOldestNonRemovableTransactionId(rel);\n \n \t\t\t\tif (!TransactionIdPrecedes(OldestXmin, RecomputedOldestXmin))\n \t\t\t\t\trecord_corrupt_item(items, &tuple.t_self);\n\ndiff --git a/contrib/pgstattuple/pgstatapprox.c b/contrib/pgstattuple/pgstatapprox.c\nindex dbc0fa11f61..3a99333d443 100644\n--- a/contrib/pgstattuple/pgstatapprox.c\n+++ b/contrib/pgstattuple/pgstatapprox.c\n@@ -71,7 +71,7 @@ statapprox_heap(Relation rel, output_type *stat)\n \tBufferAccessStrategy bstrategy;\n \tTransactionId OldestXmin;\n \n-\tOldestXmin = GetOldestXmin(rel, PROCARRAY_FLAGS_VACUUM);\n+\tOldestXmin = GetOldestNonRemovableTransactionId(rel);\n \tbstrategy = GetAccessStrategy(BAS_BULKREAD);\n \n \tnblocks = RelationGetNumberOfBlocks(rel);\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 24 Jul 2020 17:00:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Em sex., 24 de jul. de 2020 às 21:00, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> On 2020-07-24 18:15:15 -0300, Ranier Vilela wrote:\n> > Em sex., 24 de jul. 
de 2020 às 14:16, Andres Freund <andres@anarazel.de>\n> > escreveu:\n> >\n> > > On 2020-07-24 14:05:04 -0300, Ranier Vilela wrote:\n> > > > Latest Postgres\n> > > > Windows 64 bits\n> > > > msvc 2019 64 bits\n> > > >\n> > > > Patches applied v12-0001 to v12-0007:\n> > > >\n> > > > C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,28): warning\n> > > C4013:\n> > > > 'GetOldestXmin' indefinido; assumindo extern retornando int\n> > > > [C:\\dll\\postgres\n> > > > C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,29):\n> warning\n> > > > C4013: 'GetOldestXmin' indefinido; assumindo extern retornando int\n> > > > [C:\\dll\\postgres\\pg_visibility.\n> > > > vcxproj]\n> > > > C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,56): error\n> C2065:\n> > > > 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> > > > [C:\\dll\\postgres\\pgstattuple.vcxproj]\n> > > > C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,58):\n> error\n> > > > C2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> > > > [C:\\dll\\postgres\\pg_visibility.vcxproj]\n> > > > C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(686,70):\n> error\n> > > > C2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> > > > [C:\\dll\\postgres\\pg_visibility.vcxproj]\n> > >\n> > > I don't know that's about - there's no call to GetOldestXmin() in\n> > > pgstatapprox and pg_visibility after patch 0002? And similarly, the\n> > > PROCARRAY_* references are also removed in the same patch?\n> > >\n> > Maybe need to remove them from these places, not?\n> > C:\\dll\\postgres\\contrib>grep -d GetOldestXmin *.c\n> > File pgstattuple\\pgstatapprox.c:\n> > OldestXmin = GetOldestXmin(rel, PROCARRAY_FLAGS_VACUUM);\n> > File pg_visibility\\pg_visibility.c:\n> > OldestXmin = GetOldestXmin(NULL, PROCARRAY_FLAGS_VACUUM);\n> > * deadlocks, because surely\n> > GetOldestXmin() should never take\n> > RecomputedOldestXmin =\n> GetOldestXmin(NULL,\n> > PROCARRAY_FLAGS_VACUUM);\n>\n> The 0002 patch changed those files:\n>\n> diff --git a/contrib/pg_visibility/pg_visibility.c\n> b/contrib/pg_visibility/pg_visibility.c\n> index 68d580ed1e0..37206c50a21 100644\n> --- a/contrib/pg_visibility/pg_visibility.c\n> +++ b/contrib/pg_visibility/pg_visibility.c\n> @@ -563,17 +563,14 @@ collect_corrupt_items(Oid relid, bool all_visible,\n> bool all_frozen)\n> BufferAccessStrategy bstrategy = GetAccessStrategy(BAS_BULKREAD);\n> TransactionId OldestXmin = InvalidTransactionId;\n>\n> - if (all_visible)\n> - {\n> - /* Don't pass rel; that will fail in recovery. */\n> - OldestXmin = GetOldestXmin(NULL, PROCARRAY_FLAGS_VACUUM);\n> - }\n> -\n> rel = relation_open(relid, AccessShareLock);\n>\n> /* Only some relkinds have a visibility map */\n> check_relation_relkind(rel);\n>\n> + if (all_visible)\n> + OldestXmin = GetOldestNonRemovableTransactionId(rel);\n> +\n> nblocks = RelationGetNumberOfBlocks(rel);\n>\n> /*\n> @@ -679,11 +676,12 @@ collect_corrupt_items(Oid relid, bool all_visible,\n> bool all_frozen)\n> * From a concurrency point of view, it\n> sort of sucks to\n> * retake ProcArrayLock here while we're\n> holding the buffer\n> * exclusively locked, but it should be\n> safe against\n> - * deadlocks, because surely\n> GetOldestXmin() should never take\n> - * a buffer lock. And this shouldn't\n> happen often, so it's\n> - * worth being careful so as to avoid\n> false positives.\n> + * deadlocks, because surely\n> GetOldestNonRemovableTransactionId()\n> + * should never take a buffer lock. 
And\n> this shouldn't happen\n> + * often, so it's worth being careful so\n> as to avoid false\n> + * positives.\n> */\n> - RecomputedOldestXmin = GetOldestXmin(NULL,\n> PROCARRAY_FLAGS_VACUUM);\n> + RecomputedOldestXmin =\n> GetOldestNonRemovableTransactionId(rel);\n>\n> if (!TransactionIdPrecedes(OldestXmin,\n> RecomputedOldestXmin))\n> record_corrupt_item(items,\n> &tuple.t_self);\n>\n> diff --git a/contrib/pgstattuple/pgstatapprox.c\n> b/contrib/pgstattuple/pgstatapprox.c\n> index dbc0fa11f61..3a99333d443 100644\n> --- a/contrib/pgstattuple/pgstatapprox.c\n> +++ b/contrib/pgstattuple/pgstatapprox.c\n> @@ -71,7 +71,7 @@ statapprox_heap(Relation rel, output_type *stat)\n> BufferAccessStrategy bstrategy;\n> TransactionId OldestXmin;\n>\n> - OldestXmin = GetOldestXmin(rel, PROCARRAY_FLAGS_VACUUM);\n> + OldestXmin = GetOldestNonRemovableTransactionId(rel);\n> bstrategy = GetAccessStrategy(BAS_BULKREAD);\n>\n> nblocks = RelationGetNumberOfBlocks(rel);\n>\n> Obviously, the\nv12-0002-snapshot-scalability-Don-t-compute-global-horizo.patch patch needs\nto be rebased.\nhttps://github.com/postgres/postgres/blob/master/contrib/pg_visibility/pg_visibility.c\n\n1:\nif (all_visible)\n{\n/ * Don't pass rel; that will fail in recovery. * /\nOldestXmin = GetOldestXmin (NULL, PROCARRAY_FLAGS_VACUUM);\n}\nIt is on line 566 in the current version of git, while the patch is on line\n563.\n\n2:\n* deadlocks, because surely GetOldestXmin () should never take\n* a buffer lock. And this shouldn't happen often, so it's\n* worth being careful so as to avoid false positives.\n* /\nIt is currently on line 682, while in the patch it is on line 679.\n\nregards,\nRanier Vilela\n\nEm sex., 24 de jul. de 2020 às 21:00, Andres Freund <andres@anarazel.de> escreveu:On 2020-07-24 18:15:15 -0300, Ranier Vilela wrote:\n> Em sex., 24 de jul. de 2020 às 14:16, Andres Freund <andres@anarazel.de>\n> escreveu:\n> \n> > On 2020-07-24 14:05:04 -0300, Ranier Vilela wrote:\n> > > Latest Postgres\n> > > Windows 64 bits\n> > > msvc 2019 64 bits\n> > >\n> > > Patches applied v12-0001 to v12-0007:\n> > >\n> > >  C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,28): warning\n> > C4013:\n> > > 'GetOldestXmin' indefinido; assumindo extern retornando int\n> > > [C:\\dll\\postgres\n> > > C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,29): warning\n> > > C4013: 'GetOldestXmin' indefinido; assumindo extern retornando int\n> > > [C:\\dll\\postgres\\pg_visibility.\n> > > vcxproj]\n> > >  C:\\dll\\postgres\\contrib\\pgstattuple\\pgstatapprox.c(74,56): error C2065:\n> > > 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> > > [C:\\dll\\postgres\\pgstattuple.vcxproj]\n> > >   C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(569,58): error\n> > > C2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> > > [C:\\dll\\postgres\\pg_visibility.vcxproj]\n> > >   C:\\dll\\postgres\\contrib\\pg_visibility\\pg_visibility.c(686,70): error\n> > > C2065: 'PROCARRAY_FLAGS_VACUUM': identificador nao declarado\n> > > [C:\\dll\\postgres\\pg_visibility.vcxproj]\n> >\n> > I don't know that's about - there's no call to GetOldestXmin() in\n> > pgstatapprox and pg_visibility after patch 0002? 
And similarly, the\n> > PROCARRAY_* references are also removed in the same patch?\n> >\n> Maybe need to remove them from these places, not?\n> C:\\dll\\postgres\\contrib>grep -d GetOldestXmin *.c\n> File pgstattuple\\pgstatapprox.c:\n>         OldestXmin = GetOldestXmin(rel, PROCARRAY_FLAGS_VACUUM);\n> File pg_visibility\\pg_visibility.c:\n>                 OldestXmin = GetOldestXmin(NULL, PROCARRAY_FLAGS_VACUUM);\n>                                  * deadlocks, because surely\n> GetOldestXmin() should never take\n>                                 RecomputedOldestXmin = GetOldestXmin(NULL,\n> PROCARRAY_FLAGS_VACUUM);\n\nThe 0002 patch changed those files:\n\ndiff --git a/contrib/pg_visibility/pg_visibility.c b/contrib/pg_visibility/pg_visibility.c\nindex 68d580ed1e0..37206c50a21 100644\n--- a/contrib/pg_visibility/pg_visibility.c\n+++ b/contrib/pg_visibility/pg_visibility.c\n@@ -563,17 +563,14 @@ collect_corrupt_items(Oid relid, bool all_visible, bool all_frozen)\n        BufferAccessStrategy bstrategy = GetAccessStrategy(BAS_BULKREAD);\n        TransactionId OldestXmin = InvalidTransactionId;\n\n-       if (all_visible)\n-       {\n-               /* Don't pass rel; that will fail in recovery. */\n-               OldestXmin = GetOldestXmin(NULL, PROCARRAY_FLAGS_VACUUM);\n-       }\n-\n        rel = relation_open(relid, AccessShareLock);\n\n        /* Only some relkinds have a visibility map */\n        check_relation_relkind(rel);\n\n+       if (all_visible)\n+               OldestXmin = GetOldestNonRemovableTransactionId(rel);\n+\n        nblocks = RelationGetNumberOfBlocks(rel);\n\n        /*\n@@ -679,11 +676,12 @@ collect_corrupt_items(Oid relid, bool all_visible, bool all_frozen)\n                                 * From a concurrency point of view, it sort of sucks to\n                                 * retake ProcArrayLock here while we're holding the buffer\n                                 * exclusively locked, but it should be safe against\n-                                * deadlocks, because surely GetOldestXmin() should never take\n-                                * a buffer lock. And this shouldn't happen often, so it's\n-                                * worth being careful so as to avoid false positives.\n+                                * deadlocks, because surely GetOldestNonRemovableTransactionId()\n+                                * should never take a buffer lock. 
And this shouldn't happen\n+                                * often, so it's worth being careful so as to avoid false\n+                                * positives.\n                                 */\n-                               RecomputedOldestXmin = GetOldestXmin(NULL, PROCARRAY_FLAGS_VACUUM);\n+                               RecomputedOldestXmin = GetOldestNonRemovableTransactionId(rel);\n\n                                if (!TransactionIdPrecedes(OldestXmin, RecomputedOldestXmin))\n                                        record_corrupt_item(items, &tuple.t_self);\n\ndiff --git a/contrib/pgstattuple/pgstatapprox.c b/contrib/pgstattuple/pgstatapprox.c\nindex dbc0fa11f61..3a99333d443 100644\n--- a/contrib/pgstattuple/pgstatapprox.c\n+++ b/contrib/pgstattuple/pgstatapprox.c\n@@ -71,7 +71,7 @@ statapprox_heap(Relation rel, output_type *stat)\n        BufferAccessStrategy bstrategy;\n        TransactionId OldestXmin;\n\n-       OldestXmin = GetOldestXmin(rel, PROCARRAY_FLAGS_VACUUM);\n+       OldestXmin = GetOldestNonRemovableTransactionId(rel);\n        bstrategy = GetAccessStrategy(BAS_BULKREAD);\n\n        nblocks = RelationGetNumberOfBlocks(rel);\nObviously, the v12-0002-snapshot-scalability-Don-t-compute-global-horizo.patch patch needs to be rebased.\nhttps://github.com/postgres/postgres/blob/master/contrib/pg_visibility/pg_visibility.c\n1:if (all_visible){/ * Don't pass rel; that will fail in recovery. * /OldestXmin = GetOldestXmin (NULL, PROCARRAY_FLAGS_VACUUM);}It is on line 566 in the current version of git, while the patch is on line 563.2:* deadlocks, because surely GetOldestXmin () should never take* a buffer lock. And this shouldn't happen often, so it's* worth being careful so as to avoid false positives.* /It is currently on line 682, while in the patch it is on line 679.regards,Ranier Vilela", "msg_date": "Sat, 25 Jul 2020 09:57:48 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" } ]
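A minimal sketch of the call-site change discussed at the end of the thread above, kept to the two horizon functions that appear verbatim in the quoted 0002 hunks (GetOldestXmin with PROCARRAY_FLAGS_VACUUM before the patch, GetOldestNonRemovableTransactionId after). The wrapper name get_vacuum_horizon() and the exact header list are illustrative assumptions rather than part of the patch, and the fragment is meant to build inside the server tree, not standalone:

#include "postgres.h"
#include "storage/procarray.h"    /* horizon functions (assumed location) */
#include "utils/rel.h"            /* Relation */

/*
 * Sketch only: a contrib-style caller such as pgstatapprox.c moves from
 * the flag-based GetOldestXmin() call to asking for the oldest
 * non-removable xid of the relation being processed.
 */
static TransactionId
get_vacuum_horizon(Relation rel)
{
	/*
	 * Pre-0002 form, removed by the patch:
	 *     return GetOldestXmin(rel, PROCARRAY_FLAGS_VACUUM);
	 */
	return GetOldestNonRemovableTransactionId(rel);
}

This is the same substitution the quoted hunks apply to pg_visibility.c and pgstatapprox.c; the point of the messages above is only that those contrib callers were missed when building with an un-rebased patch set on Windows/MSVC.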
[ { "msg_contents": "Hi hackers,\nWe discussed in another email thread[1], that it will be helpful if we can\ndisplay offset along with block number in vacuum error. Here, proposing a\npatch to add offset along with block number in vacuum errors.\n\nIn commit b61d161(Introduce vacuum errcontext to display additional\ninformation), we added vacuum errcontext to display additional\ninformation(block number) so that in case of vacuum error, we can identify\nwhich block we are getting error. Addition to block number, if we can\ndisplay offset, then it will be more helpful for users. So to display\noffset, here proposing two different methods(Thanks Robert for suggesting\nthese 2 methods):\n\n*Method 1:* We can report the TID as well as the block number in\nerrcontext.\n- errcontext(\"while scanning block %u of relation \\\"%s.%s\\\"\",\n- errinfo->blkno, errinfo->relnamespace, errinfo->relname);\n+ errcontext(\"while scanning block %u and offset %u of relation \\\"%s.%s\\\"\",\n+ errinfo->blkno, errinfo->offnum, errinfo->relnamespace,\nerrinfo->relname);\n\nAbove fix requires more calls to update_vacuum_error_info(). Attaching\nv01_0001 patch for this method.\n\n*Method 2: *We can improve the error messages by passing the relevant TID\nto heap_prepare_freeze_tuple and having it report the TID as part of the\nerror message or in the error detail.\n ereport(ERROR,\n (errcode(ERRCODE_DATA_CORRUPTED),\n- errmsg_internal(\"found xmin %u from before relfrozenxid %u\",\n+ errmsg_internal(\"for block %u and offnum %u, found xmin %u from before\nrelfrozenxid %u\",\n+ ItemPointerGetBlockNumber(tid),\n+ ItemPointerGetOffsetNumber(tid),\n xid, relfrozenxid)));\n\nAttaching v01_0002 patch for this method.\n\nPlease let me know your thoughts.\n\n[1] : http://postgr.es/m/20200713223822.az6fo3m2x4t42xz2@alap3.anarazel.de\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 24 Jul 2020 23:18:43 +0530", "msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": true, "msg_subject": "display offset along with block number in vacuum errors" }, { "msg_contents": "On Fri, Jul 24, 2020 at 11:18:43PM +0530, Mahendra Singh Thalor wrote:\n> In commit b61d161(Introduce vacuum errcontext to display additional\n> information), we added vacuum errcontext to display additional\n> information(block number) so that in case of vacuum error, we can identify\n> which block we are getting error. Addition to block number, if we can\n> display offset, then it will be more helpful for users. So to display\n> offset, here proposing two different methods(Thanks Robert for suggesting\n> these 2 methods):\n\n new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\n vacrelstats->blkno = new_rel_pages;\n+ vacrelstats->offnum = InvalidOffsetNumber;\n\nAdding more context would be interesting for some cases, but not all\ncontrary to what your patch does in some code paths like\nlazy_truncate_heap() as you would show up an offset of 0 for an\ninvalid value. This could confuse some users. 
Note that we are\ncareful enough to not print a context message if the block number is\ninvalid.\n--\nMichael", "msg_date": "Sat, 25 Jul 2020 18:32:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Sat, 25 Jul 2020 at 02:49, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:\n>\n> Hi hackers,\n> We discussed in another email thread[1], that it will be helpful if we can display offset along with block number in vacuum error. Here, proposing a patch to add offset along with block number in vacuum errors.\n>\n> In commit b61d161(Introduce vacuum errcontext to display additional information), we added vacuum errcontext to display additional information(block number) so that in case of vacuum error, we can identify which block we are getting error. Addition to block number, if we can display offset, then it will be more helpful for users. So to display offset, here proposing two different methods(Thanks Robert for suggesting these 2 methods):\n>\n> Method 1: We can report the TID as well as the block number in errcontext.\n> - errcontext(\"while scanning block %u of relation \\\"%s.%s\\\"\",\n> - errinfo->blkno, errinfo->relnamespace, errinfo->relname);\n> + errcontext(\"while scanning block %u and offset %u of relation \\\"%s.%s\\\"\",\n> + errinfo->blkno, errinfo->offnum, errinfo->relnamespace, errinfo->relname);\n>\n> Above fix requires more calls to update_vacuum_error_info(). Attaching v01_0001 patch for this method.\n>\n> Method 2: We can improve the error messages by passing the relevant TID to heap_prepare_freeze_tuple and having it report the TID as part of the error message or in the error detail.\n> ereport(ERROR,\n> (errcode(ERRCODE_DATA_CORRUPTED),\n> - errmsg_internal(\"found xmin %u from before relfrozenxid %u\",\n> + errmsg_internal(\"for block %u and offnum %u, found xmin %u from before relfrozenxid %u\",\n> + ItemPointerGetBlockNumber(tid),\n> + ItemPointerGetOffsetNumber(tid),\n> xid, relfrozenxid)));\n>\n> Attaching v01_0002 patch for this method.\n>\n> Please let me know your thoughts.\n>\n\n+1 for adding offset in error messages.\n\nI had a look at 0001 patch. 
You've set the vacuum error info but I\nthink an error won't happen during setting itemids unused:\n\n@@ -1924,14 +1932,22 @@ lazy_vacuum_page(Relation onerel, BlockNumber\nblkno, Buffer buffer,\n BlockNumber tblk;\n OffsetNumber toff;\n ItemId itemid;\n+ LVSavedErrInfo loc_saved_err_info;\n\n tblk =\nItemPointerGetBlockNumber(&dead_tuples->itemptrs[tupindex]);\n if (tblk != blkno)\n break; /* past end of\ntuples for this block */\n toff =\nItemPointerGetOffsetNumber(&dead_tuples->itemptrs[tupindex]);\n+\n+ /* Update error traceback information */\n+ update_vacuum_error_info(vacrelstats,\n&loc_saved_err_info, VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n+ blkno, toff);\n itemid = PageGetItemId(page, toff);\n ItemIdSetUnused(itemid);\n unused[uncnt++] = toff;\n+\n+ /* Revert to the previous phase information for error\ntraceback */\n+ restore_vacuum_error_info(vacrelstats, &loc_saved_err_info);\n }\n\n PageRepairFragmentation(page);\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 27 Jul 2020 16:34:34 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "Thanks Michael for looking into this.\n\nOn Sat, 25 Jul 2020 at 15:02, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jul 24, 2020 at 11:18:43PM +0530, Mahendra Singh Thalor wrote:\n> > In commit b61d161(Introduce vacuum errcontext to display additional\n> > information), we added vacuum errcontext to display additional\n> > information(block number) so that in case of vacuum error, we can identify\n> > which block we are getting error. Addition to block number, if we can\n> > display offset, then it will be more helpful for users. So to display\n> > offset, here proposing two different methods(Thanks Robert for suggesting\n> > these 2 methods):\n>\n> new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\n> vacrelstats->blkno = new_rel_pages;\n> + vacrelstats->offnum = InvalidOffsetNumber;\n>\n> Adding more context would be interesting for some cases, but not all\n> contrary to what your patch does in some code paths like\n> lazy_truncate_heap() as you would show up an offset of 0 for an\n> invalid value. This could confuse some users. Note that we are\n> careful enough to not print a context message if the block number is\n> invalid.\n\nOkay. I agree with you. In case of inavlid offset, we can skip offset\nprinting. I will do this change in the next patch.\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Jul 2020 13:15:09 +0530", "msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": true, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Fri, Jul 24, 2020 at 11:18:43PM +0530, Mahendra Singh Thalor wrote:\n> Hi hackers,\n> We discussed in another email thread[1], that it will be helpful if we can\n> display offset along with block number in vacuum error. Here, proposing a\n> patch to add offset along with block number in vacuum errors.\n\nThanks. 
I happened to see both threads, only by chance.\n\nI'd already started writing the same as your 0001, which is essentially the\nsame as yours.\n\nHere:\n\n@@ -1924,14 +1932,22 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\n \t\tBlockNumber tblk;\n \t\tOffsetNumber toff;\n \t\tItemId\t\titemid;\n+\t\tLVSavedErrInfo loc_saved_err_info;\n \n \t\ttblk = ItemPointerGetBlockNumber(&dead_tuples->itemptrs[tupindex]);\n \t\tif (tblk != blkno)\n \t\t\tbreak;\t\t\t\t/* past end of tuples for this block */\n \t\ttoff = ItemPointerGetOffsetNumber(&dead_tuples->itemptrs[tupindex]);\n+\n+\t\t/* Update error traceback information */\n+\t\tupdate_vacuum_error_info(vacrelstats, &loc_saved_err_info, VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n+\t\t\t\t\t\t\t\t blkno, toff);\n \t\titemid = PageGetItemId(page, toff);\n \t\tItemIdSetUnused(itemid);\n \t\tunused[uncnt++] = toff;\n+\n+\t\t/* Revert to the previous phase information for error traceback */\n+\t\trestore_vacuum_error_info(vacrelstats, &loc_saved_err_info);\n \t}\n\nI'm not sure why you use restore_vacuum_error_info() at all. It's already\ncalled at the end of lazy_vacuum_page() (and others) to allow functions to\nclean up after their own state changes, rather than requiring callers to do it.\nI don't think you should use it in a loop, nor introduce another\nLVSavedErrInfo.\n\nSince phase and blkno are already set, I think you should just set\nvacrelstats->offnum = toff, rather than calling update_vacuum_error_info().\nSimilar to whats done in lazy_vacuum_heap():\n\n tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples->itemptrs[tupindex]);\n vacrelstats->blkno = tblk;\n\nI think you should also do:\n\n@@ -2976,6 +2984,7 @@ heap_page_is_all_visible(Relation rel, Buffer buf,\n ItemId itemid;\n HeapTupleData tuple;\n \n+ vacrelstats->offset = offnum;\n\nI'm not sure, but maybe you'd also want to do the same in more places:\n\n@@ -2024,6 +2030,7 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)\n@@ -2790,6 +2797,7 @@ count_nondeletable_pages(Relation onerel, LVRelStats *vacrelstats)\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 27 Jul 2020 06:13:52 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "Thanks Justin, Sawada and Michael for reviewing.\n\nOn Mon, 27 Jul 2020 at 16:43, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Jul 24, 2020 at 11:18:43PM +0530, Mahendra Singh Thalor wrote:\n> > Hi hackers,\n> > We discussed in another email thread[1], that it will be helpful if we can\n> > display offset along with block number in vacuum error. Here, proposing a\n> > patch to add offset along with block number in vacuum errors.\n>\n> Thanks. 
I happened to see both threads, only by chance.\n>\n> I'd already started writing the same as your 0001, which is essentially the\n> same as yours.\n>\n> Here:\n>\n> @@ -1924,14 +1932,22 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\n> BlockNumber tblk;\n> OffsetNumber toff;\n> ItemId itemid;\n> + LVSavedErrInfo loc_saved_err_info;\n>\n> tblk = ItemPointerGetBlockNumber(&dead_tuples->itemptrs[tupindex]);\n> if (tblk != blkno)\n> break; /* past end of tuples for this block */\n> toff = ItemPointerGetOffsetNumber(&dead_tuples->itemptrs[tupindex]);\n> +\n> + /* Update error traceback information */\n> + update_vacuum_error_info(vacrelstats, &loc_saved_err_info, VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n> + blkno, toff);\n> itemid = PageGetItemId(page, toff);\n> ItemIdSetUnused(itemid);\n> unused[uncnt++] = toff;\n> +\n> + /* Revert to the previous phase information for error traceback */\n> + restore_vacuum_error_info(vacrelstats, &loc_saved_err_info);\n> }\n>\n> I'm not sure why you use restore_vacuum_error_info() at all. It's already\n> called at the end of lazy_vacuum_page() (and others) to allow functions to\n> clean up after their own state changes, rather than requiring callers to do it.\n> I don't think you should use it in a loop, nor introduce another\n> LVSavedErrInfo.\n>\n> Since phase and blkno are already set, I think you should just set\n> vacrelstats->offnum = toff, rather than calling update_vacuum_error_info().\n> Similar to whats done in lazy_vacuum_heap():\n>\n> tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples->itemptrs[tupindex]);\n> vacrelstats->blkno = tblk;\n\nFixed.\n\n>\n> I think you should also do:\n>\n> @@ -2976,6 +2984,7 @@ heap_page_is_all_visible(Relation rel, Buffer buf,\n> ItemId itemid;\n> HeapTupleData tuple;\n>\n> + vacrelstats->offset = offnum;\n\nAgreed and fixed.\n\n>\n> I'm not sure, but maybe you'd also want to do the same in more places:\n>\n> @@ -2024,6 +2030,7 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)\n\nFixed.\n\n> @@ -2790,6 +2797,7 @@ count_nondeletable_pages(Relation onerel, LVRelStats *vacrelstats)\n\nI checked the code and I think there is no need to do similar changes\nin count_nondeletable_pages. I will look again and will verify again.\n\nApart from these, I fixed comments given by Sawada and Michael in the\nlatest patch. Attaching v2 patch for review.\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 29 Jul 2020 00:35:17 +0530", "msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": true, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Wed, Jul 29, 2020 at 12:35:17AM +0530, Mahendra Singh Thalor wrote:\n> Apart from these, I fixed comments given by Sawada and Michael in the\n> latest patch. Attaching v2 patch for review.\n\nThanks.\n\nlazy_check_needs_freeze iterates over blocks and this patch changes it to\nupdate vacrelstats. 
I think it should do what\nlazy_{vacuum/cleanup}_heap/page/index do and call update_vacuum_error_info() at\nits beginning (even though only the offset is changed), and then\nrestore_vacuum_error_info() at its end (to \"revert back\" to the item number it\nstarted with).\n\nThe same is true of heap_page_is_all_visible(), except it's only called by\nlazy_vacuum_page(), which already calls restore_vacuum_error_info() a few lines\nlater.\n\nAs for the message:\n\n+ if (OffsetNumberIsValid(errinfo->offnum))\n+ errcontext(\"while scanning block %u and offset %u of relation \\\"%s.%s\\\"\",\n+ errinfo->blkno, errinfo->offnum, errinfo->relnamespace, errinfo->relname);\n+ else\n+ errcontext(\"while scanning block %u of relation \\\"%s.%s\\\"\",\n+ errinfo->blkno, errinfo->relnamespace, errinfo->relname);\n\nI think that may be confusing. A DBA should know what a \"block\" is, but\n\"offset\" sounds like a byte offset, rather than an item number. Here's what\nI'd written. Maybe it should say \"offset number\".\n\n+ errcontext(\"while vacuuming block %u of relation \\\"%s.%s\\\", item offset %u\",\n\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 28 Jul 2020 14:39:19 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "Bcc: \nSubject: Re: display offset along with block number in vacuum errors\nReply-To: \nIn-Reply-To: <CAKYtNApLJjAaRw0UEBBY6G1o0LRZKS7rA5n46BFh+NfwSOycdg@mail.gmail.com>\n\nOn Wed, Jul 29, 2020 at 12:35:17AM +0530, Mahendra Singh Thalor wrote:\n> > Here:\n> >\n> > @@ -1924,14 +1932,22 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\n> > BlockNumber tblk;\n> > OffsetNumber toff;\n> > ItemId itemid;\n> > + LVSavedErrInfo loc_saved_err_info;\n> >\n> > tblk = ItemPointerGetBlockNumber(&dead_tuples->itemptrs[tupindex]);\n> > if (tblk != blkno)\n> > break; /* past end of tuples for this block */\n> > toff = ItemPointerGetOffsetNumber(&dead_tuples->itemptrs[tupindex]);\n> > +\n> > + /* Update error traceback information */\n> > + update_vacuum_error_info(vacrelstats, &loc_saved_err_info, VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n> > + blkno, toff);\n> > itemid = PageGetItemId(page, toff);\n> > ItemIdSetUnused(itemid);\n> > unused[uncnt++] = toff;\n> > +\n> > + /* Revert to the previous phase information for error traceback */\n> > + restore_vacuum_error_info(vacrelstats, &loc_saved_err_info);\n> > }\n> >\n> > I'm not sure why you use restore_vacuum_error_info() at all. It's already\n> > called at the end of lazy_vacuum_page() (and others) to allow functions to\n> > clean up after their own state changes, rather than requiring callers to do it.\n> > I don't think you should use it in a loop, nor introduce another\n> > LVSavedErrInfo.\n> >\n> > Since phase and blkno are already set, I think you should just set\n> > vacrelstats->offnum = toff, rather than calling update_vacuum_error_info().\n> > Similar to whats done in lazy_vacuum_heap():\n> >\n> > tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples->itemptrs[tupindex]);\n> > vacrelstats->blkno = tblk;\n> \n> Fixed.\n\nI rearead this thread and I think the earlier suggestion from Masahiko was\nright. The loop around dead_tuples only does ItemIdSetUnused() which updates\nthe page, which has already been read from disk. 
On my suggestion, your v2\npatch sets offnum directly, but now I think it's not useful to set at all,\nsince the whole page is manipulated by PageRepairFragmentation() and\nlog_heap_clean(). An error there would misleadingly say \"..at offset number\nMM\", but would always show the page's last offset, and not the offset where an\nerror occured.\n\nI added this at:\nhttps://commitfest.postgresql.org/29/2662/\n\nIf anyone is considering this patch for v13, I guess it should be completed by\nnext week.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 31 Jul 2020 16:55:14 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Fri, Jul 31, 2020 at 04:55:14PM -0500, Justin Pryzby wrote:\n> Bcc: \n> Subject: Re: display offset along with block number in vacuum errors\n> Reply-To: \n> In-Reply-To: <CAKYtNApLJjAaRw0UEBBY6G1o0LRZKS7rA5n46BFh+NfwSOycdg@mail.gmail.com>\n\nwhoops\n\n> On Wed, Jul 29, 2020 at 12:35:17AM +0530, Mahendra Singh Thalor wrote:\n> > > Here:\n> > >\n> > > @@ -1924,14 +1932,22 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\n> > > BlockNumber tblk;\n> > > OffsetNumber toff;\n> > > ItemId itemid;\n> > > + LVSavedErrInfo loc_saved_err_info;\n> > >\n> > > tblk = ItemPointerGetBlockNumber(&dead_tuples->itemptrs[tupindex]);\n> > > if (tblk != blkno)\n> > > break; /* past end of tuples for this block */\n> > > toff = ItemPointerGetOffsetNumber(&dead_tuples->itemptrs[tupindex]);\n> > > +\n> > > + /* Update error traceback information */\n> > > + update_vacuum_error_info(vacrelstats, &loc_saved_err_info, VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n> > > + blkno, toff);\n> > > itemid = PageGetItemId(page, toff);\n> > > ItemIdSetUnused(itemid);\n> > > unused[uncnt++] = toff;\n> > > +\n> > > + /* Revert to the previous phase information for error traceback */\n> > > + restore_vacuum_error_info(vacrelstats, &loc_saved_err_info);\n> > > }\n> > >\n> > > I'm not sure why you use restore_vacuum_error_info() at all. It's already\n> > > called at the end of lazy_vacuum_page() (and others) to allow functions to\n> > > clean up after their own state changes, rather than requiring callers to do it.\n> > > I don't think you should use it in a loop, nor introduce another\n> > > LVSavedErrInfo.\n> > >\n> > > Since phase and blkno are already set, I think you should just set\n> > > vacrelstats->offnum = toff, rather than calling update_vacuum_error_info().\n> > > Similar to whats done in lazy_vacuum_heap():\n> > >\n> > > tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples->itemptrs[tupindex]);\n> > > vacrelstats->blkno = tblk;\n> > \n> > Fixed.\n> \n> I rearead this thread and I think the earlier suggestion from Masahiko was\n> right. The loop around dead_tuples only does ItemIdSetUnused() which updates\n> the page, which has already been read from disk. On my suggestion, your v2\n> patch sets offnum directly, but now I think it's not useful to set at all,\n> since the whole page is manipulated by PageRepairFragmentation() and\n> log_heap_clean(). An error there would misleadingly say \"..at offset number\n> MM\", but would always show the page's last offset, and not the offset where an\n> error occured.\n\nThis makes me question whether offset numbers are ever useful during\nVACUUM_HEAP, since the real work is done a page at a time (not tuple) or by\ninternal functions that don't update vacrelstats->offno. 
Note that my initial\nproblem report that led to the errcontext implementation was an ERROR in heap\n*scan* (not vacuum). So an offset number at that point would've been\nsufficient.\nhttps://www.postgresql.org/message-id/20190808012436.GG11185@telsasoft.com\n\nI mentioned that lazy_check_needs_freeze() should save and restore the errinfo,\nso an error in heap_page_prune() (for example) doesn't get the wrong offset\nassociated with it.\n\nSo see the attached changes on top of your v2 patch.\n\npostgres=# DROP TABLE tt; CREATE TABLE tt(a int) WITH (fillfactor=90); INSERT INTO tt SELECT generate_series(1,99999); VACUUM tt; UPDATE tt SET a=1 WHERE ctid='(345,10)'; UPDATE pg_class SET relfrozenxid=(relfrozenxid::text::int + (1<<30))::int::text::xid WHERE oid='tt'::regclass; VACUUM tt;\nERROR: found xmin 1961 from before relfrozenxid 1073743785\nCONTEXT: while scanning block 345 of relation \"public.tt\", item offset 205\n\nHmm.. is it confusing that the block number is 0-indexed but the offset is\n1-indexed ?\n\n-- \nJustin", "msg_date": "Sat, 1 Aug 2020 01:17:21 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "Thanks Justin.\n\nOn Sat, 1 Aug 2020 at 11:47, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Jul 31, 2020 at 04:55:14PM -0500, Justin Pryzby wrote:\n> > Bcc:\n> > Subject: Re: display offset along with block number in vacuum errors\n> > Reply-To:\n> > In-Reply-To: <CAKYtNApLJjAaRw0UEBBY6G1o0LRZKS7rA5n46BFh+NfwSOycdg@mail.gmail.com>\n>\n> whoops\n>\n> > On Wed, Jul 29, 2020 at 12:35:17AM +0530, Mahendra Singh Thalor wrote:\n> > > > Here:\n> > > >\n> > > > @@ -1924,14 +1932,22 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\n> > > > BlockNumber tblk;\n> > > > OffsetNumber toff;\n> > > > ItemId itemid;\n> > > > + LVSavedErrInfo loc_saved_err_info;\n> > > >\n> > > > tblk = ItemPointerGetBlockNumber(&dead_tuples->itemptrs[tupindex]);\n> > > > if (tblk != blkno)\n> > > > break; /* past end of tuples for this block */\n> > > > toff = ItemPointerGetOffsetNumber(&dead_tuples->itemptrs[tupindex]);\n> > > > +\n> > > > + /* Update error traceback information */\n> > > > + update_vacuum_error_info(vacrelstats, &loc_saved_err_info, VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n> > > > + blkno, toff);\n> > > > itemid = PageGetItemId(page, toff);\n> > > > ItemIdSetUnused(itemid);\n> > > > unused[uncnt++] = toff;\n> > > > +\n> > > > + /* Revert to the previous phase information for error traceback */\n> > > > + restore_vacuum_error_info(vacrelstats, &loc_saved_err_info);\n> > > > }\n> > > >\n> > > > I'm not sure why you use restore_vacuum_error_info() at all. It's already\n> > > > called at the end of lazy_vacuum_page() (and others) to allow functions to\n> > > > clean up after their own state changes, rather than requiring callers to do it.\n> > > > I don't think you should use it in a loop, nor introduce another\n> > > > LVSavedErrInfo.\n> > > >\n> > > > Since phase and blkno are already set, I think you should just set\n> > > > vacrelstats->offnum = toff, rather than calling update_vacuum_error_info().\n> > > > Similar to whats done in lazy_vacuum_heap():\n> > > >\n> > > > tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples->itemptrs[tupindex]);\n> > > > vacrelstats->blkno = tblk;\n> > >\n> > > Fixed.\n> >\n> > I rearead this thread and I think the earlier suggestion from Masahiko was\n> > right. 
The loop around dead_tuples only does ItemIdSetUnused() which updates\n> > the page, which has already been read from disk. On my suggestion, your v2\n> > patch sets offnum directly, but now I think it's not useful to set at all,\n> > since the whole page is manipulated by PageRepairFragmentation() and\n> > log_heap_clean(). An error there would misleadingly say \"..at offset number\n> > MM\", but would always show the page's last offset, and not the offset where an\n> > error occured.\n>\n> This makes me question whether offset numbers are ever useful during\n> VACUUM_HEAP, since the real work is done a page at a time (not tuple) or by\n> internal functions that don't update vacrelstats->offno. Note that my initial\n> problem report that led to the errcontext implementation was an ERROR in heap\n> *scan* (not vacuum). So an offset number at that point would've been\n> sufficient.\n> https://www.postgresql.org/message-id/20190808012436.GG11185@telsasoft.com\n>\n> I mentioned that lazy_check_needs_freeze() should save and restore the errinfo,\n> so an error in heap_page_prune() (for example) doesn't get the wrong offset\n> associated with it.\n>\n> So see the attached changes on top of your v2 patch.\n\nActually I was waiting for review comments from committer and other\npeople also and was planning to send a patch after that. I already\nfixed your comments in my offline patch and was waiting for more\ncomments. Anyway, thanks for delta patch.\n\nHere, attaching v3 patch for review.\n\n>\n> postgres=# DROP TABLE tt; CREATE TABLE tt(a int) WITH (fillfactor=90); INSERT INTO tt SELECT generate_series(1,99999); VACUUM tt; UPDATE tt SET a=1 WHERE ctid='(345,10)'; UPDATE pg_class SET relfrozenxid=(relfrozenxid::text::int + (1<<30))::int::text::xid WHERE oid='tt'::regclass; VACUUM tt;\n> ERROR: found xmin 1961 from before relfrozenxid 1073743785\n> CONTEXT: while scanning block 345 of relation \"public.tt\", item offset 205\n>\n> Hmm.. is it confusing that the block number is 0-indexed but the offset is\n> 1-indexed ?\n>\n> --\n> Justin\n\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.c", "msg_date": "Sat, 1 Aug 2020 12:31:53 +0530", "msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": true, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Sat, Aug 01, 2020 at 12:31:53PM +0530, Mahendra Singh Thalor wrote:\n> Actually I was waiting for review comments from committer and other\n> people also and was planning to send a patch after that. I already\n> fixed your comments in my offline patch and was waiting for more\n> comments. Anyway, thanks for delta patch.\n> \n> Here, attaching v3 patch for review.\n\nI wasn't being impatient but I spent enough time thinking about this that it\nmade sense to put it in patch form. 
Your patch has a couple extaneous changes:\n\n case VACUUM_ERRCB_PHASE_VACUUM_HEAP:\n if (BlockNumberIsValid(errinfo->blkno))\n+ {\n errcontext(\"while vacuuming block %u of relation \\\"%s.%s\\\"\",\n errinfo->blkno, errinfo->relnamespace, errinfo->relname);\n+ }\n break;\n \n case VACUUM_ERRCB_PHASE_VACUUM_INDEX:\n@@ -3589,6 +3618,7 @@ vacuum_error_callback(void *arg)\n errinfo->indname, errinfo->relnamespace, errinfo->relname);\n break;\n \n+\n case VACUUM_ERRCB_PHASE_INDEX_CLEANUP:\n errcontext(\"while cleaning up index \\\"%s\\\" of relation \\\"%s.%s\\\"\",\n\nI would get rid of these by doing like: git reset -p HEAD~1 (say \"n\" to most\nhunks and \"y\" to reset just the two, above), then git commit --amend (without\n-a and without pathnames), then git diff will show local changes (including\nthose no-longer-committed hunks), which you can git checkout -p (or similar).\nI'd be interested to hear if there's a better way.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 1 Aug 2020 08:49:10 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Sat, 1 Aug 2020 at 16:02, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:\n>\n> Thanks Justin.\n>\n> On Sat, 1 Aug 2020 at 11:47, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Fri, Jul 31, 2020 at 04:55:14PM -0500, Justin Pryzby wrote:\n> > > Bcc:\n> > > Subject: Re: display offset along with block number in vacuum errors\n> > > Reply-To:\n> > > In-Reply-To: <CAKYtNApLJjAaRw0UEBBY6G1o0LRZKS7rA5n46BFh+NfwSOycdg@mail.gmail.com>\n> >\n> > whoops\n> >\n> > > On Wed, Jul 29, 2020 at 12:35:17AM +0530, Mahendra Singh Thalor wrote:\n> > > > > Here:\n> > > > >\n> > > > > @@ -1924,14 +1932,22 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\n> > > > > BlockNumber tblk;\n> > > > > OffsetNumber toff;\n> > > > > ItemId itemid;\n> > > > > + LVSavedErrInfo loc_saved_err_info;\n> > > > >\n> > > > > tblk = ItemPointerGetBlockNumber(&dead_tuples->itemptrs[tupindex]);\n> > > > > if (tblk != blkno)\n> > > > > break; /* past end of tuples for this block */\n> > > > > toff = ItemPointerGetOffsetNumber(&dead_tuples->itemptrs[tupindex]);\n> > > > > +\n> > > > > + /* Update error traceback information */\n> > > > > + update_vacuum_error_info(vacrelstats, &loc_saved_err_info, VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n> > > > > + blkno, toff);\n> > > > > itemid = PageGetItemId(page, toff);\n> > > > > ItemIdSetUnused(itemid);\n> > > > > unused[uncnt++] = toff;\n> > > > > +\n> > > > > + /* Revert to the previous phase information for error traceback */\n> > > > > + restore_vacuum_error_info(vacrelstats, &loc_saved_err_info);\n> > > > > }\n> > > > >\n> > > > > I'm not sure why you use restore_vacuum_error_info() at all. 
It's already\n> > > > > called at the end of lazy_vacuum_page() (and others) to allow functions to\n> > > > > clean up after their own state changes, rather than requiring callers to do it.\n> > > > > I don't think you should use it in a loop, nor introduce another\n> > > > > LVSavedErrInfo.\n> > > > >\n> > > > > Since phase and blkno are already set, I think you should just set\n> > > > > vacrelstats->offnum = toff, rather than calling update_vacuum_error_info().\n> > > > > Similar to whats done in lazy_vacuum_heap():\n> > > > >\n> > > > > tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples->itemptrs[tupindex]);\n> > > > > vacrelstats->blkno = tblk;\n> > > >\n> > > > Fixed.\n> > >\n> > > I rearead this thread and I think the earlier suggestion from Masahiko was\n> > > right. The loop around dead_tuples only does ItemIdSetUnused() which updates\n> > > the page, which has already been read from disk. On my suggestion, your v2\n> > > patch sets offnum directly, but now I think it's not useful to set at all,\n> > > since the whole page is manipulated by PageRepairFragmentation() and\n> > > log_heap_clean(). An error there would misleadingly say \"..at offset number\n> > > MM\", but would always show the page's last offset, and not the offset where an\n> > > error occured.\n> >\n> > This makes me question whether offset numbers are ever useful during\n> > VACUUM_HEAP, since the real work is done a page at a time (not tuple) or by\n> > internal functions that don't update vacrelstats->offno. Note that my initial\n> > problem report that led to the errcontext implementation was an ERROR in heap\n> > *scan* (not vacuum). So an offset number at that point would've been\n> > sufficient.\n> > https://www.postgresql.org/message-id/20190808012436.GG11185@telsasoft.com\n> >\n> > I mentioned that lazy_check_needs_freeze() should save and restore the errinfo,\n> > so an error in heap_page_prune() (for example) doesn't get the wrong offset\n> > associated with it.\n> >\n> > So see the attached changes on top of your v2 patch.\n>\n> Actually I was waiting for review comments from committer and other\n> people also and was planning to send a patch after that. I already\n> fixed your comments in my offline patch and was waiting for more\n> comments. Anyway, thanks for delta patch.\n>\n> Here, attaching v3 patch for review.\n\nThank you for updating the patch!\n\nHere are my comments on v3 patch:\n\n@@ -2024,6 +2033,11 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)\n if (PageIsNew(page) || PageIsEmpty(page))\n return false;\n\n+ /* Update error traceback information */\n+ update_vacuum_error_info(vacrelstats, &saved_err_info,\n+ VACUUM_ERRCB_PHASE_SCAN_HEAP, vacrelstats->blkno,\n+ InvalidOffsetNumber);\n+\n maxoff = PageGetMaxOffsetNumber(page);\n for (offnum = FirstOffsetNumber;\n offnum <= maxoff;\n\nYou update the error callback phase to VACUUM_ERRCB_PHASE_SCAN_HEAP\nbut I think we're already in that phase. I'm okay with explicitly\nsetting it but on the other hand, we don't set the phase in\nheap_page_is_all_visible(). 
Is there any reason for that?\n\nAlso, since we don't reset vacrelstats->offnum at the end of\nheap_page_is_all_visible(), if an error occurs by the end of\nlazy_vacuum_page(), the caller of heap_page_is_all_visible(), we\nreport the error context with the last offset number in the page,\nmaking the users confused.\n\n---\n@@ -2045,10 +2060,13 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)\n\n if (heap_tuple_needs_freeze(tupleheader, FreezeLimit,\n MultiXactCutoff, buf))\n- return true;\n+ break;\n } /* scan along page */\n\n- return false;\n+ /* Revert to the previous phase information for error traceback */\n+ restore_vacuum_error_info(vacrelstats, &saved_err_info);\n+\n+ return offnum <= maxoff ? true : false;\n }\n\nI think we can write just \"return (offnum <= maxoff)\".\n\n---\n- /* Revert back to the old phase information for error traceback */\n+ /* Revert to the old phase information for error traceback */\n\nIf we want to modify this comment how about the following phrase for\nconsistency with other places?\n\n/* Revert to the previous phase information for error traceback */\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 2 Aug 2020 13:02:42 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Sun, Aug 02, 2020 at 01:02:42PM +0900, Masahiko Sawada wrote:\n> Thank you for updating the patch!\n> \n> Here are my comments on v3 patch:\n> \n> @@ -2024,6 +2033,11 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)\n> if (PageIsNew(page) || PageIsEmpty(page))\n> return false;\n> \n> + /* Update error traceback information */\n> + update_vacuum_error_info(vacrelstats, &saved_err_info,\n> + VACUUM_ERRCB_PHASE_SCAN_HEAP, vacrelstats->blkno,\n> + InvalidOffsetNumber);\n> +\n> maxoff = PageGetMaxOffsetNumber(page);\n> for (offnum = FirstOffsetNumber;\n> offnum <= maxoff;\n> \n> You update the error callback phase to VACUUM_ERRCB_PHASE_SCAN_HEAP\n> but I think we're already in that phase. I'm okay with explicitly\n> setting it but on the other hand, we don't set the phase in\n> heap_page_is_all_visible(). Is there any reason for that?\n\nThat part was my suggestion, so I can answer that. 
I added\nupdate_vacuum_error_info() to lazy_check_needs_freeze() to allow it to later\ncall restore_vacuum_error_info().\n\n> Also, since we don't reset vacrelstats->offnum at the end of\n> heap_page_is_all_visible(), if an error occurs by the end of\n> lazy_vacuum_page(), the caller of heap_page_is_all_visible(), we\n> report the error context with the last offset number in the page,\n> making the users confused.\n\nSo it looks like heap_page_is_all_visible() should also call the update and\nrestore functions.\n\nDo you agree with my suggestion that the VACUUM phase should never try to\nreport an offset ?\n\nHow do you think we can phrase the message to avoid confusion due to 0-based\nblock number and 1-based offset ?\n\nThanks,\n-- \nJustin\n\n\n", "msg_date": "Sat, 1 Aug 2020 23:24:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Sun, 2 Aug 2020 at 13:24, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sun, Aug 02, 2020 at 01:02:42PM +0900, Masahiko Sawada wrote:\n> > Thank you for updating the patch!\n> >\n> > Here are my comments on v3 patch:\n> >\n> > @@ -2024,6 +2033,11 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)\n> > if (PageIsNew(page) || PageIsEmpty(page))\n> > return false;\n> >\n> > + /* Update error traceback information */\n> > + update_vacuum_error_info(vacrelstats, &saved_err_info,\n> > + VACUUM_ERRCB_PHASE_SCAN_HEAP, vacrelstats->blkno,\n> > + InvalidOffsetNumber);\n> > +\n> > maxoff = PageGetMaxOffsetNumber(page);\n> > for (offnum = FirstOffsetNumber;\n> > offnum <= maxoff;\n> >\n> > You update the error callback phase to VACUUM_ERRCB_PHASE_SCAN_HEAP\n> > but I think we're already in that phase. I'm okay with explicitly\n> > setting it but on the other hand, we don't set the phase in\n> > heap_page_is_all_visible(). Is there any reason for that?\n>\n> That part was my suggestion, so I can answer that. I added\n> update_vacuum_error_info() to lazy_check_needs_freeze() to allow it to later\n> call restore_vacuum_error_info().\n>\n> > Also, since we don't reset vacrelstats->offnum at the end of\n> > heap_page_is_all_visible(), if an error occurs by the end of\n> > lazy_vacuum_page(), the caller of heap_page_is_all_visible(), we\n> > report the error context with the last offset number in the page,\n> > making the users confused.\n>\n> So it looks like heap_page_is_all_visible() should also call the update and\n> restore functions.\n>\n> Do you agree with my suggestion that the VACUUM phase should never try to\n> report an offset ?\n\nYeah, given the current heap vacuum implementation, I agree that\nsetting the offset number during VACUUM_HEAP phase doesn't help\nanything. But setting the offset number during checking tuples'\nvisibility in heap_page_is_all_visible() might be useful, although it\nmight be unlikely to find a problem in heap_page_is_all_visible() as\nthe tuple visibility checking is already done in lazy_scan_heap(). I\nwonder if we can set SCAN_HEAP phase and update the offset number in\nheap_page_is_all_visible().\n\n> How do you think we can phrase the message to avoid confusion due to 0-based\n> block number and 1-based offset ?\n\nI think that since the user who uses this errcontext information is\nlikely to know more or less the internal of PostgreSQL I think 0-based\nblock number and 1-based offset will not be a problem. However, I\nexpected these are documented but couldn't find. 
If not yet, I think\nit’s a good time to document that.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 3 Aug 2020 11:42:10 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "Thanks Sawada and Justin.\n\nOn Sun, 2 Aug 2020 at 09:33, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Sat, 1 Aug 2020 at 16:02, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:\n> >\n> > Thanks Justin.\n> >\n> > On Sat, 1 Aug 2020 at 11:47, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > On Fri, Jul 31, 2020 at 04:55:14PM -0500, Justin Pryzby wrote:\n> > > > Bcc:\n> > > > Subject: Re: display offset along with block number in vacuum errors\n> > > > Reply-To:\n> > > > In-Reply-To: <CAKYtNApLJjAaRw0UEBBY6G1o0LRZKS7rA5n46BFh+NfwSOycdg@mail.gmail.com>\n> > >\n> > > whoops\n> > >\n> > > > On Wed, Jul 29, 2020 at 12:35:17AM +0530, Mahendra Singh Thalor wrote:\n> > > > > > Here:\n> > > > > >\n> > > > > > @@ -1924,14 +1932,22 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\n> > > > > > BlockNumber tblk;\n> > > > > > OffsetNumber toff;\n> > > > > > ItemId itemid;\n> > > > > > + LVSavedErrInfo loc_saved_err_info;\n> > > > > >\n> > > > > > tblk = ItemPointerGetBlockNumber(&dead_tuples->itemptrs[tupindex]);\n> > > > > > if (tblk != blkno)\n> > > > > > break; /* past end of tuples for this block */\n> > > > > > toff = ItemPointerGetOffsetNumber(&dead_tuples->itemptrs[tupindex]);\n> > > > > > +\n> > > > > > + /* Update error traceback information */\n> > > > > > + update_vacuum_error_info(vacrelstats, &loc_saved_err_info, VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n> > > > > > + blkno, toff);\n> > > > > > itemid = PageGetItemId(page, toff);\n> > > > > > ItemIdSetUnused(itemid);\n> > > > > > unused[uncnt++] = toff;\n> > > > > > +\n> > > > > > + /* Revert to the previous phase information for error traceback */\n> > > > > > + restore_vacuum_error_info(vacrelstats, &loc_saved_err_info);\n> > > > > > }\n> > > > > >\n> > > > > > I'm not sure why you use restore_vacuum_error_info() at all. It's already\n> > > > > > called at the end of lazy_vacuum_page() (and others) to allow functions to\n> > > > > > clean up after their own state changes, rather than requiring callers to do it.\n> > > > > > I don't think you should use it in a loop, nor introduce another\n> > > > > > LVSavedErrInfo.\n> > > > > >\n> > > > > > Since phase and blkno are already set, I think you should just set\n> > > > > > vacrelstats->offnum = toff, rather than calling update_vacuum_error_info().\n> > > > > > Similar to whats done in lazy_vacuum_heap():\n> > > > > >\n> > > > > > tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples->itemptrs[tupindex]);\n> > > > > > vacrelstats->blkno = tblk;\n> > > > >\n> > > > > Fixed.\n> > > >\n> > > > I rearead this thread and I think the earlier suggestion from Masahiko was\n> > > > right. The loop around dead_tuples only does ItemIdSetUnused() which updates\n> > > > the page, which has already been read from disk. On my suggestion, your v2\n> > > > patch sets offnum directly, but now I think it's not useful to set at all,\n> > > > since the whole page is manipulated by PageRepairFragmentation() and\n> > > > log_heap_clean(). 
An error there would misleadingly say \"..at offset number\n> > > > MM\", but would always show the page's last offset, and not the offset where an\n> > > > error occured.\n> > >\n> > > This makes me question whether offset numbers are ever useful during\n> > > VACUUM_HEAP, since the real work is done a page at a time (not tuple) or by\n> > > internal functions that don't update vacrelstats->offno. Note that my initial\n> > > problem report that led to the errcontext implementation was an ERROR in heap\n> > > *scan* (not vacuum). So an offset number at that point would've been\n> > > sufficient.\n> > > https://www.postgresql.org/message-id/20190808012436.GG11185@telsasoft.com\n> > >\n> > > I mentioned that lazy_check_needs_freeze() should save and restore the errinfo,\n> > > so an error in heap_page_prune() (for example) doesn't get the wrong offset\n> > > associated with it.\n> > >\n> > > So see the attached changes on top of your v2 patch.\n> >\n> > Actually I was waiting for review comments from committer and other\n> > people also and was planning to send a patch after that. I already\n> > fixed your comments in my offline patch and was waiting for more\n> > comments. Anyway, thanks for delta patch.\n> >\n> > Here, attaching v3 patch for review.\n>\n> Thank you for updating the patch!\n>\n> Here are my comments on v3 patch:\n>\n> @@ -2024,6 +2033,11 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)\n> if (PageIsNew(page) || PageIsEmpty(page))\n> return false;\n>\n> + /* Update error traceback information */\n> + update_vacuum_error_info(vacrelstats, &saved_err_info,\n> + VACUUM_ERRCB_PHASE_SCAN_HEAP, vacrelstats->blkno,\n> + InvalidOffsetNumber);\n> +\n> maxoff = PageGetMaxOffsetNumber(page);\n> for (offnum = FirstOffsetNumber;\n> offnum <= maxoff;\n>\n> You update the error callback phase to VACUUM_ERRCB_PHASE_SCAN_HEAP\n> but I think we're already in that phase. I'm okay with explicitly\n> setting it but on the other hand, we don't set the phase in\n> heap_page_is_all_visible(). Is there any reason for that?\n>\n> Also, since we don't reset vacrelstats->offnum at the end of\n> heap_page_is_all_visible(), if an error occurs by the end of\n> lazy_vacuum_page(), the caller of heap_page_is_all_visible(), we\n> report the error context with the last offset number in the page,\n> making the users confused.\n\nYour point is valid. Added update and restore functions in\nheap_page_is_all_visible in the latest patch.\n\n>\n> ---\n> @@ -2045,10 +2060,13 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)\n>\n> if (heap_tuple_needs_freeze(tupleheader, FreezeLimit,\n> MultiXactCutoff, buf))\n> - return true;\n> + break;\n> } /* scan along page */\n>\n> - return false;\n> + /* Revert to the previous phase information for error traceback */\n> + restore_vacuum_error_info(vacrelstats, &saved_err_info);\n> +\n> + return offnum <= maxoff ? 
true : false;\n> }\n>\n> I think we can write just \"return (offnum <= maxoff)\".\n\nFixed this.\n\n>\n> ---\n> - /* Revert back to the old phase information for error traceback */\n> + /* Revert to the old phase information for error traceback */\n>\n> If we want to modify this comment how about the following phrase for\n> consistency with other places?\n\nFixed this.\n\n>\n> /* Revert to the previous phase information for error traceback */\n>\n> Regards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nApart from these, I fixed Justin's comment of extra brackets(That was\ndue to \"patch -p 1 < file\", as 002_fix was not applying directly). I\nhaven't updated the document for this(Sawada's comment). I will try in\nthe next patch.\nAttaching v4 patch for review.\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 5 Aug 2020 00:46:38 +0530", "msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": true, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Sun, Aug 2, 2020 at 10:43 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> I think that since the user who uses this errcontext information is\n> likely to know more or less the internal of PostgreSQL I think 0-based\n> block number and 1-based offset will not be a problem. However, I\n> expected these are documented but couldn't find. If not yet, I think\n> it’s a good time to document that.\n\nI agree. That's just how TIDs are.\n\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 5 Aug 2020 07:19:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Wed, Jul 29, 2020 at 1:09 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Jul 29, 2020 at 12:35:17AM +0530, Mahendra Singh Thalor wrote:\n> > Apart from these, I fixed comments given by Sawada and Michael in the\n> > latest patch. Attaching v2 patch for review.\n>\n> Thanks.\n>\n> lazy_check_needs_freeze iterates over blocks and this patch changes it to\n> update vacrelstats. I think it should do what\n> lazy_{vacuum/cleanup}_heap/page/index do and call update_vacuum_error_info() at\n> its beginning (even though only the offset is changed), and then\n> restore_vacuum_error_info() at its end (to \"revert back\" to the item number it\n> started with).\n>\n\nI see that Mahendra has changed patch as per this suggestion but I am\nnot convinced that it is a good idea to sprinkle\nupdate_vacuum_error_info()/restore_vacuum_error_info() at places more\nthan required. I see that it might look a bit clean from the\nperspective that if tomorrow we use the function\nlazy_check_needs_freeze() for a different purpose then we don't need\nto worry about the wrong phase information. 
If we are worried about\nthat then we should have an assert in that function to ensure that the\ncurrent phase is VACUUM_ERRCB_PHASE_SCAN_HEAP.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 Aug 2020 19:39:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Wed, Aug 5, 2020 at 12:47 AM Mahendra Singh Thalor\n<mahi6run@gmail.com> wrote:\n>\n> Apart from these, I fixed Justin's comment of extra brackets(That was\n> due to \"patch -p 1 < file\", as 002_fix was not applying directly). I\n> haven't updated the document for this(Sawada's comment). I will try in\n> the next patch.\n> Attaching v4 patch for review.\n>\n\nFew comments on the latest patch:\n1.\n@@ -2640,6 +2659,7 @@ lazy_truncate_heap(Relation onerel, LVRelStats\n*vacrelstats)\n */\n new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\n vacrelstats->blkno = new_rel_pages;\n+ vacrelstats->offnum = InvalidOffsetNumber;\n\nDo we really need to update the 'vacrelstats->offnum' here when we\nhave already set it to InvalidOffsetNumber in the caller?\n\n2.\n@@ -3574,8 +3605,14 @@ vacuum_error_callback(void *arg)\n {\n case VACUUM_ERRCB_PHASE_SCAN_HEAP:\n if (BlockNumberIsValid(errinfo->blkno))\n- errcontext(\"while scanning block %u of relation \\\"%s.%s\\\"\",\n- errinfo->blkno, errinfo->relnamespace, errinfo->relname);\n+ {\n+ if (OffsetNumberIsValid(errinfo->offnum))\n+ errcontext(\"while scanning block %u of relation \\\"%s.%s\\\", item offset %u\",\n+\n\nI am not completely sure if this error message is an improvement over\nwhat you have in the initial version of patch \"while scanning block %u\nand offset %u of relation \\\"%s.%s\\\"\",...\". I see that Justin has\nraised a concern above that whether users will be aware of 'offset'\nbut I also see that we have used it in a few other messages in the\ncode. For example:\n\nPageIndexTupleDeleteNoCompact()\n{\n..\nnline = PageGetMaxOffsetNumber(page);\nif ((int) offnum <= 0 || (int) offnum > nline)\nelog(ERROR, \"invalid index offnum: %u\", offnum);\n..\n}\n\nhash_desc\n{\n..\ncase XLOG_HASH_INSERT:\n{\nxl_hash_insert *xlrec = (xl_hash_insert *) rec;\n\nappendStringInfo(buf, \"off %u\", xlrec->offnum);\nbreak;\n}\n\nSimilarly in other desc functions, we have used off or offnum.\n\nI find the message in your initial patch better.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 Aug 2020 19:41:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Thu, Aug 06, 2020 at 07:39:21PM +0530, Amit Kapila wrote:\n> On Wed, Jul 29, 2020 at 1:09 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Wed, Jul 29, 2020 at 12:35:17AM +0530, Mahendra Singh Thalor wrote:\n> > > Apart from these, I fixed comments given by Sawada and Michael in the\n> > > latest patch. Attaching v2 patch for review.\n> >\n> > Thanks.\n> >\n> > lazy_check_needs_freeze iterates over blocks and this patch changes it to\n> > update vacrelstats. 
I think it should do what\n> > lazy_{vacuum/cleanup}_heap/page/index do and call update_vacuum_error_info() at\n> > its beginning (even though only the offset is changed), and then\n> > restore_vacuum_error_info() at its end (to \"revert back\" to the item number it\n> > started with).\n> >\n> \n> I see that Mahendra has changed patch as per this suggestion but I am\n> not convinced that it is a good idea to sprinkle\n> update_vacuum_error_info()/restore_vacuum_error_info() at places more\n> than required. I see that it might look a bit clean from the\n> perspective that if tomorrow we use the function\n> lazy_check_needs_freeze() for a different purpose then we don't need\n> to worry about the wrong phase information. If we are worried about\n> that then we should have an assert in that function to ensure that the\n> current phase is VACUUM_ERRCB_PHASE_SCAN_HEAP.\n\nThe motivation was to restore the offnum, which is set to Invalid at the start\nof lazy_scan_heap(), and then set valid within lazy_check_needs_freeze, but\nshould be restored or re-set to Invalid when returns to lazy_scan_heap(). If\nyou think it's important, we could just set vacrelstats->offnum = Invalid\nbefore returning, but that's what the restore function was built for. We do\ndirect assignment in 2 places to avoid a function call within a loop.\n\nlazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n Relation *Irel, int nindexes, bool aggressive)\n\n...\n for (blkno = 0; blkno < nblocks; blkno++)\n {\n...\n update_vacuum_error_info(vacrelstats, NULL, VACUUM_ERRCB_PHASE_SCAN_HEAP,\n blkno, InvalidOffsetNumber);\n if (!ConditionalLockBufferForCleanup(buf))\n {\n...\n if (!lazy_check_needs_freeze(buf, &hastup, vacrelstats))\n {\n...\n for (offnum = FirstOffsetNumber;\n offnum <= maxoff;\n offnum = OffsetNumberNext(offnum))\n\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 6 Aug 2020 09:21:16 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Thu, Aug 6, 2020 at 7:51 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Aug 06, 2020 at 07:39:21PM +0530, Amit Kapila wrote:\n> > On Wed, Jul 29, 2020 at 1:09 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > >\n> > > lazy_check_needs_freeze iterates over blocks and this patch changes it to\n> > > update vacrelstats. I think it should do what\n> > > lazy_{vacuum/cleanup}_heap/page/index do and call update_vacuum_error_info() at\n> > > its beginning (even though only the offset is changed), and then\n> > > restore_vacuum_error_info() at its end (to \"revert back\" to the item number it\n> > > started with).\n> > >\n> >\n> > I see that Mahendra has changed patch as per this suggestion but I am\n> > not convinced that it is a good idea to sprinkle\n> > update_vacuum_error_info()/restore_vacuum_error_info() at places more\n> > than required. I see that it might look a bit clean from the\n> > perspective that if tomorrow we use the function\n> > lazy_check_needs_freeze() for a different purpose then we don't need\n> > to worry about the wrong phase information. 
If we are worried about\n> > that then we should have an assert in that function to ensure that the\n> > current phase is VACUUM_ERRCB_PHASE_SCAN_HEAP.\n>\n> The motivation was to restore the offnum, which is set to Invalid at the start\n> of lazy_scan_heap(), and then set valid within lazy_check_needs_freeze, but\n> should be restored or re-set to Invalid when returns to lazy_scan_heap(). If\n> you think it's important, we could just set vacrelstats->offnum = Invalid\n> before returning,\n>\n\nYeah, I would prefer that and probably a comment to indicate why we\nare doing that.\n\n> but that's what the restore function was built for.\n>\n\nI think it would be better to call restore wherever we call update. I\nsee your point that there is some value doing it via update/restore\nbut I think we should try to avoid that at many places unless it is\nrequired and we already update blockno information without\nupdate/restore at few places.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 7 Aug 2020 07:18:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Fri, 7 Aug 2020 at 10:49, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 6, 2020 at 7:51 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Thu, Aug 06, 2020 at 07:39:21PM +0530, Amit Kapila wrote:\n> > > On Wed, Jul 29, 2020 at 1:09 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > >\n> > > > lazy_check_needs_freeze iterates over blocks and this patch changes it to\n> > > > update vacrelstats. I think it should do what\n> > > > lazy_{vacuum/cleanup}_heap/page/index do and call update_vacuum_error_info() at\n> > > > its beginning (even though only the offset is changed), and then\n> > > > restore_vacuum_error_info() at its end (to \"revert back\" to the item number it\n> > > > started with).\n> > > >\n> > >\n> > > I see that Mahendra has changed patch as per this suggestion but I am\n> > > not convinced that it is a good idea to sprinkle\n> > > update_vacuum_error_info()/restore_vacuum_error_info() at places more\n> > > than required. I see that it might look a bit clean from the\n> > > perspective that if tomorrow we use the function\n> > > lazy_check_needs_freeze() for a different purpose then we don't need\n> > > to worry about the wrong phase information. If we are worried about\n> > > that then we should have an assert in that function to ensure that the\n> > > current phase is VACUUM_ERRCB_PHASE_SCAN_HEAP.\n> >\n> > The motivation was to restore the offnum, which is set to Invalid at the start\n> > of lazy_scan_heap(), and then set valid within lazy_check_needs_freeze, but\n> > should be restored or re-set to Invalid when returns to lazy_scan_heap(). If\n> > you think it's important, we could just set vacrelstats->offnum = Invalid\n> > before returning,\n> >\n>\n> Yeah, I would prefer that and probably a comment to indicate why we\n> are doing that.\n\n+1\n\nI'm concerned that we call the update and restore in\nheap_page_is_all_visible(). Unlike lazy_check_needs_freeze(), this\nfunction is called for every vacuumed page. I commented that if we\nwant to update the offset number during iterating tuples in the\nfunction we should change the phase to SCAN_HEAP at the beginning of\nthe function because it's actually not vacuuming. 
But if the error is\nunlikely to happen within the function I think we can avoid updating\nthe offset number and phase to avoid performance overhead.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 7 Aug 2020 11:40:23 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Fri, Aug 7, 2020 at 8:10 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 7 Aug 2020 at 10:49, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Aug 6, 2020 at 7:51 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > On Thu, Aug 06, 2020 at 07:39:21PM +0530, Amit Kapila wrote:\n> > > > On Wed, Jul 29, 2020 at 1:09 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > >\n> > > > >\n> > > > > lazy_check_needs_freeze iterates over blocks and this patch changes it to\n> > > > > update vacrelstats. I think it should do what\n> > > > > lazy_{vacuum/cleanup}_heap/page/index do and call update_vacuum_error_info() at\n> > > > > its beginning (even though only the offset is changed), and then\n> > > > > restore_vacuum_error_info() at its end (to \"revert back\" to the item number it\n> > > > > started with).\n> > > > >\n> > > >\n> > > > I see that Mahendra has changed patch as per this suggestion but I am\n> > > > not convinced that it is a good idea to sprinkle\n> > > > update_vacuum_error_info()/restore_vacuum_error_info() at places more\n> > > > than required. I see that it might look a bit clean from the\n> > > > perspective that if tomorrow we use the function\n> > > > lazy_check_needs_freeze() for a different purpose then we don't need\n> > > > to worry about the wrong phase information. If we are worried about\n> > > > that then we should have an assert in that function to ensure that the\n> > > > current phase is VACUUM_ERRCB_PHASE_SCAN_HEAP.\n> > >\n> > > The motivation was to restore the offnum, which is set to Invalid at the start\n> > > of lazy_scan_heap(), and then set valid within lazy_check_needs_freeze, but\n> > > should be restored or re-set to Invalid when returns to lazy_scan_heap(). If\n> > > you think it's important, we could just set vacrelstats->offnum = Invalid\n> > > before returning,\n> > >\n> >\n> > Yeah, I would prefer that and probably a comment to indicate why we\n> > are doing that.\n>\n> +1\n>\n> I'm concerned that we call the update and restore in\n> heap_page_is_all_visible(). Unlike lazy_check_needs_freeze(), this\n> function is called for every vacuumed page. I commented that if we\n> want to update the offset number during iterating tuples in the\n> function we should change the phase to SCAN_HEAP at the beginning of\n> the function because it's actually not vacuuming.\n>\n\nAFAICS, heap_page_is_all_visible() is called from only one place and\nthat is lazy_vacuum_page(), so I think if there is any error in\nheap_page_is_all_visible(), it should be considered as VACUUM_HEAP\nphase error.\n\n But if the error is\n> unlikely to happen within the function I think we can avoid updating\n> the offset number and phase to avoid performance overhead.\n>\n\nI am not sure we can guarantee that and even if it is true today one\ncan add an error in that path in future. 
But I feel we can keep the\nphase as VACUUM_HEAP.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 7 Aug 2020 09:33:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Fri, 7 Aug 2020 at 13:04, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Aug 7, 2020 at 8:10 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Fri, 7 Aug 2020 at 10:49, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Aug 6, 2020 at 7:51 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > > On Thu, Aug 06, 2020 at 07:39:21PM +0530, Amit Kapila wrote:\n> > > > > On Wed, Jul 29, 2020 at 1:09 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > > >\n> > > > > >\n> > > > > > lazy_check_needs_freeze iterates over blocks and this patch changes it to\n> > > > > > update vacrelstats. I think it should do what\n> > > > > > lazy_{vacuum/cleanup}_heap/page/index do and call update_vacuum_error_info() at\n> > > > > > its beginning (even though only the offset is changed), and then\n> > > > > > restore_vacuum_error_info() at its end (to \"revert back\" to the item number it\n> > > > > > started with).\n> > > > > >\n> > > > >\n> > > > > I see that Mahendra has changed patch as per this suggestion but I am\n> > > > > not convinced that it is a good idea to sprinkle\n> > > > > update_vacuum_error_info()/restore_vacuum_error_info() at places more\n> > > > > than required. I see that it might look a bit clean from the\n> > > > > perspective that if tomorrow we use the function\n> > > > > lazy_check_needs_freeze() for a different purpose then we don't need\n> > > > > to worry about the wrong phase information. If we are worried about\n> > > > > that then we should have an assert in that function to ensure that the\n> > > > > current phase is VACUUM_ERRCB_PHASE_SCAN_HEAP.\n> > > >\n> > > > The motivation was to restore the offnum, which is set to Invalid at the start\n> > > > of lazy_scan_heap(), and then set valid within lazy_check_needs_freeze, but\n> > > > should be restored or re-set to Invalid when returns to lazy_scan_heap(). If\n> > > > you think it's important, we could just set vacrelstats->offnum = Invalid\n> > > > before returning,\n> > > >\n> > >\n> > > Yeah, I would prefer that and probably a comment to indicate why we\n> > > are doing that.\n> >\n> > +1\n> >\n> > I'm concerned that we call the update and restore in\n> > heap_page_is_all_visible(). Unlike lazy_check_needs_freeze(), this\n> > function is called for every vacuumed page. I commented that if we\n> > want to update the offset number during iterating tuples in the\n> > function we should change the phase to SCAN_HEAP at the beginning of\n> > the function because it's actually not vacuuming.\n> >\n>\n> AFAICS, heap_page_is_all_visible() is called from only one place and\n> that is lazy_vacuum_page(), so I think if there is any error in\n> heap_page_is_all_visible(), it should be considered as VACUUM_HEAP\n> phase error.\n\nIt's true that heap_page_is_all_visible() is called from only\nlazy_vacuum_page() but I'm concerned it would lead misleading since\nit's not actually removing tuples but just checking after vacuum. I\nguess that the errcontext should show what the process is actually\ndoing and therefore help the investigation, so I thought VACUUM_HEAP\nmight not be appropriate for this case. 
But I see also your point.\nOther vacuum error context phases match with vacuum progress\ninformation phrases. So in heap_page_is_all_visible (), I agree it's\nbetter to update the offset number and keep the phase VACUUM_HEAP\nrather than do nothing.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 10 Aug 2020 13:54:23 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Mon, Aug 10, 2020 at 10:24 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> It's true that heap_page_is_all_visible() is called from only\n> lazy_vacuum_page() but I'm concerned it would lead misleading since\n> it's not actually removing tuples but just checking after vacuum. I\n> guess that the errcontext should show what the process is actually\n> doing and therefore help the investigation, so I thought VACUUM_HEAP\n> might not be appropriate for this case. But I see also your point.\n> Other vacuum error context phases match with vacuum progress\n> information phrases. So in heap_page_is_all_visible (), I agree it's\n> better to update the offset number and keep the phase VACUUM_HEAP\n> rather than do nothing.\n>\n\nOkay, I have changed accordingly and this means that the offset will\nbe displayed for the vacuum phase as well. Apart from this, I have\nfixed all the comments raised by me in the attached patch. One thing\nwe need to think is do we want to set offset during heap_page_prune\nwhen called from lazy_scan_heap? I think the code path for\nheap_prune_chain is quite deep, so an error can occur in that path.\nWhat do you think?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 14 Aug 2020 16:06:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Fri, Aug 7, 2020 at 7:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 6, 2020 at 7:51 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Thu, Aug 06, 2020 at 07:39:21PM +0530, Amit Kapila wrote:\n> > > On Wed, Jul 29, 2020 at 1:09 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > >\n> > > > lazy_check_needs_freeze iterates over blocks and this patch changes it to\n> > > > update vacrelstats. I think it should do what\n> > > > lazy_{vacuum/cleanup}_heap/page/index do and call update_vacuum_error_info() at\n> > > > its beginning (even though only the offset is changed), and then\n> > > > restore_vacuum_error_info() at its end (to \"revert back\" to the item number it\n> > > > started with).\n> > > >\n> > >\n> > > I see that Mahendra has changed patch as per this suggestion but I am\n> > > not convinced that it is a good idea to sprinkle\n> > > update_vacuum_error_info()/restore_vacuum_error_info() at places more\n> > > than required. I see that it might look a bit clean from the\n> > > perspective that if tomorrow we use the function\n> > > lazy_check_needs_freeze() for a different purpose then we don't need\n> > > to worry about the wrong phase information. 
If we are worried about\n> > > that then we should have an assert in that function to ensure that the\n> > > current phase is VACUUM_ERRCB_PHASE_SCAN_HEAP.\n> >\n> > The motivation was to restore the offnum, which is set to Invalid at the start\n> > of lazy_scan_heap(), and then set valid within lazy_check_needs_freeze, but\n> > should be restored or re-set to Invalid when returns to lazy_scan_heap(). If\n> > you think it's important, we could just set vacrelstats->offnum = Invalid\n> > before returning,\n> >\n>\n> Yeah, I would prefer that and probably a comment to indicate why we\n> are doing that.\n>\n\nChanged accordingly in the updated patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 14 Aug 2020 16:08:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Thu, Aug 6, 2020 at 7:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 5, 2020 at 12:47 AM Mahendra Singh Thalor\n> <mahi6run@gmail.com> wrote:\n> >\n> > Apart from these, I fixed Justin's comment of extra brackets(That was\n> > due to \"patch -p 1 < file\", as 002_fix was not applying directly). I\n> > haven't updated the document for this(Sawada's comment). I will try in\n> > the next patch.\n> > Attaching v4 patch for review.\n> >\n>\n> Few comments on the latest patch:\n> 1.\n> @@ -2640,6 +2659,7 @@ lazy_truncate_heap(Relation onerel, LVRelStats\n> *vacrelstats)\n> */\n> new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\n> vacrelstats->blkno = new_rel_pages;\n> + vacrelstats->offnum = InvalidOffsetNumber;\n>\n> Do we really need to update the 'vacrelstats->offnum' here when we\n> have already set it to InvalidOffsetNumber in the caller?\n>\n\nI have removed this change.\n\n> 2.\n> @@ -3574,8 +3605,14 @@ vacuum_error_callback(void *arg)\n> {\n> case VACUUM_ERRCB_PHASE_SCAN_HEAP:\n> if (BlockNumberIsValid(errinfo->blkno))\n> - errcontext(\"while scanning block %u of relation \\\"%s.%s\\\"\",\n> - errinfo->blkno, errinfo->relnamespace, errinfo->relname);\n> + {\n> + if (OffsetNumberIsValid(errinfo->offnum))\n> + errcontext(\"while scanning block %u of relation \\\"%s.%s\\\", item offset %u\",\n> +\n>\n> I am not completely sure if this error message is an improvement over\n> what you have in the initial version of patch \"while scanning block %u\n> and offset %u of relation \\\"%s.%s\\\"\",...\". 
I see that Justin has\n> raised a concern above that whether users will be aware of 'offset'\n> but I also see that we have used it in a few other messages in the\n> code.\n\nI have changed the message to what you have in the original patch.\n\nApart from above, I have also reset the offset number back to\nInvalidOffsetNumber in lazy_scan_heap after processing all the tuples,\notherwise, it will erroneously display wrong offset if any error\noccurred afterward.\n\nLet me know what you think of the changes done in the latest patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 14 Aug 2020 16:11:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Fri, Aug 14, 2020 at 4:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Aug 10, 2020 at 10:24 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > It's true that heap_page_is_all_visible() is called from only\n> > lazy_vacuum_page() but I'm concerned it would lead misleading since\n> > it's not actually removing tuples but just checking after vacuum. I\n> > guess that the errcontext should show what the process is actually\n> > doing and therefore help the investigation, so I thought VACUUM_HEAP\n> > might not be appropriate for this case. But I see also your point.\n> > Other vacuum error context phases match with vacuum progress\n> > information phrases. 
So in heap_page_is_all_visible (), I agree it's\n> > > better to update the offset number and keep the phase VACUUM_HEAP\n> > > rather than do nothing.\n> > >\n> >\n> > Okay, I have changed accordingly and this means that the offset will\n> > be displayed for the vacuum phase as well. Apart from this, I have\n> > fixed all the comments raised by me in the attached patch. One thing\n> > we need to think is do we want to set offset during heap_page_prune\n> > when called from lazy_scan_heap? I think the code path for\n> > heap_prune_chain is quite deep, so an error can occur in that path.\n> > What do you think?\n> >\n>\n> The reason why I have not included heap_page_prune related change in\n> the patch is that I don't want to sprinkle this in every possible\n> function (code path) called via vacuum especially if the probability\n> of an error in that code path is low. But, I am fine if you and or\n> others think that it is a good idea to update offset in\n> heap_page_prune as well.\n\nI agree to not try sprinkling it many places than necessity.\n\nRegarding heap_page_prune(), I'm concerned a bit that\nheap_page_prune() is typically the first function to check the tuple\nvisibility within the vacuum code. I've sometimes observed an error\nwith the message like \"DETAIL: could not open file “pg_xact/00AB”: No\nsuch file or directory\" due to a tuple header corruption. I suspect\nthis message was emitted while checking tuple visibility in\nheap_page_prune(). So I guess the likelihood of an error in that code\nis not so low.\n\nOn the other hand, if we want to update the offset number in\nheap_page_prune() we will need to expose some vacuum structs defined\nas a static including LVRelStats. But I don't think it's a good idea.\nThe second idea I came up with is that we set another errcontext for\nheap_page_prune(). Since heap_page_prune() could be called also by a\nregular page scanning it would work fine for both cases, although\nthere will be extra overheads for both. What do you think?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 17 Aug 2020 15:07:39 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Mon, Aug 17, 2020 at 11:38 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Sat, 15 Aug 2020 at 12:19, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > The reason why I have not included heap_page_prune related change in\n> > the patch is that I don't want to sprinkle this in every possible\n> > function (code path) called via vacuum especially if the probability\n> > of an error in that code path is low. But, I am fine if you and or\n> > others think that it is a good idea to update offset in\n> > heap_page_prune as well.\n>\n> I agree to not try sprinkling it many places than necessity.\n>\n> Regarding heap_page_prune(), I'm concerned a bit that\n> heap_page_prune() is typically the first function to check the tuple\n> visibility within the vacuum code. I've sometimes observed an error\n> with the message like \"DETAIL: could not open file “pg_xact/00AB”: No\n> such file or directory\" due to a tuple header corruption. I suspect\n> this message was emitted while checking tuple visibility in\n> heap_page_prune(). 
So I guess the likelihood of an error in that code\n> is not so low.\n>\n\nFair point.\n\n> On the other hand, if we want to update the offset number in\n> heap_page_prune() we will need to expose some vacuum structs defined\n> as a static including LVRelStats.\n>\n\nI don't think we need to expose LVRelStats. We can just pass the\naddress of vacrelstats->offset_num to achieve what we want. I have\ntried that and it works, see the\nv6-0002-additinal-error-context-information-in-heap_page_ patch\nattached. Do you see any problem with it?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 18 Aug 2020 09:36:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Tue, 18 Aug 2020 at 13:06, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Aug 17, 2020 at 11:38 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Sat, 15 Aug 2020 at 12:19, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > The reason why I have not included heap_page_prune related change in\n> > > the patch is that I don't want to sprinkle this in every possible\n> > > function (code path) called via vacuum especially if the probability\n> > > of an error in that code path is low. But, I am fine if you and or\n> > > others think that it is a good idea to update offset in\n> > > heap_page_prune as well.\n> >\n> > I agree to not try sprinkling it many places than necessity.\n> >\n> > Regarding heap_page_prune(), I'm concerned a bit that\n> > heap_page_prune() is typically the first function to check the tuple\n> > visibility within the vacuum code. I've sometimes observed an error\n> > with the message like \"DETAIL: could not open file “pg_xact/00AB”: No\n> > such file or directory\" due to a tuple header corruption. I suspect\n> > this message was emitted while checking tuple visibility in\n> > heap_page_prune(). So I guess the likelihood of an error in that code\n> > is not so low.\n> >\n>\n> Fair point.\n>\n> > On the other hand, if we want to update the offset number in\n> > heap_page_prune() we will need to expose some vacuum structs defined\n> > as a static including LVRelStats.\n> >\n>\n> I don't think we need to expose LVRelStats. We can just pass the\n> address of vacrelstats->offset_num to achieve what we want. I have\n> tried that and it works, see the\n> v6-0002-additinal-error-context-information-in-heap_page_ patch\n> attached. Do you see any problem with it?\n\nYes, you're right. 
I'm concerned a bit the number of arguments passing\nto heap_page_prune() might get higher when we need other values to\nupdate for errcontext, but I'm okay with the current patch.\n\nCurrently, we're in SCAN_HEAP phase in heap_page_prune() but should it\nbe VACUUM_HEAP instead?\n\nAlso, I've tested the patch with log_min_messages = 'info' and get the\nfollowing sever logs:\n\n2020-08-19 14:28:09.917 JST [72912] INFO: launched 1 parallel vacuum\nworker for index vacuuming (planned: 1)\n2020-08-19 14:28:09.917 JST [72912] CONTEXT: while scanning block 973\nof relation \"public.tbl\"\n2020-08-19 14:28:09.959 JST [72912] INFO: scanned index \"i1\" to\nremove 109872 row versions\n2020-08-19 14:28:09.959 JST [72912] DETAIL: CPU: user: 0.04 s,\nsystem: 0.00 s, elapsed: 0.04 s\n2020-08-19 14:28:09.959 JST [72912] CONTEXT: while vacuuming index\n\"i1\" of relation \"public.tbl\"\n2020-08-19 14:28:09.967 JST [72936] INFO: scanned index \"i2\" to\nremove 109872 row versions by parallel vacuum worker\n2020-08-19 14:28:09.967 JST [72936] DETAIL: CPU: user: 0.03 s,\nsystem: 0.00 s, elapsed: 0.04 s\n2020-08-19 14:28:09.967 JST [72936] CONTEXT: while vacuuming index\n\"i2\" of relation \"public.tbl\"\n2020-08-19 14:28:09.967 JST [72912] INFO: scanned index \"i2\" to\nremove 109872 row versions by parallel vacuum worker\n2020-08-19 14:28:09.967 JST [72912] DETAIL: CPU: user: 0.03 s,\nsystem: 0.00 s, elapsed: 0.04 s\n2020-08-19 14:28:09.967 JST [72912] CONTEXT: while vacuuming index\n\"i2\" of relation \"public.tbl\"\n parallel worker\n while scanning block 973 of relation \"public.tbl\"\n2020-08-19 14:28:09.968 JST [72912] INFO: \"tbl\": removed 109872 row\nversions in 487 pages\n2020-08-19 14:28:09.968 JST [72912] DETAIL: CPU: user: 0.00 s,\nsystem: 0.00 s, elapsed: 0.00 s\n2020-08-19 14:28:09.968 JST [72912] CONTEXT: while vacuuming block\n973 of relation \"public.tbl\"\n2020-08-19 14:28:09.968 JST [72912] INFO: index \"i1\" now contains\n110000 row versions in 578 pages\n2020-08-19 14:28:09.968 JST [72912] DETAIL: 109872 index row versions\nwere removed.\n 0 index pages have been deleted, 0 are currently reusable.\n CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n2020-08-19 14:28:09.968 JST [72912] CONTEXT: while scanning block 973\nof relation \"public.tbl\"\n2020-08-19 14:28:09.968 JST [72912] INFO: index \"i2\" now contains\n110000 row versions in 578 pages\n2020-08-19 14:28:09.968 JST [72912] DETAIL: 109872 index row versions\nwere removed.\n 0 index pages have been deleted, 0 are currently reusable.\n CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n2020-08-19 14:28:09.968 JST [72912] CONTEXT: while scanning block 973\nof relation \"public.tbl\"\n2020-08-19 14:28:09.969 JST [72912] INFO: \"tbl\": found 110000\nremovable, 110000 nonremovable row versions in 974 out of 974 pages\n2020-08-19 14:28:09.969 JST [72912] DETAIL: 0 dead row versions\ncannot be removed yet, oldest xmin: 519\n There were 372 unused item identifiers.\n Skipped 0 pages due to buffer pins, 0 frozen pages.\n 0 pages are entirely empty.\n CPU: user: 0.05 s, system: 0.00 s, elapsed: 0.06 s.\n2020-08-19 14:28:09.969 JST [72912] CONTEXT: while scanning block 973\nof relation \"public.tbl\"\n\nThis is not directly related to the patch but it looks like we can\nimprove the current errcontext settings. 
For instance, the message\nfrom lazy_vacuum_index(): there are two messages reporting the phases.\nI've attached the patch that improves the current errcontext setting,\nwhich can be applied before the patch adding offset number.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 19 Aug 2020 16:23:51 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Wed, Aug 19, 2020 at 12:54 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 18 Aug 2020 at 13:06, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > I don't think we need to expose LVRelStats. We can just pass the\n> > address of vacrelstats->offset_num to achieve what we want. I have\n> > tried that and it works, see the\n> > v6-0002-additinal-error-context-information-in-heap_page_ patch\n> > attached. Do you see any problem with it?\n>\n> Yes, you're right. I'm concerned a bit the number of arguments passing\n> to heap_page_prune() might get higher when we need other values to\n> update for errcontext, but I'm okay with the current patch.\n>\n\nYeah, we might need to think if we want to increase the number of\nparameters but not sure we need to worry at this stage. If required, I\nthink we can either expose LVRelStats or extract a few parameters from\nit and form a separate structure that could be passed to\nheap_page_prune.\n\n> Currently, we're in SCAN_HEAP phase in heap_page_prune() but should it\n> be VACUUM_HEAP instead?\n>\n\nI think it is currently similar to what we do in progress reporting.\nWe set to VACUUM_HEAP phase where the progress reporting is also set\nto *HEAP_BLKS_VACUUMED. Currently, heap_page_prune() is covered under\n*HEAP_BLKS_SCANNED, so I don't see a pressing need to change the error\ncontext phase for heap_page_prune(). And also, we need to add some\nmore smarts in heap_page_prune() for this which I want to avoid.\n\n> Also, I've tested the patch with log_min_messages = 'info' and get the\n> following sever logs:\n>\n..\n>\n> This is not directly related to the patch but it looks like we can\n> improve the current errcontext settings. 
For instance, the message\n> from lazy_vacuum_index(): there are two messages reporting the phases.\n> I've attached the patch that improves the current errcontext setting,\n> which can be applied before the patch adding offset number.\n>\n\nAfter your patch, I see output like below with log_min_messages=info,\n\n2020-08-20 10:11:46.769 IST [2640] INFO: scanned index \"idx_test_c1\"\nto remove 10000 row versions\n2020-08-20 10:11:46.769 IST [2640] DETAIL: CPU: user: 0.06 s, system:\n0.01 s, elapsed: 0.06 s\n2020-08-20 10:11:46.769 IST [2640] CONTEXT: while vacuuming index\n\"idx_test_c1\" of relation \"public.test_vac\"\n\n2020-08-20 10:11:46.901 IST [2640] INFO: scanned index \"idx_test_c2\"\nto remove 10000 row versions\n2020-08-20 10:11:46.901 IST [2640] DETAIL: CPU: user: 0.10 s, system:\n0.01 s, elapsed: 0.13 s\n2020-08-20 10:11:46.901 IST [2640] CONTEXT: while vacuuming index\n\"idx_test_c2\" of relation \"public.test_vac\"\n\n2020-08-20 10:11:46.917 IST [2640] INFO: \"test_vac\": removed 10000\nrow versions in 667 pages\n2020-08-20 10:11:46.917 IST [2640] DETAIL: CPU: user: 0.01 s, system:\n0.00 s, elapsed: 0.01 s\n\n2020-08-20 10:11:46.928 IST [2640] INFO: index \"idx_test_c1\" now\ncontains 50000 row versions in 276 pages\n2020-08-20 10:11:46.928 IST [2640] DETAIL: 10000 index row versions\nwere removed.\n 136 index pages have been deleted, 109 are currently reusable.\n CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n2020-08-20 10:11:46.928 IST [2640] CONTEXT: while cleaning up index\n\"idx_test_c1\" of relation \"public.test_vac\"\n\nHere, we can notice that for the index, we are getting context\ninformation but not for the heap. The reason is that in\nvacuum_error_callback, we are not printing additional information for\nphases VACUUM_ERRCB_PHASE_SCAN_HEAP and VACUUM_ERRCB_PHASE_VACUUM_HEAP\nwhen block number is invalid. If we want to cover the 'info' messages\nthen won't it be better if we print a message in those phases even\nblock number is invalid (something like 'while scanning relation\n\\\"%s.%s\\\"\")\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 20 Aug 2020 10:31:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Thu, 20 Aug 2020 at 14:01, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 19, 2020 at 12:54 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 18 Aug 2020 at 13:06, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > I don't think we need to expose LVRelStats. We can just pass the\n> > > address of vacrelstats->offset_num to achieve what we want. I have\n> > > tried that and it works, see the\n> > > v6-0002-additinal-error-context-information-in-heap_page_ patch\n> > > attached. Do you see any problem with it?\n> >\n> > Yes, you're right. I'm concerned a bit the number of arguments passing\n> > to heap_page_prune() might get higher when we need other values to\n> > update for errcontext, but I'm okay with the current patch.\n> >\n>\n> Yeah, we might need to think if we want to increase the number of\n> parameters but not sure we need to worry at this stage. 
If required, I\n> think we can either expose LVRelStats or extract a few parameters from\n> it and form a separate structure that could be passed to\n> heap_page_prune.\n\nAgreed.\n\n>\n> > Currently, we're in SCAN_HEAP phase in heap_page_prune() but should it\n> > be VACUUM_HEAP instead?\n> >\n>\n> I think it is currently similar to what we do in progress reporting.\n> We set to VACUUM_HEAP phase where the progress reporting is also set\n> to *HEAP_BLKS_VACUUMED. Currently, heap_page_prune() is covered under\n> *HEAP_BLKS_SCANNED, so I don't see a pressing need to change the error\n> context phase for heap_page_prune(). And also, we need to add some\n> more smarts in heap_page_prune() for this which I want to avoid.\n\nAgreed.\n\n>\n> > Also, I've tested the patch with log_min_messages = 'info' and get the\n> > following sever logs:\n> >\n> ..\n> >\n> > This is not directly related to the patch but it looks like we can\n> > improve the current errcontext settings. For instance, the message\n> > from lazy_vacuum_index(): there are two messages reporting the phases.\n> > I've attached the patch that improves the current errcontext setting,\n> > which can be applied before the patch adding offset number.\n> >\n>\n> After your patch, I see output like below with log_min_messages=info,\n>\n> 2020-08-20 10:11:46.769 IST [2640] INFO: scanned index \"idx_test_c1\"\n> to remove 10000 row versions\n> 2020-08-20 10:11:46.769 IST [2640] DETAIL: CPU: user: 0.06 s, system:\n> 0.01 s, elapsed: 0.06 s\n> 2020-08-20 10:11:46.769 IST [2640] CONTEXT: while vacuuming index\n> \"idx_test_c1\" of relation \"public.test_vac\"\n>\n> 2020-08-20 10:11:46.901 IST [2640] INFO: scanned index \"idx_test_c2\"\n> to remove 10000 row versions\n> 2020-08-20 10:11:46.901 IST [2640] DETAIL: CPU: user: 0.10 s, system:\n> 0.01 s, elapsed: 0.13 s\n> 2020-08-20 10:11:46.901 IST [2640] CONTEXT: while vacuuming index\n> \"idx_test_c2\" of relation \"public.test_vac\"\n>\n> 2020-08-20 10:11:46.917 IST [2640] INFO: \"test_vac\": removed 10000\n> row versions in 667 pages\n> 2020-08-20 10:11:46.917 IST [2640] DETAIL: CPU: user: 0.01 s, system:\n> 0.00 s, elapsed: 0.01 s\n>\n> 2020-08-20 10:11:46.928 IST [2640] INFO: index \"idx_test_c1\" now\n> contains 50000 row versions in 276 pages\n> 2020-08-20 10:11:46.928 IST [2640] DETAIL: 10000 index row versions\n> were removed.\n> 136 index pages have been deleted, 109 are currently reusable.\n> CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n> 2020-08-20 10:11:46.928 IST [2640] CONTEXT: while cleaning up index\n> \"idx_test_c1\" of relation \"public.test_vac\"\n>\n> Here, we can notice that for the index, we are getting context\n> information but not for the heap. The reason is that in\n> vacuum_error_callback, we are not printing additional information for\n> phases VACUUM_ERRCB_PHASE_SCAN_HEAP and VACUUM_ERRCB_PHASE_VACUUM_HEAP\n> when block number is invalid. If we want to cover the 'info' messages\n> then won't it be better if we print a message in those phases even\n> block number is invalid (something like 'while scanning relation\n> \\\"%s.%s\\\"\")\n\nYeah, there is an inconsistency. I agree to print the message even\nwhen the block number is invalid. 
We're not actually doing any vacuum\njobs when printing the message but it would be less confusing than\nprinting the wrong phase and more consistent.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 20 Aug 2020 15:47:40 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Thu, Aug 20, 2020 at 12:18 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 20 Aug 2020 at 14:01, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Aug 19, 2020 at 12:54 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > Here, we can notice that for the index, we are getting context\n> > information but not for the heap. The reason is that in\n> > vacuum_error_callback, we are not printing additional information for\n> > phases VACUUM_ERRCB_PHASE_SCAN_HEAP and VACUUM_ERRCB_PHASE_VACUUM_HEAP\n> > when block number is invalid. If we want to cover the 'info' messages\n> > then won't it be better if we print a message in those phases even\n> > block number is invalid (something like 'while scanning relation\n> > \\\"%s.%s\\\"\")\n>\n> Yeah, there is an inconsistency. I agree to print the message even\n> when the block number is invalid.\n>\n\nOkay, I will update this and send this patch and rebased patch to\ndisplay offsets later today or tomorrow.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 20 Aug 2020 12:32:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Thu, Aug 20, 2020 at 12:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 20, 2020 at 12:18 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 20 Aug 2020 at 14:01, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Aug 19, 2020 at 12:54 PM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > Here, we can notice that for the index, we are getting context\n> > > information but not for the heap. The reason is that in\n> > > vacuum_error_callback, we are not printing additional information for\n> > > phases VACUUM_ERRCB_PHASE_SCAN_HEAP and VACUUM_ERRCB_PHASE_VACUUM_HEAP\n> > > when block number is invalid. If we want to cover the 'info' messages\n> > > then won't it be better if we print a message in those phases even\n> > > block number is invalid (something like 'while scanning relation\n> > > \\\"%s.%s\\\"\")\n> >\n> > Yeah, there is an inconsistency. I agree to print the message even\n> > when the block number is invalid.\n> >\n>\n> Okay, I will update this and send this patch and rebased patch to\n> display offsets later today or tomorrow.\n>\n\nAttached are both the patches. The first one is to improve existing\nerror context information, so I think we should back-patch to 13. The\nsecond one is to add additional vacuum error context information, so\nthat is for only HEAD. Does that make sense? 
Also, let me know if you\nhave any more comments.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 20 Aug 2020 17:42:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Thu, 20 Aug 2020 at 21:12, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 20, 2020 at 12:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Aug 20, 2020 at 12:18 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Thu, 20 Aug 2020 at 14:01, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Wed, Aug 19, 2020 at 12:54 PM Masahiko Sawada\n> > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > Here, we can notice that for the index, we are getting context\n> > > > information but not for the heap. The reason is that in\n> > > > vacuum_error_callback, we are not printing additional information for\n> > > > phases VACUUM_ERRCB_PHASE_SCAN_HEAP and VACUUM_ERRCB_PHASE_VACUUM_HEAP\n> > > > when block number is invalid. If we want to cover the 'info' messages\n> > > > then won't it be better if we print a message in those phases even\n> > > > block number is invalid (something like 'while scanning relation\n> > > > \\\"%s.%s\\\"\")\n> > >\n> > > Yeah, there is an inconsistency. I agree to print the message even\n> > > when the block number is invalid.\n> > >\n> >\n> > Okay, I will update this and send this patch and rebased patch to\n> > display offsets later today or tomorrow.\n> >\n>\n> Attached are both the patches. The first one is to improve existing\n> error context information, so I think we should back-patch to 13. The\n> second one is to add additional vacuum error context information, so\n> that is for only HEAD. Does that make sense? Also, let me know if you\n> have any more comments.\n\nYes, makes sense to me.\n\nI don't have comments on both patches. They look good to me.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 21 Aug 2020 16:00:54 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Fri, Aug 21, 2020 at 12:31 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 20 Aug 2020 at 21:12, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Attached are both the patches. The first one is to improve existing\n> > error context information, so I think we should back-patch to 13. The\n> > second one is to add additional vacuum error context information, so\n> > that is for only HEAD. Does that make sense? Also, let me know if you\n> > have any more comments.\n>\n> Yes, makes sense to me.\n>\n> I don't have comments on both patches. They look good to me.\n>\n\nThanks, I have pushed the first patch. 
I'll will push the second one\nin a day or two unless someone has comments on the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 24 Aug 2020 10:57:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Thu, 20 Aug 2020 at 17:42, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 20, 2020 at 12:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Aug 20, 2020 at 12:18 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Thu, 20 Aug 2020 at 14:01, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Wed, Aug 19, 2020 at 12:54 PM Masahiko Sawada\n> > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > Here, we can notice that for the index, we are getting context\n> > > > information but not for the heap. The reason is that in\n> > > > vacuum_error_callback, we are not printing additional information for\n> > > > phases VACUUM_ERRCB_PHASE_SCAN_HEAP and VACUUM_ERRCB_PHASE_VACUUM_HEAP\n> > > > when block number is invalid. If we want to cover the 'info' messages\n> > > > then won't it be better if we print a message in those phases even\n> > > > block number is invalid (something like 'while scanning relation\n> > > > \\\"%s.%s\\\"\")\n> > >\n> > > Yeah, there is an inconsistency. I agree to print the message even\n> > > when the block number is invalid.\n> > >\n> >\n> > Okay, I will update this and send this patch and rebased patch to\n> > display offsets later today or tomorrow.\n> >\n>\n> Attached are both the patches. The first one is to improve existing\n> error context information, so I think we should back-patch to 13. The\n> second one is to add additional vacuum error context information, so\n> that is for only HEAD. Does that make sense? Also, let me know if you\n> have any more comments.\n\nThanks Amit for updating the patch. All changes in v7-02 look fine to me.\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n>\n> --\n> With Regards,\n> Amit Kapila.\n\n\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 Aug 2020 08:54:46 +0530", "msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": true, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Wed, Aug 26, 2020 at 8:54 AM Mahendra Singh Thalor\n<mahi6run@gmail.com> wrote:\n>\n> On Thu, 20 Aug 2020 at 17:42, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Attached are both the patches. The first one is to improve existing\n> > error context information, so I think we should back-patch to 13. The\n> > second one is to add additional vacuum error context information, so\n> > that is for only HEAD. Does that make sense? Also, let me know if you\n> > have any more comments.\n>\n> Thanks Amit for updating the patch. All changes in v7-02 look fine to me.\n>\n\nOkay, pushed v7-02 as well. 
I have marked the entry for this in CF as committed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 26 Aug 2020 11:37:30 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" }, { "msg_contents": "On Wed, 26 Aug 2020 at 15:07, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 26, 2020 at 8:54 AM Mahendra Singh Thalor\n> <mahi6run@gmail.com> wrote:\n> >\n> > On Thu, 20 Aug 2020 at 17:42, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Attached are both the patches. The first one is to improve existing\n> > > error context information, so I think we should back-patch to 13. The\n> > > second one is to add additional vacuum error context information, so\n> > > that is for only HEAD. Does that make sense? Also, let me know if you\n> > > have any more comments.\n> >\n> > Thanks Amit for updating the patch. All changes in v7-02 look fine to me.\n> >\n>\n> Okay, pushed v7-02 as well. I have marked the entry for this in CF as committed.\n\nThank you!\n\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 26 Aug 2020 15:54:58 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: display offset along with block number in vacuum errors" } ]
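For reference, a rough sketch of the error-context callback shape discussed in the thread above, assembled only from the fragments quoted in these messages; the struct, enum, and message wording here are assumptions and may differ from what was actually committed:

static void
vacuum_error_callback(void *arg)
{
    LVRelStats *errinfo = (LVRelStats *) arg;   /* per the quoted diffs in this thread */

    switch (errinfo->phase)
    {
        case VACUUM_ERRCB_PHASE_SCAN_HEAP:
            if (BlockNumberIsValid(errinfo->blkno))
            {
                if (OffsetNumberIsValid(errinfo->offnum))
                    errcontext("while scanning block %u and offset %u of relation \"%s.%s\"",
                               errinfo->blkno, errinfo->offnum,
                               errinfo->relnamespace, errinfo->relname);
                else
                    errcontext("while scanning block %u of relation \"%s.%s\"",
                               errinfo->blkno,
                               errinfo->relnamespace, errinfo->relname);
            }
            else
                errcontext("while scanning relation \"%s.%s\"",
                           errinfo->relnamespace, errinfo->relname);
            break;

        default:
            /* the vacuum-heap and index vacuum/cleanup phases follow the same pattern */
            break;
    }
}

The exact message text went through several revisions in the v4-v7 patches above, so treat the strings here as placeholders rather than the committed wording.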
[ { "msg_contents": "I would like to propose a patch for enabling the parallelism for the\nbitmap index scan path.\n\nBackground:\nCurrently, we support only a parallel bitmap heap scan path. Therein,\nthe underlying bitmap index scan is done by a single worker called the\nleader. The leader creates a bitmap in shared memory and once the\nbitmap is ready it creates a shared iterator and after that, all the\nworkers process the shared iterator and scan the heap in parallel.\nWhile analyzing the TPCH plan we have observed that some of the\nqueries are spending significant time in preparing the bitmap. So the\nidea of this patch is to use the parallel index scan for preparing the\nunderlying bitmap in parallel.\n\nDesign:\nIf underlying index AM supports the parallel path (currently only\nBTREE support it), then we will create a parallel bitmap heap scan\npath on top of the parallel bitmap index scan path. So the idea of\nthis patch is that each worker will do the parallel index scan and\ngenerate their part of the bitmap. And, we will create a barrier so\nthat we can not start preparing the shared iterator until all the\nworker is ready with their bitmap. The first worker, which is ready\nwith the bitmap will keep a copy of its TBM and the page table in the\nshared memory. And, all the subsequent workers will merge their TBM\nwith the shared TBM. Once all the TBM are merged we will get one\ncommon shared TBM and after that stage, the worker can continue. The\nremaining part is the same, basically, again one worker will scan the\nshared TBM and prepare the shared iterator and once it is ready all\nthe workers will jointly scan the heap in parallel using shared\niterator.\n\nBitmapHeapNext\n{\n...\nBarrierAttach();\ntbm = MultiExecProcNode();\ntbm_merge(tbm); --Merge with common tbm using tbm_union\nBarrierArriveAndWait();\n\nif (BitmapShouldInitializeSharedState(pstate)). --> only one worker\ncome out of this\n{\n tbm_prepare_shared_iterate();\n BitmapDoneInitializingSharedState(). -->wakeup others\n}\ntbm_attach_shared_iterate(). --> all worker attach to shared iterator\n...\n}\n\nPerformance: With scale factor 10, I could see that Q6 is spending\nsignificant time in a bitmap index scan so I have taken the\nperformance with that query and I can see that the bitmap index scan\nnode is 3x faster by using 3 workers whereas overall plan got ~40%\nfaster.\n\nTPCH: S.F. 
10, work_mem=512MB shared_buffers: 1GB\n\nHEAD:\n Limit (cost=1559777.02..1559777.03 rows=1 width=32) (actual\ntime=5260.121..5260.122 rows=1 loops=1)\n -> Finalize Aggregate (cost=1559777.02..1559777.03 rows=1\nwidth=32) (actual time=5260.119..5260.119 rows=1 loops=1)\n -> Gather (cost=1559776.69..1559777.00 rows=3 width=32)\n(actual time=5257.251..5289.595 rows=4 loops=1)\n Workers Planned: 3\n Workers Launched: 3\n -> Partial Aggregate (cost=1558776.69..1558776.70\nrows=1 width=32) (actual time=5247.714..5247.714 rows=1 loops=4)\n -> Parallel Bitmap Heap Scan on lineitem\n(cost=300603.01..1556898.89 rows=375560 width=12) (actual\ntime=3475.944..50\n37.484 rows=285808 loops=4)\n Recheck Cond: ((l_shipdate >=\n'1997-01-01'::date) AND (l_shipdate < '1998-01-01 00:00:00'::timestamp\nwithout tim\ne zone) AND (l_discount >= 0.02) AND (l_discount <= 0.04) AND\n(l_quantity < '24'::numeric))\n Heap Blocks: exact=205250\n -> Bitmap Index Scan on\nidx_lineitem_shipdate (cost=0.00..300311.95 rows=1164235 width=0)\n(actual time=3169.85\n5..3169.855 rows=1143234 loops=1)\n Index Cond: ((l_shipdate >=\n'1997-01-01'::date) AND (l_shipdate < '1998-01-01 00:00:00'::timestamp\nwithout\n time zone) AND (l_discount >= 0.02) AND (l_discount <= 0.04) AND\n(l_quantity < '24'::numeric))\n Planning Time: 0.659 ms\n Execution Time: 5289.787 ms\n(13 rows)\n\n\nPATCH:\n\n Limit (cost=1559579.85..1559579.86 rows=1 width=32) (actual\ntime=3333.572..3333.572 rows=1 loops=1)\n -> Finalize Aggregate (cost=1559579.85..1559579.86 rows=1\nwidth=32) (actual time=3333.569..3333.569 rows=1 loops=1)\n -> Gather (cost=1559579.52..1559579.83 rows=3 width=32)\n(actual time=3328.619..3365.227 rows=4 loops=1)\n Workers Planned: 3\n Workers Launched: 3\n -> Partial Aggregate (cost=1558579.52..1558579.53\nrows=1 width=32) (actual time=3307.805..3307.805 rows=1 loops=4)\n -> Parallel Bitmap Heap Scan on lineitem\n(cost=300405.84..1556701.72 rows=375560 width=12) (actual\ntime=1585.726..30\n97.628 rows=285808 loops=4)\n Recheck Cond: ((l_shipdate >=\n'1997-01-01'::date) AND (l_shipdate < '1998-01-01 00:00:00'::timestamp\nwithout tim\ne zone) AND (l_discount >= 0.02) AND (l_discount <= 0.04) AND\n(l_quantity < '24'::numeric))\n Heap Blocks: exact=184293\n -> Parallel Bitmap Index Scan on\nidx_lineitem_shipdate (cost=0.00..300311.95 rows=1164235 width=0)\n(actual tim\ne=1008.361..1008.361 rows=285808 loops=4)\n Index Cond: ((l_shipdate >=\n'1997-01-01'::date) AND (l_shipdate < '1998-01-01 00:00:00'::timestamp\nwithout\n time zone) AND (l_discount >= 0.02) AND (l_discount <= 0.04) AND\n(l_quantity < '24'::numeric))\n Planning Time: 0.690 ms\n Execution Time: 3365.420 ms\n\nNote:\n- Currently, I have only parallelized then bitmap index path when we\nhave a bitmap index scan directly under bitmap heap. But, if we have\nBitmapAnd or BitmapOr path then I did not parallelize the underlying\nbitmap index scan. I think for BitmapAnd and BitmapOr we should use a\ncompletely different design, something similar to what we are doing in\nparallel append so I don't think BitmapAnd and BitmapOr we need to\ncover under this patch.\n\n- POC patch is attached to discuss the idea. 
The patch still needs\ncleanup and testing.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 26 Jul 2020 18:42:31 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Parallel bitmap index scan" }, { "msg_contents": "On Sun, Jul 26, 2020 at 6:42 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I would like to propose a patch for enabling the parallelism for the\n> bitmap index scan path.\n>\n> Background:\n> Currently, we support only a parallel bitmap heap scan path. Therein,\n> the underlying bitmap index scan is done by a single worker called the\n> leader. The leader creates a bitmap in shared memory and once the\n> bitmap is ready it creates a shared iterator and after that, all the\n> workers process the shared iterator and scan the heap in parallel.\n> While analyzing the TPCH plan we have observed that some of the\n> queries are spending significant time in preparing the bitmap. So the\n> idea of this patch is to use the parallel index scan for preparing the\n> underlying bitmap in parallel.\n>\n> Design:\n> If underlying index AM supports the parallel path (currently only\n> BTREE support it), then we will create a parallel bitmap heap scan\n> path on top of the parallel bitmap index scan path. So the idea of\n> this patch is that each worker will do the parallel index scan and\n> generate their part of the bitmap. And, we will create a barrier so\n> that we can not start preparing the shared iterator until all the\n> worker is ready with their bitmap. The first worker, which is ready\n> with the bitmap will keep a copy of its TBM and the page table in the\n> shared memory. And, all the subsequent workers will merge their TBM\n> with the shared TBM. Once all the TBM are merged we will get one\n> common shared TBM and after that stage, the worker can continue. The\n> remaining part is the same, basically, again one worker will scan the\n> shared TBM and prepare the shared iterator and once it is ready all\n> the workers will jointly scan the heap in parallel using shared\n> iterator.\n>\n> BitmapHeapNext\n> {\n> ...\n> BarrierAttach();\n> tbm = MultiExecProcNode();\n> tbm_merge(tbm); --Merge with common tbm using tbm_union\n> BarrierArriveAndWait();\n>\n> if (BitmapShouldInitializeSharedState(pstate)). --> only one worker\n> come out of this\n> {\n> tbm_prepare_shared_iterate();\n> BitmapDoneInitializingSharedState(). -->wakeup others\n> }\n> tbm_attach_shared_iterate(). --> all worker attach to shared iterator\n> ...\n> }\n>\n> Performance: With scale factor 10, I could see that Q6 is spending\n> significant time in a bitmap index scan so I have taken the\n> performance with that query and I can see that the bitmap index scan\n> node is 3x faster by using 3 workers whereas overall plan got ~40%\n> faster.\n>\n> TPCH: S.F. 
10, work_mem=512MB shared_buffers: 1GB\n>\n> HEAD:\n> Limit (cost=1559777.02..1559777.03 rows=1 width=32) (actual\n> time=5260.121..5260.122 rows=1 loops=1)\n> -> Finalize Aggregate (cost=1559777.02..1559777.03 rows=1\n> width=32) (actual time=5260.119..5260.119 rows=1 loops=1)\n> -> Gather (cost=1559776.69..1559777.00 rows=3 width=32)\n> (actual time=5257.251..5289.595 rows=4 loops=1)\n> Workers Planned: 3\n> Workers Launched: 3\n> -> Partial Aggregate (cost=1558776.69..1558776.70\n> rows=1 width=32) (actual time=5247.714..5247.714 rows=1 loops=4)\n> -> Parallel Bitmap Heap Scan on lineitem\n> (cost=300603.01..1556898.89 rows=375560 width=12) (actual\n> time=3475.944..50\n> 37.484 rows=285808 loops=4)\n> Recheck Cond: ((l_shipdate >=\n> '1997-01-01'::date) AND (l_shipdate < '1998-01-01 00:00:00'::timestamp\n> without tim\n> e zone) AND (l_discount >= 0.02) AND (l_discount <= 0.04) AND\n> (l_quantity < '24'::numeric))\n> Heap Blocks: exact=205250\n> -> Bitmap Index Scan on\n> idx_lineitem_shipdate (cost=0.00..300311.95 rows=1164235 width=0)\n> (actual time=3169.85\n> 5..3169.855 rows=1143234 loops=1)\n> Index Cond: ((l_shipdate >=\n> '1997-01-01'::date) AND (l_shipdate < '1998-01-01 00:00:00'::timestamp\n> without\n> time zone) AND (l_discount >= 0.02) AND (l_discount <= 0.04) AND\n> (l_quantity < '24'::numeric))\n> Planning Time: 0.659 ms\n> Execution Time: 5289.787 ms\n> (13 rows)\n>\n>\n> PATCH:\n>\n> Limit (cost=1559579.85..1559579.86 rows=1 width=32) (actual\n> time=3333.572..3333.572 rows=1 loops=1)\n> -> Finalize Aggregate (cost=1559579.85..1559579.86 rows=1\n> width=32) (actual time=3333.569..3333.569 rows=1 loops=1)\n> -> Gather (cost=1559579.52..1559579.83 rows=3 width=32)\n> (actual time=3328.619..3365.227 rows=4 loops=1)\n> Workers Planned: 3\n> Workers Launched: 3\n> -> Partial Aggregate (cost=1558579.52..1558579.53\n> rows=1 width=32) (actual time=3307.805..3307.805 rows=1 loops=4)\n> -> Parallel Bitmap Heap Scan on lineitem\n> (cost=300405.84..1556701.72 rows=375560 width=12) (actual\n> time=1585.726..30\n> 97.628 rows=285808 loops=4)\n> Recheck Cond: ((l_shipdate >=\n> '1997-01-01'::date) AND (l_shipdate < '1998-01-01 00:00:00'::timestamp\n> without tim\n> e zone) AND (l_discount >= 0.02) AND (l_discount <= 0.04) AND\n> (l_quantity < '24'::numeric))\n> Heap Blocks: exact=184293\n> -> Parallel Bitmap Index Scan on\n> idx_lineitem_shipdate (cost=0.00..300311.95 rows=1164235 width=0)\n> (actual tim\n> e=1008.361..1008.361 rows=285808 loops=4)\n> Index Cond: ((l_shipdate >=\n> '1997-01-01'::date) AND (l_shipdate < '1998-01-01 00:00:00'::timestamp\n> without\n> time zone) AND (l_discount >= 0.02) AND (l_discount <= 0.04) AND\n> (l_quantity < '24'::numeric))\n> Planning Time: 0.690 ms\n> Execution Time: 3365.420 ms\n>\n> Note:\n> - Currently, I have only parallelized then bitmap index path when we\n> have a bitmap index scan directly under bitmap heap. But, if we have\n> BitmapAnd or BitmapOr path then I did not parallelize the underlying\n> bitmap index scan. I think for BitmapAnd and BitmapOr we should use a\n> completely different design, something similar to what we are doing in\n> parallel append so I don't think BitmapAnd and BitmapOr we need to\n> cover under this patch.\n>\n> - POC patch is attached to discuss the idea. 
The patch still needs\n> cleanup and testing.\n>\n\nThere was one compilation warning so fixed in this version.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 26 Jul 2020 19:27:45 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel bitmap index scan" }, { "msg_contents": "On Mon, Jul 27, 2020 at 1:58 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> On Sun, Jul 26, 2020 at 6:42 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I would like to propose a patch for enabling the parallelism for the\n> > bitmap index scan path.\n\n Workers Planned: 4\n -> Parallel Bitmap Heap Scan on tenk1\n Recheck Cond: (hundred > 1)\n- -> Bitmap Index Scan on tenk1_hundred\n+ -> Parallel Bitmap Index Scan on tenk1_hundred\n Index Cond: (hundred > 1)\n\n+1, this is a very good feature to have.\n\n+ /* Merge bitmap to a common\nshared bitmap */\n+ SpinLockAcquire(&pstate->mutex);\n+ tbm_merge(tbm,\n&pstate->tbm_shared, &pstate->pt_shared);\n+ SpinLockRelease(&pstate->mutex);\n\nThis doesn't look like a job for a spinlock.\n\nYou have a barrier so that you can wait until all workers have\nfinished merging their partial bitmap into the complete bitmap, which\nmakes perfect sense. You also use that spinlock (probably should be\nLWLock) to serialise the bitmap merging work... Hmm, I suppose there\nwould be an alternative design which also uses the barrier to\nserialise the merge, and has the merging done entirely by one process,\nlike this:\n\n bool chosen_to_merge = false;\n\n /* Attach to the barrier, and see what phase we're up to. */\n switch (BarrierAttach())\n {\n case PBH_BUILDING:\n ... build my partial bitmap in shmem ...\n chosen_to_merge = BarrierArriveAndWait();\n /* Fall through */\n\n case PBH_MERGING:\n if (chosen_to_merge)\n ... perform merge of all partial results into final shmem bitmap ...\n BarrierArriveAndWait();\n /* Fall through */\n\n case PBH_SCANNING:\n /* We attached too late to help build the bitmap. */\n BarrierDetach();\n break;\n }\n\nJust an idea, not sure if it's a good one. 
I find it a little easier\nto reason about the behaviour of late-attaching workers when the\nphases are explicitly named and handled with code like that (it's not\nimmediately clear to me whether your coding handles late attachers\ncorrectly, which seems to be one of the main problems to think about\nwith \"dynamic party\" parallelism...).\n\n\n", "msg_date": "Mon, 27 Jul 2020 10:17:36 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel bitmap index scan" }, { "msg_contents": "On Mon, 27 Jul 2020 at 3:48 AM, Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Mon, Jul 27, 2020 at 1:58 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > On Sun, Jul 26, 2020 at 6:42 PM Dilip Kumar <dilipbalaut@gmail.com>\n> wrote:\n> > >\n> > > I would like to propose a patch for enabling the parallelism for the\n> > > bitmap index scan path.\n>\n> Workers Planned: 4\n> -> Parallel Bitmap Heap Scan on tenk1\n> Recheck Cond: (hundred > 1)\n> - -> Bitmap Index Scan on tenk1_hundred\n> + -> Parallel Bitmap Index Scan on tenk1_hundred\n> Index Cond: (hundred > 1)\n>\n> +1, this is a very good feature to have.\n>\n> + /* Merge bitmap to a common\n> shared bitmap */\n> + SpinLockAcquire(&pstate->mutex);\n> + tbm_merge(tbm,\n> &pstate->tbm_shared, &pstate->pt_shared);\n> + SpinLockRelease(&pstate->mutex);\n>\n> This doesn't look like a job for a spinlock.\n\n\nYes I agree with that.\n\nYou have a barrier so that you can wait until all workers have\n> finished merging their partial bitmap into the complete bitmap, which\n> makes perfect sense. You also use that spinlock (probably should be\n> LWLock) to serialise the bitmap merging work... Hmm, I suppose there\n> would be an alternative design which also uses the barrier to\n> serialise the merge, and has the merging done entirely by one process,\n> like this:\n>\n> bool chosen_to_merge = false;\n>\n> /* Attach to the barrier, and see what phase we're up to. */\n> switch (BarrierAttach())\n> {\n> case PBH_BUILDING:\n> ... build my partial bitmap in shmem ...\n> chosen_to_merge = BarrierArriveAndWait();\n> /* Fall through */\n>\n> case PBH_MERGING:\n> if (chosen_to_merge)\n> ... perform merge of all partial results into final shmem bitmap ...\n> BarrierArriveAndWait();\n> /* Fall through */\n>\n> case PBH_SCANNING:\n> /* We attached too late to help build the bitmap. */\n> BarrierDetach();\n> break;\n> }\n>\n> Just an idea, not sure if it's a good one. I find it a little easier\n> to reason about the behaviour of late-attaching workers when the\n> phases are explicitly named and handled with code like that (it's not\n> immediately clear to me whether your coding handles late attachers\n> correctly, which seems to be one of the main problems to think about\n> with \"dynamic party\" parallelism...).\n\n\nYeah this seems better idea. I am handling late attachers case but the\nidea of using the barrier phase looks quite clean. 
I will change it this\nway.\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Mon, 27 Jul 2020 at 3:48 AM, Thomas Munro <thomas.munro@gmail.com> wrote:On Mon, Jul 27, 2020 at 1:58 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> On Sun, Jul 26, 2020 at 6:42 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I would like to propose a patch for enabling the parallelism for the\n> > bitmap index scan path.\n\n                Workers Planned: 4\n                ->  Parallel Bitmap Heap Scan on tenk1\n                      Recheck Cond: (hundred > 1)\n-                     ->  Bitmap Index Scan on tenk1_hundred\n+                     ->  Parallel Bitmap Index Scan on tenk1_hundred\n                            Index Cond: (hundred > 1)\n\n+1, this is a very good feature to have.\n\n+                                       /* Merge bitmap to a common\nshared bitmap */\n+                                       SpinLockAcquire(&pstate->mutex);\n+                                       tbm_merge(tbm,\n&pstate->tbm_shared, &pstate->pt_shared);\n+                                       SpinLockRelease(&pstate->mutex);\n\nThis doesn't look like a job for a spinlock.Yes I agree with that.\nYou have a barrier so that you can wait until all workers have\nfinished merging their partial bitmap into the complete bitmap, which\nmakes perfect sense.  You also use that spinlock (probably should be\nLWLock) to serialise the bitmap merging work...  Hmm, I suppose there\nwould be an alternative design which also uses the barrier to\nserialise the merge, and has the merging done entirely by one process,\nlike this:\n\n  bool chosen_to_merge = false;\n\n  /* Attach to the barrier, and see what phase we're up to. */\n  switch (BarrierAttach())\n  {\n  case PBH_BUILDING:\n    ... build my partial bitmap in shmem ...\n    chosen_to_merge = BarrierArriveAndWait();\n    /* Fall through */\n\n  case PBH_MERGING:\n    if (chosen_to_merge)\n      ... perform merge of all partial results into final shmem bitmap ...\n    BarrierArriveAndWait();\n    /* Fall through */\n\n  case PBH_SCANNING:\n    /* We attached too late to help build the bitmap.  */\n    BarrierDetach();\n    break;\n  }\n\nJust an idea, not sure if it's a good one.  I find it a little easier\nto reason about the behaviour of late-attaching workers when the\nphases are explicitly named and handled with code like that (it's not\nimmediately clear to me whether your coding handles late attachers\ncorrectly, which seems to be one of the main problems to think about\nwith \"dynamic party\" parallelism...).Yeah this seems better idea.  I am handling late attachers case but the idea of using the barrier phase looks quite clean.  
I will change it this way.-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 27 Jul 2020 06:43:00 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel bitmap index scan" }, { "msg_contents": "On Mon, Jul 27, 2020 at 3:48 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Mon, Jul 27, 2020 at 1:58 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > On Sun, Jul 26, 2020 at 6:42 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > I would like to propose a patch for enabling the parallelism for the\n> > > bitmap index scan path.\n>\n> Workers Planned: 4\n> -> Parallel Bitmap Heap Scan on tenk1\n> Recheck Cond: (hundred > 1)\n> - -> Bitmap Index Scan on tenk1_hundred\n> + -> Parallel Bitmap Index Scan on tenk1_hundred\n> Index Cond: (hundred > 1)\n>\n> +1, this is a very good feature to have.\n>\n> + /* Merge bitmap to a common\n> shared bitmap */\n> + SpinLockAcquire(&pstate->mutex);\n> + tbm_merge(tbm,\n> &pstate->tbm_shared, &pstate->pt_shared);\n> + SpinLockRelease(&pstate->mutex);\n>\n> This doesn't look like a job for a spinlock.\n>\n> You have a barrier so that you can wait until all workers have\n> finished merging their partial bitmap into the complete bitmap, which\n> makes perfect sense. You also use that spinlock (probably should be\n> LWLock) to serialise the bitmap merging work... Hmm, I suppose there\n> would be an alternative design which also uses the barrier to\n> serialise the merge, and has the merging done entirely by one process,\n> like this:\n>\n> bool chosen_to_merge = false;\n>\n> /* Attach to the barrier, and see what phase we're up to. */\n> switch (BarrierAttach())\n> {\n> case PBH_BUILDING:\n> ... build my partial bitmap in shmem ...\n> chosen_to_merge = BarrierArriveAndWait();\n> /* Fall through */\n>\n> case PBH_MERGING:\n> if (chosen_to_merge)\n> ... perform merge of all partial results into final shmem bitmap ...\n> BarrierArriveAndWait();\n> /* Fall through */\n>\n> case PBH_SCANNING:\n> /* We attached too late to help build the bitmap. */\n> BarrierDetach();\n> break;\n> }\n>\n> Just an idea, not sure if it's a good one. I find it a little easier\n> to reason about the behaviour of late-attaching workers when the\n> phases are explicitly named and handled with code like that (it's not\n> immediately clear to me whether your coding handles late attachers\n> correctly, which seems to be one of the main problems to think about\n> with \"dynamic party\" parallelism...).\n\nActually, for merging, I could not use the strategy you suggested\nbecause in this case all the worker prepare their TBM and merge to the\nshared TBM. Basically, we don't need to choose a leader for that all\nthe workers need to merge their TBM to the shared location but one at\na time, and also we don't need to wait for all the workers to prepare\nTBM before they start merging. However, once the merge is over we\nneed to wait for all the workers to finish the merge and after that\nonly one worker will be allowed to prepare the shared iterator. 
So\nfor that, I have used your idea of the barrier phase.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 28 Jul 2020 14:06:02 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel bitmap index scan" }, { "msg_contents": "On Sun, Jul 26, 2020 at 6:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I would like to propose a patch for enabling the parallelism for the\n> bitmap index scan path.\n>\n> Background:\n> Currently, we support only a parallel bitmap heap scan path. Therein,\n> the underlying bitmap index scan is done by a single worker called the\n> leader. The leader creates a bitmap in shared memory and once the\n> bitmap is ready it creates a shared iterator and after that, all the\n> workers process the shared iterator and scan the heap in parallel.\n> While analyzing the TPCH plan we have observed that some of the\n> queries are spending significant time in preparing the bitmap. So the\n> idea of this patch is to use the parallel index scan for preparing the\n> underlying bitmap in parallel.\n>\n> Design:\n> If underlying index AM supports the parallel path (currently only\n> BTREE support it), then we will create a parallel bitmap heap scan\n> path on top of the parallel bitmap index scan path. So the idea of\n> this patch is that each worker will do the parallel index scan and\n> generate their part of the bitmap. And, we will create a barrier so\n> that we can not start preparing the shared iterator until all the\n> worker is ready with their bitmap. The first worker, which is ready\n> with the bitmap will keep a copy of its TBM and the page table in the\n> shared memory. And, all the subsequent workers will merge their TBM\n> with the shared TBM. Once all the TBM are merged we will get one\n> common shared TBM and after that stage, the worker can continue. The\n> remaining part is the same, basically, again one worker will scan the\n> shared TBM and prepare the shared iterator and once it is ready all\n> the workers will jointly scan the heap in parallel using shared\n> iterator.\n>\n\nThough I have not looked at the patch or code for the existing\nparallel bitmap heap scan, one point keeps bugging in my mind. I may\nbe utterly wrong or my question may be so silly, anyways I would like\nto ask here:\n\n From the above design: each parallel worker creates partial bitmaps\nfor the index data that they looked at. Why should they merge these\nbitmaps to a single bitmap in shared memory? Why can't each parallel\nworker do a bitmap heap scan using the partial bitmaps they built\nduring it's bitmap index scan and emit qualified tuples/rows so that\nthe gather node can collect them? There may not be even lock\ncontention as bitmap heap scan takes read locks for the heap\npages/tuples.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Aug 2020 19:41:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel bitmap index scan" }, { "msg_contents": "On Mon, 17 Aug 2020 at 7:42 PM, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Sun, Jul 26, 2020 at 6:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> >\n>\n> > I would like to propose a patch for enabling the parallelism for the\n>\n> > bitmap index scan path.\n>\n> >\n>\n> > Background:\n>\n> > Currently, we support only a parallel bitmap heap scan path. 
Therein,\n>\n> > the underlying bitmap index scan is done by a single worker called the\n>\n> > leader. The leader creates a bitmap in shared memory and once the\n>\n> > bitmap is ready it creates a shared iterator and after that, all the\n>\n> > workers process the shared iterator and scan the heap in parallel.\n>\n> > While analyzing the TPCH plan we have observed that some of the\n>\n> > queries are spending significant time in preparing the bitmap. So the\n>\n> > idea of this patch is to use the parallel index scan for preparing the\n>\n> > underlying bitmap in parallel.\n>\n> >\n>\n> > Design:\n>\n> > If underlying index AM supports the parallel path (currently only\n>\n> > BTREE support it), then we will create a parallel bitmap heap scan\n>\n> > path on top of the parallel bitmap index scan path. So the idea of\n>\n> > this patch is that each worker will do the parallel index scan and\n>\n> > generate their part of the bitmap. And, we will create a barrier so\n>\n> > that we can not start preparing the shared iterator until all the\n>\n> > worker is ready with their bitmap. The first worker, which is ready\n>\n> > with the bitmap will keep a copy of its TBM and the page table in the\n>\n> > shared memory. And, all the subsequent workers will merge their TBM\n>\n> > with the shared TBM. Once all the TBM are merged we will get one\n>\n> > common shared TBM and after that stage, the worker can continue. The\n>\n> > remaining part is the same, basically, again one worker will scan the\n>\n> > shared TBM and prepare the shared iterator and once it is ready all\n>\n> > the workers will jointly scan the heap in parallel using shared\n>\n> > iterator.\n>\n> >\n>\n>\n>\n> Though I have not looked at the patch or code for the existing\n>\n> parallel bitmap heap scan, one point keeps bugging in my mind. I may\n>\n> be utterly wrong or my question may be so silly, anyways I would like\n>\n> to ask here:\n>\n>\n>\n> From the above design: each parallel worker creates partial bitmaps\n>\n> for the index data that they looked at. Why should they merge these\n>\n> bitmaps to a single bitmap in shared memory? Why can't each parallel\n>\n> worker do a bitmap heap scan using the partial bitmaps they built\n>\n> during it's bitmap index scan and emit qualified tuples/rows so that\n>\n> the gather node can collect them? There may not be even lock\n>\n> contention as bitmap heap scan takes read locks for the heap\n>\n> pages/tuples.\n\n\nThe main reason is that there could be lossy pages in bitmap and if that is\nthe case then there will be duplicate data. Maybe if there is no lossy\ndata in any of the bitmap we might do as u describe but still I think that\nit is very much possible that different bitmap might have many common heap\npages because bitmap is prepared using index scan. And in such cases we\nwill be doing duplicate i/o.\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Mon, 17 Aug 2020 at 7:42 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:On Sun, Jul 26, 2020 at 6:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:>> I would like to propose a patch for enabling the parallelism for the> bitmap index scan path.>> Background:> Currently, we support only a parallel bitmap heap scan path.  Therein,> the underlying bitmap index scan is done by a single worker called the> leader.  
The leader creates a bitmap in shared memory and once the> bitmap is ready it creates a shared iterator and after that, all the> workers process the shared iterator and scan the heap in parallel.> While analyzing the TPCH plan we have observed that some of the> queries are spending significant time in preparing the bitmap.  So the> idea of this patch is to use the parallel index scan for preparing the> underlying bitmap in parallel.>> Design:> If underlying index AM supports the parallel path (currently only> BTREE support it), then we will create a parallel bitmap heap scan> path on top of the parallel bitmap index scan path.  So the idea of> this patch is that each worker will do the parallel index scan and> generate their part of the bitmap.  And, we will create a barrier so> that we can not start preparing the shared iterator until all the> worker is ready with their bitmap.  The first worker, which is ready> with the bitmap will keep a copy of its TBM and the page table in the> shared memory.  And, all the subsequent workers will merge their TBM> with the shared TBM.  Once all the TBM are merged we will get one> common shared TBM and after that stage, the worker can continue.  The> remaining part is the same,  basically, again one worker will scan the> shared TBM and prepare the shared iterator and once it is ready all> the workers will jointly scan the heap in parallel using shared> iterator.>Though I have not looked at the patch or code for the existingparallel bitmap heap scan, one point keeps bugging in my mind. I maybe utterly wrong or my question may be so silly, anyways I would liketo ask here:From the above design: each parallel worker creates partial bitmapsfor the index data that they looked at. Why should they merge thesebitmaps to a single bitmap in shared memory? Why can't each parallelworker do a bitmap heap scan using the partial bitmaps they builtduring it's bitmap index scan and emit qualified tuples/rows so thatthe gather node can collect them? There may not be even lockcontention as bitmap heap scan takes read locks for the heappages/tuples.The main reason is that there could be lossy pages in bitmap and if that is the case then there will be duplicate data.  Maybe if there is no lossy data in any of the bitmap we might do as u describe but still I think that it is very much possible that different bitmap might have many common heap pages because bitmap is prepared using index scan.  And in such cases we will be doing duplicate i/o.-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 17 Aug 2020 19:48:10 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel bitmap index scan" }, { "msg_contents": "On Sun, Jul 26, 2020 at 6:42 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I would like to propose a patch for enabling the parallelism for the\n> bitmap index scan path.\n>\n> Background:\n> Currently, we support only a parallel bitmap heap scan path. Therein,\n> the underlying bitmap index scan is done by a single worker called the\n> leader. The leader creates a bitmap in shared memory and once the\n> bitmap is ready it creates a shared iterator and after that, all the\n> workers process the shared iterator and scan the heap in parallel.\n> While analyzing the TPCH plan we have observed that some of the\n> queries are spending significant time in preparing the bitmap. 
So the\n> idea of this patch is to use the parallel index scan for preparing the\n> underlying bitmap in parallel.\n>\n> Design:\n> If underlying index AM supports the parallel path (currently only\n> BTREE support it), then we will create a parallel bitmap heap scan\n> path on top of the parallel bitmap index scan path. So the idea of\n> this patch is that each worker will do the parallel index scan and\n> generate their part of the bitmap. And, we will create a barrier so\n> that we can not start preparing the shared iterator until all the\n> worker is ready with their bitmap. The first worker, which is ready\n> with the bitmap will keep a copy of its TBM and the page table in the\n> shared memory. And, all the subsequent workers will merge their TBM\n> with the shared TBM. Once all the TBM are merged we will get one\n> common shared TBM and after that stage, the worker can continue. The\n> remaining part is the same, basically, again one worker will scan the\n> shared TBM and prepare the shared iterator and once it is ready all\n> the workers will jointly scan the heap in parallel using shared\n> iterator.\n>\n> BitmapHeapNext\n> {\n> ...\n> BarrierAttach();\n> tbm = MultiExecProcNode();\n> tbm_merge(tbm); --Merge with common tbm using tbm_union\n> BarrierArriveAndWait();\n>\n> if (BitmapShouldInitializeSharedState(pstate)). --> only one worker\n> come out of this\n> {\n> tbm_prepare_shared_iterate();\n> BitmapDoneInitializingSharedState(). -->wakeup others\n> }\n> tbm_attach_shared_iterate(). --> all worker attach to shared iterator\n> ...\n> }\n>\n> Performance: With scale factor 10, I could see that Q6 is spending\n> significant time in a bitmap index scan so I have taken the\n> performance with that query and I can see that the bitmap index scan\n> node is 3x faster by using 3 workers whereas overall plan got ~40%\n> faster.\n>\n> TPCH: S.F. 
10, work_mem=512MB shared_buffers: 1GB\n>\n> HEAD:\n> Limit (cost=1559777.02..1559777.03 rows=1 width=32) (actual\n> time=5260.121..5260.122 rows=1 loops=1)\n> -> Finalize Aggregate (cost=1559777.02..1559777.03 rows=1\n> width=32) (actual time=5260.119..5260.119 rows=1 loops=1)\n> -> Gather (cost=1559776.69..1559777.00 rows=3 width=32)\n> (actual time=5257.251..5289.595 rows=4 loops=1)\n> Workers Planned: 3\n> Workers Launched: 3\n> -> Partial Aggregate (cost=1558776.69..1558776.70\n> rows=1 width=32) (actual time=5247.714..5247.714 rows=1 loops=4)\n> -> Parallel Bitmap Heap Scan on lineitem\n> (cost=300603.01..1556898.89 rows=375560 width=12) (actual\n> time=3475.944..50\n> 37.484 rows=285808 loops=4)\n> Recheck Cond: ((l_shipdate >=\n> '1997-01-01'::date) AND (l_shipdate < '1998-01-01 00:00:00'::timestamp\n> without tim\n> e zone) AND (l_discount >= 0.02) AND (l_discount <= 0.04) AND\n> (l_quantity < '24'::numeric))\n> Heap Blocks: exact=205250\n> -> Bitmap Index Scan on\n> idx_lineitem_shipdate (cost=0.00..300311.95 rows=1164235 width=0)\n> (actual time=3169.85\n> 5..3169.855 rows=1143234 loops=1)\n> Index Cond: ((l_shipdate >=\n> '1997-01-01'::date) AND (l_shipdate < '1998-01-01 00:00:00'::timestamp\n> without\n> time zone) AND (l_discount >= 0.02) AND (l_discount <= 0.04) AND\n> (l_quantity < '24'::numeric))\n> Planning Time: 0.659 ms\n> Execution Time: 5289.787 ms\n> (13 rows)\n>\n>\n> PATCH:\n>\n> Limit (cost=1559579.85..1559579.86 rows=1 width=32) (actual\n> time=3333.572..3333.572 rows=1 loops=1)\n> -> Finalize Aggregate (cost=1559579.85..1559579.86 rows=1\n> width=32) (actual time=3333.569..3333.569 rows=1 loops=1)\n> -> Gather (cost=1559579.52..1559579.83 rows=3 width=32)\n> (actual time=3328.619..3365.227 rows=4 loops=1)\n> Workers Planned: 3\n> Workers Launched: 3\n> -> Partial Aggregate (cost=1558579.52..1558579.53\n> rows=1 width=32) (actual time=3307.805..3307.805 rows=1 loops=4)\n> -> Parallel Bitmap Heap Scan on lineitem\n> (cost=300405.84..1556701.72 rows=375560 width=12) (actual\n> time=1585.726..30\n> 97.628 rows=285808 loops=4)\n> Recheck Cond: ((l_shipdate >=\n> '1997-01-01'::date) AND (l_shipdate < '1998-01-01 00:00:00'::timestamp\n> without tim\n> e zone) AND (l_discount >= 0.02) AND (l_discount <= 0.04) AND\n> (l_quantity < '24'::numeric))\n> Heap Blocks: exact=184293\n> -> Parallel Bitmap Index Scan on\n> idx_lineitem_shipdate (cost=0.00..300311.95 rows=1164235 width=0)\n> (actual tim\n> e=1008.361..1008.361 rows=285808 loops=4)\n> Index Cond: ((l_shipdate >=\n> '1997-01-01'::date) AND (l_shipdate < '1998-01-01 00:00:00'::timestamp\n> without\n> time zone) AND (l_discount >= 0.02) AND (l_discount <= 0.04) AND\n> (l_quantity < '24'::numeric))\n> Planning Time: 0.690 ms\n> Execution Time: 3365.420 ms\n>\n> Note:\n> - Currently, I have only parallelized then bitmap index path when we\n> have a bitmap index scan directly under bitmap heap. But, if we have\n> BitmapAnd or BitmapOr path then I did not parallelize the underlying\n> bitmap index scan. I think for BitmapAnd and BitmapOr we should use a\n> completely different design, something similar to what we are doing in\n> parallel append so I don't think BitmapAnd and BitmapOr we need to\n> cover under this patch.\n>\n> - POC patch is attached to discuss the idea. The patch still needs\n> cleanup and testing.\n\n I have rebased this patch on the current head. Apart from this, I\nhave also measure performance with the higher scalare factor this\ntime. 
At a higher scale factor I can see the performance with the\npatch is dropping. Basically, the underlying bitmap index scan node\nis getting faster with parallelism but the overall performance is\ngoing down due to the TBM merging in the parallel bitmap heap node.\nCurrently, there is a lot of scope for improving tbm_merge.\n- Currently, whichever worker produces the TBM first becomes the host\nTBM and all the other workers merge their TBM to that. Ideally, the\nlargest TBM should become the host TBM.\n- While merging we are directly using tbm_union and that need to\nreinsert the complete entry in the host TBM's hashtable, I think\ninstead of merging like this we can create just a shared iterator (and\nsomehow remove the duplicates) but don't really need to merge the\nhashtable. I haven't thought about this design completely but seems\ndoable, basically by doing this the TBM iterator array will keep the\nitems from multiple tbm_hashtables.\n\nmax_parallel_workers_per_gather=4\nwork_mem=20GB\nshared_buffes=20GB\n\n\nHEAD\nTPCH QUERY (Parallel BitmapHeap+ BitmapIndex)\n BitmapIndex\n4 19764\n 535\n5 12035\n 1545\n6 119815\n 7943\n14 44154\n 1007\n\nPATCH\nTPCH QUERY (Parallel BitmapHeap+Parallel BitmapIndex).\nParallel BitmapIndex\n4 19765\n 287\n5 13799\n 848\n6 116204\n 3255\n14 44078\n 416\n\nSo if we see the performance results, in most of the queries the time\nspent in the bitmap index is reduced by half or more but still, the\ntotal time spent in the bitmap heap scan is either not reduced\nsignificantly or it is increased.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 21 Sep 2020 11:36:18 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel bitmap index scan" }, { "msg_contents": "Hi,\n\nI took a look at this today, doing a bit of stress-testing, and I can\nget it to crash because of segfaults in pagetable_create (not sure if\nthe issue is there, it might be just a symptom of an issue elsewhere).\n\nAttached is a shell script I use to run the stress test - it's using\n'test' database, generates tables of different size and then runs\nqueries with various parameter combinations. It takes a while to trigger\nthe crash, so it might depend on timing or something like that.\n\nI've also attached two examples of backtraces. I've also seen infinite\nloop in pagetable_create, but the crashes are much more common.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 11 Nov 2020 20:52:56 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Parallel bitmap index scan" }, { "msg_contents": "On 11/11/20 8:52 PM, Tomas Vondra wrote:\n> Hi,\n> \n> I took a look at this today, doing a bit of stress-testing, and I can\n> get it to crash because of segfaults in pagetable_create (not sure if\n> the issue is there, it might be just a symptom of an issue elsewhere).\n> \n> Attached is a shell script I use to run the stress test - it's using\n> 'test' database, generates tables of different size and then runs\n> queries with various parameter combinations. It takes a while to trigger\n> the crash, so it might depend on timing or something like that.\n> \n> I've also attached two examples of backtraces. I've also seen infinite\n> loop in pagetable_create, but the crashes are much more common.\n> \n\nHi Dilip,\n\nDo you plan to work on this for PG14? 
I haven't noticed any response in\nthis thread, dealing with the crashes I reported a while ago. Also, it\ndoesn't seem to be added to any of the commitfests.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 22 Dec 2020 23:45:42 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Parallel bitmap index scan" }, { "msg_contents": "On Wed, 23 Dec 2020 at 4:15 AM, Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> On 11/11/20 8:52 PM, Tomas Vondra wrote:\n> > Hi,\n> >\n> > I took a look at this today, doing a bit of stress-testing, and I can\n> > get it to crash because of segfaults in pagetable_create (not sure if\n> > the issue is there, it might be just a symptom of an issue elsewhere).\n> >\n> > Attached is a shell script I use to run the stress test - it's using\n> > 'test' database, generates tables of different size and then runs\n> > queries with various parameter combinations. It takes a while to trigger\n> > the crash, so it might depend on timing or something like that.\n> >\n> > I've also attached two examples of backtraces. I've also seen infinite\n> > loop in pagetable_create, but the crashes are much more common.\n> >\n>\n> Hi Dilip,\n>\n> Do you plan to work on this for PG14? I haven't noticed any response in\n> this thread, dealing with the crashes I reported a while ago. Also, it\n> doesn't seem to be added to any of the commitfests.\n\n\n\nHi Tomas,\n\nThanks for testing this. Actually we have noticed a lot of performance\ndrop in many cases due to the tbm_merge. So off list we are discussing\ndifferent approaches and testing the performance. So basically, in the\ncurrent approach all the worker are first preparing their bitmap hash and\nthen they are merging into the common bitmap hash under a lock. So based\non the off list discussion with Robert, the next approach I am trying is to\ndirectly insert into the shared bitmap hash while scanning the index\nitself. So now instead of preparing a separate bitmap, all the workers\nwill directly insert into the shared bitmap hash. I agree that for getting\neach page from the bitmaphash we need to acquire the lock and this also\nmight generate a lot of lock contention but we want to try the POC and\ncheck the performance. In fact I have already implemented the POC and\nresults aren't great. But I am still experimenting with it to see whether\nthe lock can be more granular than I have now. I will share my finding\nsoon along with the POC patch.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n>\n\nOn Wed, 23 Dec 2020 at 4:15 AM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:On 11/11/20 8:52 PM, Tomas Vondra wrote:\n> Hi,\n> \n> I took a look at this today, doing a bit of stress-testing, and I can\n> get it to crash because of segfaults in pagetable_create (not sure if\n> the issue is there, it might be just a symptom of an issue elsewhere).\n> \n> Attached is a shell script I use to run the stress test - it's using\n> 'test' database, generates tables of different size and then runs\n> queries with various parameter combinations. It takes a while to trigger\n> the crash, so it might depend on timing or something like that.\n> \n> I've also attached two examples of backtraces. I've also seen infinite\n> loop in pagetable_create, but the crashes are much more common.\n> \n\nHi Dilip,\n\nDo you plan to work on this for PG14? 
I haven't noticed any response in\nthis thread, dealing with the crashes I reported a while ago. Also, it\ndoesn't seem to be added to any of the commitfests.Hi Tomas,Thanks for testing this.  Actually we have noticed a lot of performance drop in many cases due to the tbm_merge.  So off list we are discussing different approaches and testing the performance.  So basically, in the current approach all the worker are first preparing their bitmap hash and then they are merging into the common bitmap hash under a lock.  So based on the off list discussion with Robert, the next approach I am trying is to directly insert into the shared bitmap hash while scanning the index itself.  So now instead of preparing a separate bitmap, all the workers will directly insert into the shared bitmap hash.  I agree that for getting each page from the bitmaphash we need to acquire the lock and this also might generate a lot of lock contention but we want to try the POC and check the performance.  In fact I have already implemented the POC and results aren't great.  But I am still experimenting with it to see whether the lock can be more granular than I have now.  I will share my finding soon along with the POC patch.--Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 23 Dec 2020 08:29:52 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel bitmap index scan" } ]
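A side note on the barrier-phase design discussed in the thread above: the pattern of letting every worker build a private partial bitmap, electing exactly one worker at a barrier to perform the merge, and only then letting anyone scan the merged result can be illustrated outside PostgreSQL. The sketch below is a minimal standalone C program, not PostgreSQL code: it uses POSIX threads instead of PostgreSQL's shared memory and Barrier API (pthread barriers are an optional POSIX feature, so this assumes a platform such as Linux/glibc that provides pthread_barrier_t), and NWORKERS, NPAGES and the per-worker page assignment are made-up illustration values. pthread_barrier_wait() returning PTHREAD_BARRIER_SERIAL_THREAD to a single thread stands in for BarrierArriveAndWait() electing the merging worker.

#define _POSIX_C_SOURCE 200112L   /* for pthread_barrier_t on glibc */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NWORKERS 4                 /* illustrative worker count */
#define NPAGES   256               /* pretend heap size, in pages */
#define NWORDS   (NPAGES / 64)     /* one bit per page */

static uint64_t partial[NWORKERS][NWORDS];  /* per-worker partial bitmaps */
static uint64_t shared_bitmap[NWORDS];      /* merged result */
static pthread_barrier_t build_done;        /* all partial bitmaps are ready */
static pthread_barrier_t merge_done;        /* merged bitmap is ready to scan */

static void *
worker(void *arg)
{
    int     id = (int) (intptr_t) arg;

    /* Phase 1: "index scan" -- set bits for the pages this worker found. */
    for (int page = id; page < NPAGES; page += NWORKERS)
        partial[id][page / 64] |= UINT64_C(1) << (page % 64);

    /* Phase 2: wait for everyone, and elect exactly one worker to merge. */
    if (pthread_barrier_wait(&build_done) == PTHREAD_BARRIER_SERIAL_THREAD)
    {
        for (int w = 0; w < NWORKERS; w++)
            for (int i = 0; i < NWORDS; i++)
                shared_bitmap[i] |= partial[w][i];
    }

    /* Phase 3: nobody reads the shared bitmap until the merge is done. */
    pthread_barrier_wait(&merge_done);

    /* "Scan": here we only count the merged pages, for illustration. */
    int     hits = 0;
    for (int i = 0; i < NWORDS; i++)
        for (uint64_t word = shared_bitmap[i]; word != 0; word &= word - 1)
            hits++;
    printf("worker %d sees %d pages in the merged bitmap\n", id, hits);
    return NULL;
}

int
main(void)
{
    pthread_t   threads[NWORKERS];

    pthread_barrier_init(&build_done, NULL, NWORKERS);
    pthread_barrier_init(&merge_done, NULL, NWORKERS);
    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&threads[i], NULL, worker, (void *) (intptr_t) i);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(threads[i], NULL);
    pthread_barrier_destroy(&build_done);
    pthread_barrier_destroy(&merge_done);
    return 0;
}

Compile with something like cc -pthread barrier_demo.c (barrier_demo.c being a hypothetical file name). In the real executor the scan phase would of course hand out chunks of the shared bitmap through the shared iterator rather than have every worker read the whole thing, and a late-attaching worker would detach instead of scanning, as in the switch example quoted in the thread above.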
[ { "msg_contents": "A database with a very large number of tables eligible for autovacuum can result in autovacuum workers “stuck” in a tight loop of table_recheck_autovac() constantly reporting nothing to do on the table. This is because a database with a very large number of tables means it takes a while to search the statistics hash to verify that the table still needs to be processed[1]. If a worker spends some time processing a table, when it’s done it can spend a significant amount of time rechecking each table that it identified at launch (I’ve seen a worker in this state for over an hour). A simple work-around in this scenario is to kill the worker; the launcher will quickly fire up a new worker on the same database, and that worker will build a new list of tables.\r\n\r\nThat’s not a complete solution though… if the database contains a large number of very small tables you can end up in a state where 1 or 2 workers is busy chugging through those small tables so quickly than any additional workers spend all their time in table_recheck_autovac(), because that takes long enough that the additional workers are never able to “leapfrog” the workers that are doing useful work.\r\n\r\nPoC patch attached.\r\n\r\n1: top hits from `perf top -p xxx` on an affected worker\r\nSamples: 72K of event 'cycles', Event count (approx.): 17131910436\r\nOverhead Shared Object Symbol\r\n 42.62% postgres [.] hash_search_with_hash_value\r\n 10.34% libc-2.17.so [.] __memcpy_sse2\r\n 6.99% [kernel] [k] copy_user_enhanced_fast_string\r\n 4.73% libc-2.17.so [.] _IO_fread\r\n 3.91% postgres [.] 0x00000000002d6478\r\n 2.95% libc-2.17.so [.] _IO_getc\r\n 2.44% libc-2.17.so [.] _IO_file_xsgetn\r\n 1.73% postgres [.] hash_search\r\n 1.65% [kernel] [k] find_get_entry\r\n 1.10% postgres [.] hash_uint32\r\n 0.99% libc-2.17.so [.] __memcpy_ssse3_back", "msg_date": "Sun, 26 Jul 2020 21:43:29 +0000", "msg_from": "\"Nasby, Jim\" <nasbyj@amazon.com>", "msg_from_op": true, "msg_subject": "autovac issue with large number of tables" }, { "msg_contents": "On Mon, 27 Jul 2020 at 06:43, Nasby, Jim <nasbyj@amazon.com> wrote:\n>\n> A database with a very large number of tables eligible for autovacuum can result in autovacuum workers “stuck” in a tight loop of table_recheck_autovac() constantly reporting nothing to do on the table. This is because a database with a very large number of tables means it takes a while to search the statistics hash to verify that the table still needs to be processed[1]. If a worker spends some time processing a table, when it’s done it can spend a significant amount of time rechecking each table that it identified at launch (I’ve seen a worker in this state for over an hour). A simple work-around in this scenario is to kill the worker; the launcher will quickly fire up a new worker on the same database, and that worker will build a new list of tables.\n>\n>\n>\n> That’s not a complete solution though… if the database contains a large number of very small tables you can end up in a state where 1 or 2 workers is busy chugging through those small tables so quickly than any additional workers spend all their time in table_recheck_autovac(), because that takes long enough that the additional workers are never able to “leapfrog” the workers that are doing useful work.\n>\n\nAs another solution, I've been considering adding a queue having table\nOIDs that need to vacuumed/analyzed on the shared memory (i.g. on\nDSA). 
Since all autovacuum workers running on the same database can\nsee a consistent queue, the issue explained above won't happen and\nprobably it makes the implementation of prioritization of tables being\nvacuumed easier which is sometimes discussed on pgsql-hackers. I guess\nit might be worth to discuss including this idea.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 27 Jul 2020 15:51:32 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "A database with a very large number of tables eligible for autovacuum can result in autovacuum workers “stuck” in a tight loop of table_recheck_autovac() constantly reporting nothing to do on the table. This is because a database with a very large number of tables means it takes a while to search the statistics hash to verify that the table still needs to be processed[1]. If a worker spends some time processing a table, when it’s done it can spend a significant amount of time rechecking each table that it identified at launch (I’ve seen a worker in this state for over an hour). A simple work-around in this scenario is to kill the worker; the launcher will quickly fire up a new worker on the same database, and that worker will build a new list of tables.\r\n\r\nThat’s not a complete solution though… if the database contains a large number of very small tables you can end up in a state where 1 or 2 workers is busy chugging through those small tables so quickly than any additional workers spend all their time in table_recheck_autovac(), because that takes long enough that the additional workers are never able to “leapfrog” the workers that are doing useful work.\r\n\r\nPoC patch attached.\r\n\r\n1: top hits from `perf top -p xxx` on an affected worker\r\nSamples: 72K of event 'cycles', Event count (approx.): 17131910436\r\nOverhead Shared Object Symbol\r\n 42.62% postgres [.] hash_search_with_hash_value\r\n 10.34% libc-2.17.so [.] __memcpy_sse2\r\n 6.99% [kernel] [k] copy_user_enhanced_fast_string\r\n 4.73% libc-2.17.so [.] _IO_fread\r\n 3.91% postgres [.] 0x00000000002d6478\r\n 2.95% libc-2.17.so [.] _IO_getc\r\n 2.44% libc-2.17.so [.] _IO_file_xsgetn\r\n 1.73% postgres [.] hash_search\r\n 1.65% [kernel] [k] find_get_entry\r\n 1.10% postgres [.] hash_uint32\r\n 0.99% libc-2.17.so [.] __memcpy_ssse3_back", "msg_date": "Mon, 27 Jul 2020 18:39:36 +0000", "msg_from": "\"Nasby, Jim\" <nasbyj@amazon.com>", "msg_from_op": true, "msg_subject": "FW: autovac issue with large number of tables" }, { "msg_contents": "Sorry, please ignore this duplicate!\n\nOn 7/27/20 1:39 PM, Nasby, Jim wrote:\n>\n> A database with a very large number of  tables eligible for autovacuum \n> can result in autovacuum workers “stuck” in a tight loop of \n> table_recheck_autovac() constantly reporting nothing to do on the \n> table. This is because a database with a very large number of tables \n> means it takes a while to search the statistics hash to verify that \n> the table still needs to be processed[1]. If a worker spends some time \n> processing a table, when it’s done it can spend a significant amount \n> of time rechecking each table that it identified at launch (I’ve seen \n> a worker in this state for over an hour). 
A simple work-around in this \n> scenario is to kill the worker; the launcher will quickly fire up a \n> new worker on the same database, and that worker will build a new list \n> of tables.\n>\n> That’s not a complete solution though… if the database contains a \n> large number of very small tables you can end up in a state where 1 or \n> 2 workers is busy chugging through those small tables so quickly than \n> any additional workers spend all their time in \n> table_recheck_autovac(), because that takes long enough that the \n> additional workers are never able to “leapfrog” the workers that are \n> doing useful work.\n>\n> PoC patch attached.\n>\n> 1: top hits from `perf top -p xxx` on an affected worker\n>\n> Samples: 72K of event 'cycles', Event count (approx.): 17131910436\n>\n> Overhead Shared Object     Symbol\n>\n>   42.62% postgres          [.] hash_search_with_hash_value\n>\n>   10.34% libc-2.17.so      [.] __memcpy_sse2\n>\n>    6.99% [kernel]          [k] copy_user_enhanced_fast_string\n>\n>    4.73% libc-2.17.so      [.] _IO_fread\n>\n>    3.91% postgres          [.] 0x00000000002d6478\n>\n>    2.95% libc-2.17.so      [.] _IO_getc\n>\n>    2.44% libc-2.17.so      [.] _IO_file_xsgetn\n>\n>    1.73% postgres          [.] hash_search\n>\n>    1.65% [kernel]          [k] find_get_entry\n>\n>    1.10% postgres          [.] hash_uint32\n>\n>    0.99% libc-2.17.so      [.] __memcpy_ssse3_back\n>\n\n\n\n\n\n\nSorry, please ignore this duplicate!\n\nOn 7/27/20 1:39 PM, Nasby, Jim wrote:\n\n\n\n\n\n\nA database\n with a very large number of  tables eligible for autovacuum\n can result in autovacuum workers “stuck” in a tight loop of\n table_recheck_autovac() constantly reporting nothing to do\n on the table. This is because a database with a very large\n number of tables means it takes a while to search the\n statistics hash to verify that the table still needs to be\n processed[1]. If a worker spends some time processing a\n table, when it’s done it can spend a significant amount of\n time rechecking each table that it identified at launch\n (I’ve seen a worker in this state for over an hour). A\n simple work-around in this scenario is to kill the worker;\n the launcher will quickly fire up a new worker on the same\n database, and that worker will build a new list of tables.\n \nThat’s not a\n complete solution though… if the database contains a large\n number of very small tables you can end up in a state where\n 1 or 2 workers is busy chugging through those small tables\n so quickly than any additional workers spend all their time\n in table_recheck_autovac(), because that takes long enough\n that the additional workers are never able to “leapfrog” the\n workers that are doing useful work.\n \nPoC patch\n attached.\n \n1: top hits\n from `perf top -p xxx` on an affected worker\nSamples: 72K\n of event 'cycles', Event count (approx.): 17131910436\nOverhead \n Shared Object     Symbol\n  42.62% \n postgres          [.] hash_search_with_hash_value\n  10.34% \n libc-2.17.so      [.] __memcpy_sse2\n   6.99% \n [kernel]          [k] copy_user_enhanced_fast_string\n   4.73% \n libc-2.17.so      [.] _IO_fread\n   3.91% \n postgres          [.] 0x00000000002d6478\n   2.95% \n libc-2.17.so      [.] _IO_getc\n   2.44% \n libc-2.17.so      [.] _IO_file_xsgetn\n   1.73% \n postgres          [.] hash_search\n   1.65% \n [kernel]          [k] find_get_entry\n   1.10% \n postgres          [.] hash_uint32\n   0.99% \n libc-2.17.so      [.] 
__memcpy_ssse3_back", "msg_date": "Mon, 27 Jul 2020 13:41:54 -0500", "msg_from": "Jim Nasby <nasbyj@amazon.com>", "msg_from_op": false, "msg_subject": "Re: [UNVERIFIED SENDER] FW: autovac issue with large number of tables" }, { "msg_contents": "On 7/27/20 1:51 AM, Masahiko Sawada wrote:\n\n> On Mon, 27 Jul 2020 at 06:43, Nasby, Jim <nasbyj@amazon.com> wrote:\n>> A database with a very large number of tables eligible for autovacuum can result in autovacuum workers “stuck” in a tight loop of table_recheck_autovac() constantly reporting nothing to do on the table. This is because a database with a very large number of tables means it takes a while to search the statistics hash to verify that the table still needs to be processed[1]. If a worker spends some time processing a table, when it’s done it can spend a significant amount of time rechecking each table that it identified at launch (I’ve seen a worker in this state for over an hour). A simple work-around in this scenario is to kill the worker; the launcher will quickly fire up a new worker on the same database, and that worker will build a new list of tables.\n>>\n>>\n>>\n>> That’s not a complete solution though… if the database contains a large number of very small tables you can end up in a state where 1 or 2 workers is busy chugging through those small tables so quickly than any additional workers spend all their time in table_recheck_autovac(), because that takes long enough that the additional workers are never able to “leapfrog” the workers that are doing useful work.\n>>\n> As another solution, I've been considering adding a queue having table\n> OIDs that need to vacuumed/analyzed on the shared memory (i.g. on\n> DSA). Since all autovacuum workers running on the same database can\n> see a consistent queue, the issue explained above won't happen and\n> probably it makes the implementation of prioritization of tables being\n> vacuumed easier which is sometimes discussed on pgsql-hackers. I guess\n> it might be worth to discuss including this idea.\nI'm in favor of trying to improve scheduling (especially allowing users \nto control how things are scheduled), but that's a far more invasive \npatch. I'd like to get something like this patch in without waiting on a \nsignificantly larger effort.\n\n\n", "msg_date": "Mon, 27 Jul 2020 13:49:46 -0500", "msg_from": "Jim Nasby <nasbyj@amazon.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "Hi,\n\nOn Tue, Jul 28, 2020 at 3:49 AM Jim Nasby <nasbyj@amazon.com> wrote:\n> I'm in favor of trying to improve scheduling (especially allowing users\n> to control how things are scheduled), but that's a far more invasive\n> patch. 
I'd like to get something like this patch in without waiting on a\n> significantly larger effort.\n\nBTW, Have you tried the patch suggested in the thread below?\n\nhttps://www.postgresql.org/message-id/20180629.173418.190173462.horiguchi.kyotaro%40lab.ntt.co.jp\n\nThe above is a suggestion to manage statistics on shared memory rather\nthan in a file, but I think this feature may mitigate your problem.\nI think that feature has yet another performance challenge, but it\nmight be worth a try.\nThe above patch will also require a great deal of effort to get into\nthe PostgreSQL-core, but I'm curious to see how well it works for this\nproblem.\n\nBest regards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n", "msg_date": "Fri, 31 Jul 2020 15:26:44 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "On 7/31/20 1:26 AM, Kasahara Tatsuhito wrote:\n\n> On Tue, Jul 28, 2020 at 3:49 AM Jim Nasby <nasbyj@amazon.com> wrote:\n>> I'm in favor of trying to improve scheduling (especially allowing users\n>> to control how things are scheduled), but that's a far more invasive\n>> patch. I'd like to get something like this patch in without waiting on a\n>> significantly larger effort.\n> BTW, Have you tried the patch suggested in the thread below?\n>\n> https://www.postgresql.org/message-id/20180629.173418.190173462.horiguchi.kyotaro%40lab.ntt.co.jp\n>\n> The above is a suggestion to manage statistics on shared memory rather\n> than in a file, but I think this feature may mitigate your problem.\n> I think that feature has yet another performance challenge, but it\n> might be worth a try.\n> The above patch will also require a great deal of effort to get into\n> the PostgreSQL-core, but I'm curious to see how well it works for this\n> problem.\n\nWithout reading the 100+ emails or the 260k patch, I'm guessing that it \nwon't help because the problem I observed was spending most of it's time in\n\n   42.62% postgres          [.] hash_search_with_hash_value\n\nI don't see how moving things to shared memory would help that at all.\n\nBTW, when it comes to getting away from using files to store stats, IMHO \nthe best first pass on that is to put hooks in place to allow an \nextension to replace/supplement different parts of the existing stats \ninfrastructure.\n\n\n\n\n\n\n\nOn 7/31/20 1:26 AM, Kasahara Tatsuhito wrote:\n \n\nOn Tue, Jul 28, 2020 at 3:49 AM Jim Nasby <nasbyj@amazon.com> wrote:\n\n\nI'm in favor of trying to improve scheduling (especially allowing users\nto control how things are scheduled), but that's a far more invasive\npatch. 
I'd like to get something like this patch in without waiting on a\nsignificantly larger effort.\n\n\n\nBTW, Have you tried the patch suggested in the thread below?\n\nhttps://www.postgresql.org/message-id/20180629.173418.190173462.horiguchi.kyotaro%40lab.ntt.co.jp\n\nThe above is a suggestion to manage statistics on shared memory rather\nthan in a file, but I think this feature may mitigate your problem.\nI think that feature has yet another performance challenge, but it\nmight be worth a try.\nThe above patch will also require a great deal of effort to get into\nthe PostgreSQL-core, but I'm curious to see how well it works for this\nproblem.\n\n\nWithout reading the 100+ emails or the 260k patch, I'm guessing\n that it won't help because the problem I observed was spending\n most of it's time in \n\n  42.62% \n postgres          [.] hash_search_with_hash_value\nI don't see how\n moving things to shared memory would help that at all.\nBTW, when it\n comes to getting away from using files to store stats, IMHO the\n best first pass on that is to put hooks in place to allow an\n extension to replace/supplement different parts of the existing\n stats infrastructure.", "msg_date": "Mon, 10 Aug 2020 15:41:42 -0500", "msg_from": "Jim Nasby <nasbyj@amazon.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "Jim Nasby <nasbyj@amazon.com> writes:\n> Without reading the 100+ emails or the 260k patch, I'm guessing that it \n> won't help because the problem I observed was spending most of it's time in\n>   42.62% postgres          [.] hash_search_with_hash_value\n> I don't see how moving things to shared memory would help that at all.\n\nSo I'm a bit mystified as to why that would show up as the primary cost.\nIt looks to me like we force a re-read of the pgstats data each time\nthrough table_recheck_autovac(), and it seems like the costs associated\nwith that would swamp everything else in the case you're worried about.\n\nI suspect that the bulk of the hash_search_with_hash_value costs are\nHASH_ENTER calls caused by repopulating the pgstats hash table, rather\nthan the single read probe that table_recheck_autovac itself will do.\nIt's still surprising that that would dominate the other costs of reading\nthe data, but maybe those costs just aren't as well localized in the code.\n\nSo I think Kasahara-san's point is that the shared memory stats collector\nmight wipe out those costs, depending on how it's implemented. (I've not\nlooked at that patch in a long time either, so I don't know how much it'd\ncut the reader-side costs. But maybe it'd be substantial.)\n\nIn the meantime, though, do we want to do something else to alleviate\nthe issue? I realize you only described your patch as a PoC, but I\ncan't say I like it much:\n\n* Giving up after we've wasted 1000 pgstats re-reads seems like locking\nthe barn door only after the horse is well across the state line.\n\n* I'm not convinced that the business with skipping N entries at a time\nbuys anything. You'd have to make pretty strong assumptions about the\nworkers all processing tables at about the same rate to believe it will\nhelp. In the worst case, it might lead to all the workers ignoring the\nsame table(s).\n\nI think the real issue here is autovac_refresh_stats's insistence that it\nshouldn't throttle pgstats re-reads in workers. 
I see the point about not\nwanting to repeat vacuum work on the basis of stale data, but still ...\nI wonder if we could have table_recheck_autovac do two probes of the stats\ndata. First probe the existing stats data, and if it shows the table to\nbe already vacuumed, return immediately. If not, *then* force a stats\nre-read, and check a second time.\n\nBTW, can you provide a test script that reproduces the problem you're\nlooking at? The rest of us are kind of guessing at what's happening.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 Aug 2020 13:46:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "Hi,\n\nOn Wed, Aug 12, 2020 at 2:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So I think Kasahara-san's point is that the shared memory stats collector\n> might wipe out those costs, depending on how it's implemented. (I've not\n> looked at that patch in a long time either, so I don't know how much it'd\n> cut the reader-side costs. But maybe it'd be substantial.)\nThanks for your clarification, that's what I wanted to say.\nSorry for the lack of explanation.\n\n> I think the real issue here is autovac_refresh_stats's insistence that it\n> shouldn't throttle pgstats re-reads in workers.\nI agree that.\n\n> I wonder if we could have table_recheck_autovac do two probes of the stats\n> data. First probe the existing stats data, and if it shows the table to\n> be already vacuumed, return immediately. If not, *then* force a stats\n> re-read, and check a second time.\nDoes the above mean that the second and subsequent table_recheck_autovac()\nwill be improved to first check using the previous refreshed statistics?\nI think that certainly works.\n\nIf that's correct, I'll try to create a patch for the PoC.\n\nBest regards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n", "msg_date": "Wed, 2 Sep 2020 02:10:22 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "Hi,\n\nOn Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n<kasahara.tatsuhito@gmail.com> wrote:\n> > I wonder if we could have table_recheck_autovac do two probes of the stats\n> > data. First probe the existing stats data, and if it shows the table to\n> > be already vacuumed, return immediately. 
If not, *then* force a stats\n> > re-read, and check a second time.\n> Does the above mean that the second and subsequent table_recheck_autovac()\n> will be improved to first check using the previous refreshed statistics?\n> I think that certainly works.\n>\n> If that's correct, I'll try to create a patch for the PoC\n\nI still don't know how to reproduce Jim's troubles, but I was able to reproduce\nwhat was probably a very similar problem.\n\nThis problem seems to be more likely to occur in cases where you have\na large number of tables,\ni.e., a large amount of stats, and many small tables need VACUUM at\nthe same time.\n\nSo I followed Tom's advice and created a patch for the PoC.\nThis patch will enable a flag in the table_recheck_autovac function to use\nthe existing stats next time if VACUUM (or ANALYZE) has already been done\nby another worker on the check after the stats have been updated.\nIf the tables continue to require VACUUM after the refresh, then a refresh\nwill be required instead of using the existing statistics.\n\nI did simple test with HEAD and HEAD + this PoC patch.\nThe tests were conducted in two cases.\n(I changed few configurations. see attached scripts)\n\n1. Normal VACUUM case\n - SET autovacuum = off\n - CREATE tables with 100 rows\n - DELETE 90 rows for each tables\n - SET autovacuum = on and restart PostgreSQL\n - Measure the time it takes for all tables to be VACUUMed\n\n2. Anti wrap round VACUUM case\n - CREATE brank tables\n - SELECT all of these tables (for generate stats)\n - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n - Consumes a lot of XIDs by using txid_curent()\n - Measure the time it takes for all tables to be VACUUMed\n\nFor each test case, the following results were obtained by changing\nautovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\nAlso changing num of tables to 1000, 5000, 10000 and 20000.\n\nDue to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\nbut I think it's enough to ask for a trend.\n\n===========================================================================\n[1.Normal VACUUM case]\n tables:1000\n autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n\n tables:5000\n autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n\n tables:10000\n autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n\n tables:20000\n autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n\n[2.Anti wrap round VACUUM 
case]\n tables:1000\n autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n\n tables:5000\n autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n\n tables:10000\n autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n\n tables:20000\n autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n===========================================================================\n\nThe cases without patch, the scalability of the worker has decreased\nas the number of tables has increased.\nIn fact, the more workers there are, the longer it takes to complete\nVACUUM to all tables.\nThe cases with patch, it shows good scalability with respect to the\nnumber of workers.\n\nNote that perf top results showed that hash_search_with_hash_value,\nhash_seq_search and\npgstat_read_statsfiles are dominant during VACUUM in all patterns,\nwith or without the patch.\n\nTherefore, there is still a need to find ways to optimize the reading\nof large amounts of stats.\nHowever, this patch is effective in its own right, and since there are\nonly a few parts to modify,\nI think it should be able to be applied to current (preferably\npre-v13) PostgreSQL.\n\nThe patch and reproduce scripts were attached.\n\nThoughts ?\n\nBest regards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com", "msg_date": "Fri, 4 Sep 2020 19:50:52 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "Therefore, we expect this patch [1] to be committed for its original\npurpose, as well as to improve autovacuum from v14 onwards.Hi,\n\nOn Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n<kasahara.tatsuhito@gmail.com> wrote:\n>\n> Hi,\n>\n> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> <kasahara.tatsuhito@gmail.com> wrote:\n> > > I wonder if we could have table_recheck_autovac do two probes of the stats\n> > > data. First probe the existing stats data, and if it shows the table to\n> > > be already vacuumed, return immediately. 
If not, *then* force a stats\n> > > re-read, and check a second time.\n> > Does the above mean that the second and subsequent table_recheck_autovac()\n> > will be improved to first check using the previous refreshed statistics?\n> > I think that certainly works.\n> >\n> > If that's correct, I'll try to create a patch for the PoC\n>\n> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> what was probably a very similar problem.\n>\n> This problem seems to be more likely to occur in cases where you have\n> a large number of tables,\n> i.e., a large amount of stats, and many small tables need VACUUM at\n> the same time.\n>\n> So I followed Tom's advice and created a patch for the PoC.\n> This patch will enable a flag in the table_recheck_autovac function to use\n> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> by another worker on the check after the stats have been updated.\n> If the tables continue to require VACUUM after the refresh, then a refresh\n> will be required instead of using the existing statistics.\n>\n> I did simple test with HEAD and HEAD + this PoC patch.\n> The tests were conducted in two cases.\n> (I changed few configurations. see attached scripts)\n>\n> 1. Normal VACUUM case\n> - SET autovacuum = off\n> - CREATE tables with 100 rows\n> - DELETE 90 rows for each tables\n> - SET autovacuum = on and restart PostgreSQL\n> - Measure the time it takes for all tables to be VACUUMed\n>\n> 2. Anti wrap round VACUUM case\n> - CREATE brank tables\n> - SELECT all of these tables (for generate stats)\n> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> - Consumes a lot of XIDs by using txid_curent()\n> - Measure the time it takes for all tables to be VACUUMed\n>\n> For each test case, the following results were obtained by changing\n> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> Also changing num of tables to 1000, 5000, 10000 and 20000.\n>\n> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> but I think it's enough to ask for a trend.\n>\n> ===========================================================================\n> [1.Normal VACUUM case]\n> tables:1000\n> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n>\n> tables:5000\n> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n>\n> tables:10000\n> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n>\n> tables:20000\n> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 
sec\n> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n>\n> [2.Anti wrap round VACUUM case]\n> tables:1000\n> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n>\n> tables:5000\n> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n>\n> tables:10000\n> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n>\n> tables:20000\n> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> ===========================================================================\n>\n> The cases without patch, the scalability of the worker has decreased\n> as the number of tables has increased.\n> In fact, the more workers there are, the longer it takes to complete\n> VACUUM to all tables.\n> The cases with patch, it shows good scalability with respect to the\n> number of workers.\n>\n> Note that perf top results showed that hash_search_with_hash_value,\n> hash_seq_search and\n> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> with or without the patch.\n>\n> Therefore, there is still a need to find ways to optimize the reading\n> of large amounts of stats.\n> However, this patch is effective in its own right, and since there are\n> only a few parts to modify,\n> I think it should be able to be applied to current (preferably\n> pre-v13) PostgreSQL.\n>\n> The patch and reproduce scripts were attached.\n>\n> Thoughts ?\n\nHi.\n\nI ran the same test with a patch[1] that manages the statistics on\nshared memory.\nThis patch is expected to reduce the burden of refreshing large\namounts of stats.\n\nAnd the following results were obtained.\n(The results for HEAD are the same as in my last post.)\n\n========================================================================================\n[1.Normal VACUUM case]\n tables:1000\n autovacuum_max_workers 1: (HEAD) 20 sec VS (with shared_base_stast\npatch) 8 sec\n autovacuum_max_workers 2: (HEAD) 18 sec VS (with shared_base_stast\npatch) 8 sec\n autovacuum_max_workers 3: (HEAD) 18 sec VS (with shared_base_stast\npatch) 8 sec\n autovacuum_max_workers 5: (HEAD) 19 sec VS (with shared_base_stast\npatch) 9 sec\n autovacuum_max_workers 10: (HEAD) 19 sec VS (with shared_base_stast\npatch) 9 sec\n\n tables:5000\n autovacuum_max_workers 1: (HEAD) 77 sec VS (with shared_base_stast\npatch) 13 sec\n autovacuum_max_workers 2: (HEAD) 61 sec VS (with shared_base_stast\npatch) 12 sec\n autovacuum_max_workers 3: (HEAD) 38 sec VS (with 
shared_base_stast\npatch) 13 sec\n autovacuum_max_workers 5: (HEAD) 45 sec VS (with shared_base_stast\npatch) 12 sec\n autovacuum_max_workers 10: (HEAD) 43 sec VS (with shared_base_stast\npatch) 12 sec\n\n tables:10000\n autovacuum_max_workers 1: (HEAD) 152 sec VS (with\nshared_base_stast patch) 18 sec\n autovacuum_max_workers 2: (HEAD) 119 sec VS (with\nshared_base_stast patch) 25 sec\n autovacuum_max_workers 3: (HEAD) 87 sec VS (with\nshared_base_stast patch) 28 sec\n autovacuum_max_workers 5: (HEAD) 100 sec VS (with\nshared_base_stast patch) 28 sec\n autovacuum_max_workers 10: (HEAD) 97 sec VS (with\nshared_base_stast patch) 29 sec\n\n tables:20000\n autovacuum_max_workers 1: (HEAD) 338 sec VS (with\nshared_base_stast patch) 27 sec\n autovacuum_max_workers 2: (HEAD) 231 sec VS (with\nshared_base_stast patch) 54 sec\n autovacuum_max_workers 3: (HEAD) 220 sec VS (with\nshared_base_stast patch) 67 sec\n autovacuum_max_workers 5: (HEAD) 234 sec VS (with\nshared_base_stast patch) 75 sec\n autovacuum_max_workers 10: (HEAD) 320 sec VS (with\nshared_base_stast patch) 83 sec\n\n[2.Anti wrap round VACUUM case]\n tables:1000\n autovacuum_max_workers 1: (HEAD) 19 sec VS (with shared_base_stats\npatch) 6 sec\n autovacuum_max_workers 2: (HEAD) 14 sec VS (with shared_base_stats\npatch) 7 sec\n autovacuum_max_workers 3: (HEAD) 14 sec VS (with shared_base_stats\npatch) 6 sec\n autovacuum_max_workers 5: (HEAD) 14 sec VS (with shared_base_stats\npatch) 6 sec\n autovacuum_max_workers 10: (HEAD) 16 sec VS (with shared_base_stats\npatch) 7 sec\n\n tables:5000\n autovacuum_max_workers 1: (HEAD) 69 sec VS (with shared_base_stats\npatch) 8 sec\n autovacuum_max_workers 2: (HEAD) 66 sec VS (with shared_base_stats\npatch) 8 sec\n autovacuum_max_workers 3: (HEAD) 59 sec VS (with shared_base_stats\npatch) 8 sec\n autovacuum_max_workers 5: (HEAD) 39 sec VS (with shared_base_stats\npatch) 9 sec\n autovacuum_max_workers 10: (HEAD) 39 sec VS (with shared_base_stats\npatch) 8 sec\n\n tables:10000\n autovacuum_max_workers 1: (HEAD) 139 sec VS (with\nshared_base_stats patch) 9 sec\n autovacuum_max_workers 2: (HEAD) 130 sec VS (with\nshared_base_stats patch) 9 sec\n autovacuum_max_workers 3: (HEAD) 120 sec VS (with\nshared_base_stats patch) 9 sec\n autovacuum_max_workers 5: (HEAD) 96 sec VS (with\nshared_base_stats patch) 8 sec\n autovacuum_max_workers 10: (HEAD) 90 sec VS (with\nshared_base_stats patch) 9 sec\n\n tables:20000\n autovacuum_max_workers 1: (HEAD) 313 sec VS (with\nshared_base_stats patch) 12 sec\n autovacuum_max_workers 2: (HEAD) 209 sec VS (with\nshared_base_stats patch) 12 sec\n autovacuum_max_workers 3: (HEAD) 227 sec VS (with\nshared_base_stats patch) 12 sec\n autovacuum_max_workers 5: (HEAD) 236 sec VS (with\nshared_base_stats patch) 11 sec\n autovacuum_max_workers 10: (HEAD) 309 sec VS (with\nshared_base_stats patch) 12 sec\n========================================================================================\n\nThis patch provided a very nice speedup in both cases.\nHowever, in case 1, when the number of tables is large, there is an\nincrease in the time required\nas the number of workers increases.\nWhether this is due to CPU and IO conflicts or patch characteristics\nis not yet known.\nNevertheless, at least the problems associated with\ntable_recheck_autovac() appear to have been resolved.\n\nSo, I hope that this patch [1] to be committed for its original purpose,\nas well as to improve autovacuum of v14 and later.\n\nThe other patch I submitted (v1_mod_table_recheck_autovac.patch) is\nuseful for 
slight\nimproving autovacuum of PostgreSQL 13 and before.\nIs it worth backporting this patch to current PostgreSQL 13 and earlier?\n\nBest regards,\n\n[1] https://www.postgresql.org/message-id/20200908.175557.617150409868541587.horikyota.ntt%40gmail.com\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n", "msg_date": "Thu, 10 Sep 2020 18:29:09 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n<kasahara.tatsuhito@gmail.com> wrote:\n>\n> Hi,\n>\n> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> <kasahara.tatsuhito@gmail.com> wrote:\n> > > I wonder if we could have table_recheck_autovac do two probes of the stats\n> > > data. First probe the existing stats data, and if it shows the table to\n> > > be already vacuumed, return immediately. If not, *then* force a stats\n> > > re-read, and check a second time.\n> > Does the above mean that the second and subsequent table_recheck_autovac()\n> > will be improved to first check using the previous refreshed statistics?\n> > I think that certainly works.\n> >\n> > If that's correct, I'll try to create a patch for the PoC\n>\n> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> what was probably a very similar problem.\n>\n> This problem seems to be more likely to occur in cases where you have\n> a large number of tables,\n> i.e., a large amount of stats, and many small tables need VACUUM at\n> the same time.\n>\n> So I followed Tom's advice and created a patch for the PoC.\n> This patch will enable a flag in the table_recheck_autovac function to use\n> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> by another worker on the check after the stats have been updated.\n> If the tables continue to require VACUUM after the refresh, then a refresh\n> will be required instead of using the existing statistics.\n>\n> I did simple test with HEAD and HEAD + this PoC patch.\n> The tests were conducted in two cases.\n> (I changed few configurations. see attached scripts)\n>\n> 1. Normal VACUUM case\n> - SET autovacuum = off\n> - CREATE tables with 100 rows\n> - DELETE 90 rows for each tables\n> - SET autovacuum = on and restart PostgreSQL\n> - Measure the time it takes for all tables to be VACUUMed\n>\n> 2. 
Anti wrap round VACUUM case\n> - CREATE brank tables\n> - SELECT all of these tables (for generate stats)\n> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> - Consumes a lot of XIDs by using txid_curent()\n> - Measure the time it takes for all tables to be VACUUMed\n>\n> For each test case, the following results were obtained by changing\n> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> Also changing num of tables to 1000, 5000, 10000 and 20000.\n>\n> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> but I think it's enough to ask for a trend.\n>\n> ===========================================================================\n> [1.Normal VACUUM case]\n> tables:1000\n> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n>\n> tables:5000\n> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n>\n> tables:10000\n> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n>\n> tables:20000\n> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n>\n> [2.Anti wrap round VACUUM case]\n> tables:1000\n> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n>\n> tables:5000\n> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n>\n> tables:10000\n> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n>\n> tables:20000\n> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> autovacuum_max_workers 10: (HEAD) 
309 sec VS (with patch) 74 sec\n> ===========================================================================\n>\n> The cases without patch, the scalability of the worker has decreased\n> as the number of tables has increased.\n> In fact, the more workers there are, the longer it takes to complete\n> VACUUM to all tables.\n> The cases with patch, it shows good scalability with respect to the\n> number of workers.\n\nIt seems a good performance improvement even without the patch of\nshared memory based stats collector.\n\n>\n> Note that perf top results showed that hash_search_with_hash_value,\n> hash_seq_search and\n> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> with or without the patch.\n>\n> Therefore, there is still a need to find ways to optimize the reading\n> of large amounts of stats.\n> However, this patch is effective in its own right, and since there are\n> only a few parts to modify,\n> I think it should be able to be applied to current (preferably\n> pre-v13) PostgreSQL.\n\n+1\n\n+\n+ /* We might be better to refresh stats */\n+ use_existing_stats = false;\n }\n+ else\n+ {\n\n- heap_freetuple(classTup);\n+ heap_freetuple(classTup);\n+ /* The relid has already vacuumed, so we might be better to\nuse exiting stats */\n+ use_existing_stats = true;\n+ }\n\nWith that patch, the autovacuum process refreshes the stats in the\nnext check if it finds out that the table still needs to be vacuumed.\nBut I guess it's not necessarily true because the next table might be\nvacuumed already. So I think we might want to always use the existing\nfor the first check. What do you think?\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 25 Nov 2020 14:17:01 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "Hi,\n\nOn Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> <kasahara.tatsuhito@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> > <kasahara.tatsuhito@gmail.com> wrote:\n> > > > I wonder if we could have table_recheck_autovac do two probes of the stats\n> > > > data. First probe the existing stats data, and if it shows the table to\n> > > > be already vacuumed, return immediately. 
If not, *then* force a stats\n> > > > re-read, and check a second time.\n> > > Does the above mean that the second and subsequent table_recheck_autovac()\n> > > will be improved to first check using the previous refreshed statistics?\n> > > I think that certainly works.\n> > >\n> > > If that's correct, I'll try to create a patch for the PoC\n> >\n> > I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> > what was probably a very similar problem.\n> >\n> > This problem seems to be more likely to occur in cases where you have\n> > a large number of tables,\n> > i.e., a large amount of stats, and many small tables need VACUUM at\n> > the same time.\n> >\n> > So I followed Tom's advice and created a patch for the PoC.\n> > This patch will enable a flag in the table_recheck_autovac function to use\n> > the existing stats next time if VACUUM (or ANALYZE) has already been done\n> > by another worker on the check after the stats have been updated.\n> > If the tables continue to require VACUUM after the refresh, then a refresh\n> > will be required instead of using the existing statistics.\n> >\n> > I did simple test with HEAD and HEAD + this PoC patch.\n> > The tests were conducted in two cases.\n> > (I changed few configurations. see attached scripts)\n> >\n> > 1. Normal VACUUM case\n> > - SET autovacuum = off\n> > - CREATE tables with 100 rows\n> > - DELETE 90 rows for each tables\n> > - SET autovacuum = on and restart PostgreSQL\n> > - Measure the time it takes for all tables to be VACUUMed\n> >\n> > 2. Anti wrap round VACUUM case\n> > - CREATE brank tables\n> > - SELECT all of these tables (for generate stats)\n> > - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> > - Consumes a lot of XIDs by using txid_curent()\n> > - Measure the time it takes for all tables to be VACUUMed\n> >\n> > For each test case, the following results were obtained by changing\n> > autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> > Also changing num of tables to 1000, 5000, 10000 and 20000.\n> >\n> > Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> > but I think it's enough to ask for a trend.\n> >\n> > ===========================================================================\n> > [1.Normal VACUUM case]\n> > tables:1000\n> > autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> > autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> > autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> > autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> > autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> >\n> > tables:5000\n> > autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> > autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> > autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> > autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> > autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> >\n> > tables:10000\n> > autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> > autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> > autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> > autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> > autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> >\n> > tables:20000\n> > autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> > autovacuum_max_workers 2: (HEAD) 231 sec VS (with 
patch) 229 sec\n> > autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> > autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> > autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> >\n> > [2.Anti wrap round VACUUM case]\n> > tables:1000\n> > autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> > autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> > autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> > autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> > autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> >\n> > tables:5000\n> > autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> > autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> > autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> > autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> > autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> >\n> > tables:10000\n> > autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> > autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> > autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> > autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> > autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> >\n> > tables:20000\n> > autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> > autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> > autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> > autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> > autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> > ===========================================================================\n> >\n> > The cases without patch, the scalability of the worker has decreased\n> > as the number of tables has increased.\n> > In fact, the more workers there are, the longer it takes to complete\n> > VACUUM to all tables.\n> > The cases with patch, it shows good scalability with respect to the\n> > number of workers.\n>\n> It seems a good performance improvement even without the patch of\n> shared memory based stats collector.\n>\n> >\n> > Note that perf top results showed that hash_search_with_hash_value,\n> > hash_seq_search and\n> > pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> > with or without the patch.\n> >\n> > Therefore, there is still a need to find ways to optimize the reading\n> > of large amounts of stats.\n> > However, this patch is effective in its own right, and since there are\n> > only a few parts to modify,\n> > I think it should be able to be applied to current (preferably\n> > pre-v13) PostgreSQL.\n>\n> +1\n>\n> +\n> + /* We might be better to refresh stats */\n> + use_existing_stats = false;\n> }\n> + else\n> + {\n>\n> - heap_freetuple(classTup);\n> + heap_freetuple(classTup);\n> + /* The relid has already vacuumed, so we might be better to\n> use exiting stats */\n> + use_existing_stats = true;\n> + }\n>\n> With that patch, the autovacuum process refreshes the stats in the\n> next check if it finds out that the table still needs to be vacuumed.\n> But I guess it's not necessarily true because the next table might be\n> vacuumed already. So I think we might want to always use the existing\n> for the first check. 
What do you think?\nThanks for your comment.\n\nIf we assume the case where some workers vacuum large tables\nand a single worker vacuums small tables, the processing\nperformance of the single worker will be slightly lower if the\nexisting statistics are checked every time.\n\nIn fact, at first I tried to check the existing stats every time,\nbut the performance was slightly worse in cases with a small number of workers.\n(Checking the existing stats is lightweight, but at high frequency,\nit affects processing performance.)\nTherefore, the worker determines whether autovacuum should use the\nexisting statistics only after it has refreshed the statistics.\n\nBTW, I found some typos in comments, so I've attached a fixed version.\n\nBest regards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com", "msg_date": "Wed, 25 Nov 2020 16:18:19 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n<kasahara.tatsuhito@gmail.com> wrote:\n>\n> Hi,\n>\n> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> > <kasahara.tatsuhito@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> > > <kasahara.tatsuhito@gmail.com> wrote:\n> > > > > I wonder if we could have table_recheck_autovac do two probes of the stats\n> > > > > data. First probe the existing stats data, and if it shows the table to\n> > > > > be already vacuumed, return immediately. If not, *then* force a stats\n> > > > > re-read, and check a second time.\n> > > > Does the above mean that the second and subsequent table_recheck_autovac()\n> > > > will be improved to first check using the previous refreshed statistics?\n> > > > I think that certainly works.\n> > > >\n> > > > If that's correct, I'll try to create a patch for the PoC\n> > >\n> > > I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> > > what was probably a very similar problem.\n> > >\n> > > This problem seems to be more likely to occur in cases where you have\n> > > a large number of tables,\n> > > i.e., a large amount of stats, and many small tables need VACUUM at\n> > > the same time.\n> > >\n> > > So I followed Tom's advice and created a patch for the PoC.\n> > > This patch will enable a flag in the table_recheck_autovac function to use\n> > > the existing stats next time if VACUUM (or ANALYZE) has already been done\n> > > by another worker on the check after the stats have been updated.\n> > > If the tables continue to require VACUUM after the refresh, then a refresh\n> > > will be required instead of using the existing statistics.\n> > >\n> > > I did simple test with HEAD and HEAD + this PoC patch.\n> > > The tests were conducted in two cases.\n> > > (I changed few configurations. see attached scripts)\n> > >\n> > > 1. Normal VACUUM case\n> > > - SET autovacuum = off\n> > > - CREATE tables with 100 rows\n> > > - DELETE 90 rows for each tables\n> > > - SET autovacuum = on and restart PostgreSQL\n> > > - Measure the time it takes for all tables to be VACUUMed\n> > >\n> > > 2. 
Anti wrap round VACUUM case\n> > > - CREATE brank tables\n> > > - SELECT all of these tables (for generate stats)\n> > > - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> > > - Consumes a lot of XIDs by using txid_curent()\n> > > - Measure the time it takes for all tables to be VACUUMed\n> > >\n> > > For each test case, the following results were obtained by changing\n> > > autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> > > Also changing num of tables to 1000, 5000, 10000 and 20000.\n> > >\n> > > Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> > > but I think it's enough to ask for a trend.\n> > >\n> > > ===========================================================================\n> > > [1.Normal VACUUM case]\n> > > tables:1000\n> > > autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> > > autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> > > autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> > > autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> > > autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> > >\n> > > tables:5000\n> > > autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> > > autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> > > autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> > > autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> > > autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> > >\n> > > tables:10000\n> > > autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> > > autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> > > autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> > > autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> > > autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> > >\n> > > tables:20000\n> > > autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> > > autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> > > autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> > > autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> > > autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> > >\n> > > [2.Anti wrap round VACUUM case]\n> > > tables:1000\n> > > autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> > > autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> > > autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> > > autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> > > autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> > >\n> > > tables:5000\n> > > autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> > > autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> > > autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> > > autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> > > autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> > >\n> > > tables:10000\n> > > autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> > > autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> > > autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> > > autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> > > autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> > >\n> > > tables:20000\n> > > autovacuum_max_workers 1: (HEAD) 313 
sec VS (with patch) 331 sec\n> > > autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> > > autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> > > autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> > > autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> > > ===========================================================================\n> > >\n> > > The cases without patch, the scalability of the worker has decreased\n> > > as the number of tables has increased.\n> > > In fact, the more workers there are, the longer it takes to complete\n> > > VACUUM to all tables.\n> > > The cases with patch, it shows good scalability with respect to the\n> > > number of workers.\n> >\n> > It seems a good performance improvement even without the patch of\n> > shared memory based stats collector.\n> >\n> > >\n> > > Note that perf top results showed that hash_search_with_hash_value,\n> > > hash_seq_search and\n> > > pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> > > with or without the patch.\n> > >\n> > > Therefore, there is still a need to find ways to optimize the reading\n> > > of large amounts of stats.\n> > > However, this patch is effective in its own right, and since there are\n> > > only a few parts to modify,\n> > > I think it should be able to be applied to current (preferably\n> > > pre-v13) PostgreSQL.\n> >\n> > +1\n> >\n> > +\n> > + /* We might be better to refresh stats */\n> > + use_existing_stats = false;\n> > }\n> > + else\n> > + {\n> >\n> > - heap_freetuple(classTup);\n> > + heap_freetuple(classTup);\n> > + /* The relid has already vacuumed, so we might be better to\n> > use exiting stats */\n> > + use_existing_stats = true;\n> > + }\n> >\n> > With that patch, the autovacuum process refreshes the stats in the\n> > next check if it finds out that the table still needs to be vacuumed.\n> > But I guess it's not necessarily true because the next table might be\n> > vacuumed already. So I think we might want to always use the existing\n> > for the first check. What do you think?\n> Thanks for your comment.\n>\n> If we assume the case where some workers vacuum on large tables\n> and a single worker vacuum on small tables, the processing\n> performance of the single worker will be slightly lower if the\n> existing statistics are checked every time.\n>\n> In fact, at first I tried to check the existing stats every time,\n> but the performance was slightly worse in cases with a small number of workers.\n> (Checking the existing stats is lightweight , but at high frequency,\n> it affects processing performance.)\n> Therefore, at after refresh statistics, determine whether autovac\n> should use the existing statistics.\n\nYeah, since the test you used uses a lot of small tables, if there are\na few workers, checking the existing stats is unlikely to return true\n(no need to vacuum). So the cost of existing stats check ends up being\noverhead. Not sure how slow always checking the existing stats was,\nbut given that the shared memory based stats collector patch could\nimprove the performance of refreshing stats, it might be better not to\ncheck the existing stats frequently like the patch does. Anyway, I\nthink it’s better to evaluate the performance improvement with other\ncases too.\n\n>\n> BTW, I found some typos in comments, so attache a fixed version.\n\nThank you for updating the patch! 
I'll also run the performance test\nyou shared with the latest version patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 25 Nov 2020 20:46:20 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n> <kasahara.tatsuhito@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> > > <kasahara.tatsuhito@gmail.com> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> > > > <kasahara.tatsuhito@gmail.com> wrote:\n> > > > > > I wonder if we could have table_recheck_autovac do two probes of the stats\n> > > > > > data. First probe the existing stats data, and if it shows the table to\n> > > > > > be already vacuumed, return immediately. If not, *then* force a stats\n> > > > > > re-read, and check a second time.\n> > > > > Does the above mean that the second and subsequent table_recheck_autovac()\n> > > > > will be improved to first check using the previous refreshed statistics?\n> > > > > I think that certainly works.\n> > > > >\n> > > > > If that's correct, I'll try to create a patch for the PoC\n> > > >\n> > > > I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> > > > what was probably a very similar problem.\n> > > >\n> > > > This problem seems to be more likely to occur in cases where you have\n> > > > a large number of tables,\n> > > > i.e., a large amount of stats, and many small tables need VACUUM at\n> > > > the same time.\n> > > >\n> > > > So I followed Tom's advice and created a patch for the PoC.\n> > > > This patch will enable a flag in the table_recheck_autovac function to use\n> > > > the existing stats next time if VACUUM (or ANALYZE) has already been done\n> > > > by another worker on the check after the stats have been updated.\n> > > > If the tables continue to require VACUUM after the refresh, then a refresh\n> > > > will be required instead of using the existing statistics.\n> > > >\n> > > > I did simple test with HEAD and HEAD + this PoC patch.\n> > > > The tests were conducted in two cases.\n> > > > (I changed few configurations. see attached scripts)\n> > > >\n> > > > 1. Normal VACUUM case\n> > > > - SET autovacuum = off\n> > > > - CREATE tables with 100 rows\n> > > > - DELETE 90 rows for each tables\n> > > > - SET autovacuum = on and restart PostgreSQL\n> > > > - Measure the time it takes for all tables to be VACUUMed\n> > > >\n> > > > 2. 
Anti wrap round VACUUM case\n> > > > - CREATE brank tables\n> > > > - SELECT all of these tables (for generate stats)\n> > > > - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> > > > - Consumes a lot of XIDs by using txid_curent()\n> > > > - Measure the time it takes for all tables to be VACUUMed\n> > > >\n> > > > For each test case, the following results were obtained by changing\n> > > > autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> > > > Also changing num of tables to 1000, 5000, 10000 and 20000.\n> > > >\n> > > > Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> > > > but I think it's enough to ask for a trend.\n> > > >\n> > > > ===========================================================================\n> > > > [1.Normal VACUUM case]\n> > > > tables:1000\n> > > > autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> > > > autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> > > > autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> > > > autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> > > > autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> > > >\n> > > > tables:5000\n> > > > autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> > > > autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> > > > autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> > > > autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> > > > autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> > > >\n> > > > tables:10000\n> > > > autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> > > > autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> > > > autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> > > > autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> > > > autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> > > >\n> > > > tables:20000\n> > > > autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> > > > autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> > > > autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> > > > autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> > > > autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> > > >\n> > > > [2.Anti wrap round VACUUM case]\n> > > > tables:1000\n> > > > autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> > > > autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> > > > autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> > > > autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> > > > autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> > > >\n> > > > tables:5000\n> > > > autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> > > > autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> > > > autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> > > > autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> > > > autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> > > >\n> > > > tables:10000\n> > > > autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> > > > autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> > > > autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> > > > autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> > > > 
autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> > > >\n> > > > tables:20000\n> > > > autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> > > > autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> > > > autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> > > > autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> > > > autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> > > > ===========================================================================\n> > > >\n> > > > The cases without patch, the scalability of the worker has decreased\n> > > > as the number of tables has increased.\n> > > > In fact, the more workers there are, the longer it takes to complete\n> > > > VACUUM to all tables.\n> > > > The cases with patch, it shows good scalability with respect to the\n> > > > number of workers.\n> > >\n> > > It seems a good performance improvement even without the patch of\n> > > shared memory based stats collector.\n> > >\n> > > >\n> > > > Note that perf top results showed that hash_search_with_hash_value,\n> > > > hash_seq_search and\n> > > > pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> > > > with or without the patch.\n> > > >\n> > > > Therefore, there is still a need to find ways to optimize the reading\n> > > > of large amounts of stats.\n> > > > However, this patch is effective in its own right, and since there are\n> > > > only a few parts to modify,\n> > > > I think it should be able to be applied to current (preferably\n> > > > pre-v13) PostgreSQL.\n> > >\n> > > +1\n> > >\n> > > +\n> > > + /* We might be better to refresh stats */\n> > > + use_existing_stats = false;\n> > > }\n> > > + else\n> > > + {\n> > >\n> > > - heap_freetuple(classTup);\n> > > + heap_freetuple(classTup);\n> > > + /* The relid has already vacuumed, so we might be better to\n> > > use exiting stats */\n> > > + use_existing_stats = true;\n> > > + }\n> > >\n> > > With that patch, the autovacuum process refreshes the stats in the\n> > > next check if it finds out that the table still needs to be vacuumed.\n> > > But I guess it's not necessarily true because the next table might be\n> > > vacuumed already. So I think we might want to always use the existing\n> > > for the first check. What do you think?\n> > Thanks for your comment.\n> >\n> > If we assume the case where some workers vacuum on large tables\n> > and a single worker vacuum on small tables, the processing\n> > performance of the single worker will be slightly lower if the\n> > existing statistics are checked every time.\n> >\n> > In fact, at first I tried to check the existing stats every time,\n> > but the performance was slightly worse in cases with a small number of workers.\n> > (Checking the existing stats is lightweight , but at high frequency,\n> > it affects processing performance.)\n> > Therefore, at after refresh statistics, determine whether autovac\n> > should use the existing statistics.\n>\n> Yeah, since the test you used uses a lot of small tables, if there are\n> a few workers, checking the existing stats is unlikely to return true\n> (no need to vacuum). So the cost of existing stats check ends up being\n> overhead. Not sure how slow always checking the existing stats was,\n> but given that the shared memory based stats collector patch could\n> improve the performance of refreshing stats, it might be better not to\n> check the existing stats frequently like the patch does. 
Anyway, I\n> think it’s better to evaluate the performance improvement with other\n> cases too.\nYeah, I would like to see how much the performance changes in other cases.\nIn addition, if the shared-based-stats patch is applied, we won't need to reload\na huge stats file, so we will just have to check the stats on\nshared-mem every time.\nPerhaps the logic of table_recheck_autovac could be simpler.\n\n> > BTW, I found some typos in comments, so attache a fixed version.\n>\n> Thank you for updating the patch! I'll also run the performance test\n> you shared with the latest version patch.\nThank you!\nIt's very helpful.\n\nBest regards,\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n", "msg_date": "Thu, 26 Nov 2020 10:41:03 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "\n\nOn 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>\n>>> Hi,\n>>>\n>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>\n>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>\n>>>>> Hi,\n>>>>>\n>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n>>>>>>> data. First probe the existing stats data, and if it shows the table to\n>>>>>>> be already vacuumed, return immediately. If not, *then* force a stats\n>>>>>>> re-read, and check a second time.\n>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n>>>>>> will be improved to first check using the previous refreshed statistics?\n>>>>>> I think that certainly works.\n>>>>>>\n>>>>>> If that's correct, I'll try to create a patch for the PoC\n>>>>>\n>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n>>>>> what was probably a very similar problem.\n>>>>>\n>>>>> This problem seems to be more likely to occur in cases where you have\n>>>>> a large number of tables,\n>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n>>>>> the same time.\n>>>>>\n>>>>> So I followed Tom's advice and created a patch for the PoC.\n>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n>>>>> by another worker on the check after the stats have been updated.\n>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n>>>>> will be required instead of using the existing statistics.\n>>>>>\n>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n>>>>> The tests were conducted in two cases.\n>>>>> (I changed few configurations. see attached scripts)\n>>>>>\n>>>>> 1. Normal VACUUM case\n>>>>> - SET autovacuum = off\n>>>>> - CREATE tables with 100 rows\n>>>>> - DELETE 90 rows for each tables\n>>>>> - SET autovacuum = on and restart PostgreSQL\n>>>>> - Measure the time it takes for all tables to be VACUUMed\n>>>>>\n>>>>> 2. 
Anti wrap round VACUUM case\n>>>>> - CREATE brank tables\n>>>>> - SELECT all of these tables (for generate stats)\n>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n>>>>> - Consumes a lot of XIDs by using txid_curent()\n>>>>> - Measure the time it takes for all tables to be VACUUMed\n>>>>>\n>>>>> For each test case, the following results were obtained by changing\n>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n>>>>>\n>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n>>>>> but I think it's enough to ask for a trend.\n>>>>>\n>>>>> ===========================================================================\n>>>>> [1.Normal VACUUM case]\n>>>>> tables:1000\n>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n>>>>>\n>>>>> tables:5000\n>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n>>>>>\n>>>>> tables:10000\n>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n>>>>>\n>>>>> tables:20000\n>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n>>>>>\n>>>>> [2.Anti wrap round VACUUM case]\n>>>>> tables:1000\n>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n>>>>>\n>>>>> tables:5000\n>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n>>>>>\n>>>>> tables:10000\n>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n>>>>>\n>>>>> tables:20000\n>>>>> autovacuum_max_workers 1: (HEAD) 313 
sec VS (with patch) 331 sec\n>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n>>>>> ===========================================================================\n>>>>>\n>>>>> The cases without patch, the scalability of the worker has decreased\n>>>>> as the number of tables has increased.\n>>>>> In fact, the more workers there are, the longer it takes to complete\n>>>>> VACUUM to all tables.\n>>>>> The cases with patch, it shows good scalability with respect to the\n>>>>> number of workers.\n>>>>\n>>>> It seems a good performance improvement even without the patch of\n>>>> shared memory based stats collector.\n\nSounds great!\n\n\n>>>>\n>>>>>\n>>>>> Note that perf top results showed that hash_search_with_hash_value,\n>>>>> hash_seq_search and\n>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n>>>>> with or without the patch.\n>>>>>\n>>>>> Therefore, there is still a need to find ways to optimize the reading\n>>>>> of large amounts of stats.\n>>>>> However, this patch is effective in its own right, and since there are\n>>>>> only a few parts to modify,\n>>>>> I think it should be able to be applied to current (preferably\n>>>>> pre-v13) PostgreSQL.\n>>>>\n>>>> +1\n>>>>\n>>>> +\n>>>> + /* We might be better to refresh stats */\n>>>> + use_existing_stats = false;\n>>>> }\n>>>> + else\n>>>> + {\n>>>>\n>>>> - heap_freetuple(classTup);\n>>>> + heap_freetuple(classTup);\n>>>> + /* The relid has already vacuumed, so we might be better to\n>>>> use exiting stats */\n>>>> + use_existing_stats = true;\n>>>> + }\n>>>>\n>>>> With that patch, the autovacuum process refreshes the stats in the\n>>>> next check if it finds out that the table still needs to be vacuumed.\n>>>> But I guess it's not necessarily true because the next table might be\n>>>> vacuumed already. So I think we might want to always use the existing\n>>>> for the first check. What do you think?\n>>> Thanks for your comment.\n>>>\n>>> If we assume the case where some workers vacuum on large tables\n>>> and a single worker vacuum on small tables, the processing\n>>> performance of the single worker will be slightly lower if the\n>>> existing statistics are checked every time.\n>>>\n>>> In fact, at first I tried to check the existing stats every time,\n>>> but the performance was slightly worse in cases with a small number of workers.\n\nDo you have this benchmark result?\n\n\n>>> (Checking the existing stats is lightweight , but at high frequency,\n>>> it affects processing performance.)\n>>> Therefore, at after refresh statistics, determine whether autovac\n>>> should use the existing statistics.\n>>\n>> Yeah, since the test you used uses a lot of small tables, if there are\n>> a few workers, checking the existing stats is unlikely to return true\n>> (no need to vacuum). So the cost of existing stats check ends up being\n>> overhead. Not sure how slow always checking the existing stats was,\n>> but given that the shared memory based stats collector patch could\n>> improve the performance of refreshing stats, it might be better not to\n>> check the existing stats frequently like the patch does. 
Anyway, I\n>> think it’s better to evaluate the performance improvement with other\n>> cases too.\n> Yeah, I would like to see how much the performance changes in other cases.\n> In addition, if the shared-based-stats patch is applied, we won't need to reload\n> a huge stats file, so we will just have to check the stats on\n> shared-mem every time.\n> Perhaps the logic of table_recheck_autovac could be simpler.\n> \n>>> BTW, I found some typos in comments, so attache a fixed version.\n\nThe patch adds some duplicated codes into table_recheck_autovac().\nIt's better to make the common function performing them and make\ntable_recheck_autovac() call that common function, to simplify the code.\n\n+\t\t/*\n+\t \t * Get the applicable reloptions. If it is a TOAST table, try to get the\n+\t \t * main table reloptions if the toast table itself doesn't have.\n+\t \t */\n+\t\tavopts = extract_autovac_opts(classTup, pg_class_desc);\n+\t\tif (classForm->relkind == RELKIND_TOASTVALUE &&\n+\t\t\tavopts == NULL && table_toast_map != NULL)\n+\t\t{\n+\t\t\tav_relation *hentry;\n+\t\t\tbool\t\tfound;\n+\n+\t\t\thentry = hash_search(table_toast_map, &relid, HASH_FIND, &found);\n+\t\t\tif (found && hentry->ar_hasrelopts)\n+\t\t\tavopts = &hentry->ar_reloptions;\n+\t\t}\n\nThe above is performed both when using the existing stats and\nalso when the stats are refreshed. But it's actually required\nonly at once?\n\n-\theap_freetuple(classTup);\n+\t\theap_freetuple(classTup);\n\nWith the patch, heap_freetuple() is not called when either doanalyze\nor dovacuum is true. But it should be called even in that case,\nlike it is originally?\n\n\n>>\n>> Thank you for updating the patch! I'll also run the performance test\n>> you shared with the latest version patch.\n\n+1\n\n\n> Thank you!\n> It's very helpful.\n\nAgreed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 27 Nov 2020 01:43:25 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n> > On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>\n> >> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n> >> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>\n> >>> Hi,\n> >>>\n> >>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>\n> >>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> >>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>\n> >>>>> Hi,\n> >>>>>\n> >>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> >>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n> >>>>>>> data. First probe the existing stats data, and if it shows the table to\n> >>>>>>> be already vacuumed, return immediately. 
If not, *then* force a stats\n> >>>>>>> re-read, and check a second time.\n> >>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n> >>>>>> will be improved to first check using the previous refreshed statistics?\n> >>>>>> I think that certainly works.\n> >>>>>>\n> >>>>>> If that's correct, I'll try to create a patch for the PoC\n> >>>>>\n> >>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> >>>>> what was probably a very similar problem.\n> >>>>>\n> >>>>> This problem seems to be more likely to occur in cases where you have\n> >>>>> a large number of tables,\n> >>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n> >>>>> the same time.\n> >>>>>\n> >>>>> So I followed Tom's advice and created a patch for the PoC.\n> >>>>> This patch will enable a flag in the table_recheck_autovac function to use\n> >>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> >>>>> by another worker on the check after the stats have been updated.\n> >>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n> >>>>> will be required instead of using the existing statistics.\n> >>>>>\n> >>>>> I did simple test with HEAD and HEAD + this PoC patch.\n> >>>>> The tests were conducted in two cases.\n> >>>>> (I changed few configurations. see attached scripts)\n> >>>>>\n> >>>>> 1. Normal VACUUM case\n> >>>>> - SET autovacuum = off\n> >>>>> - CREATE tables with 100 rows\n> >>>>> - DELETE 90 rows for each tables\n> >>>>> - SET autovacuum = on and restart PostgreSQL\n> >>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>\n> >>>>> 2. Anti wrap round VACUUM case\n> >>>>> - CREATE brank tables\n> >>>>> - SELECT all of these tables (for generate stats)\n> >>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> >>>>> - Consumes a lot of XIDs by using txid_curent()\n> >>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>\n> >>>>> For each test case, the following results were obtained by changing\n> >>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> >>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n> >>>>>\n> >>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> >>>>> but I think it's enough to ask for a trend.\n> >>>>>\n> >>>>> ===========================================================================\n> >>>>> [1.Normal VACUUM case]\n> >>>>> tables:1000\n> >>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>\n> >>>>> tables:5000\n> >>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> >>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> >>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> >>>>>\n> >>>>> tables:10000\n> >>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> >>>>> 
autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> >>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> >>>>>\n> >>>>> tables:20000\n> >>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> >>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> >>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> >>>>>\n> >>>>> [2.Anti wrap round VACUUM case]\n> >>>>> tables:1000\n> >>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> >>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> >>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> >>>>>\n> >>>>> tables:5000\n> >>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> >>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> >>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> >>>>>\n> >>>>> tables:10000\n> >>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> >>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> >>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> >>>>>\n> >>>>> tables:20000\n> >>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> >>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> >>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> >>>>> ===========================================================================\n> >>>>>\n> >>>>> The cases without patch, the scalability of the worker has decreased\n> >>>>> as the number of tables has increased.\n> >>>>> In fact, the more workers there are, the longer it takes to complete\n> >>>>> VACUUM to all tables.\n> >>>>> The cases with patch, it shows good scalability with respect to the\n> >>>>> number of workers.\n> >>>>\n> >>>> It seems a good performance improvement even without the patch of\n> >>>> shared memory based stats collector.\n>\n> Sounds great!\n>\n>\n> >>>>\n> >>>>>\n> >>>>> Note that perf top results showed that hash_search_with_hash_value,\n> >>>>> hash_seq_search and\n> >>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> >>>>> with or without the patch.\n> >>>>>\n> >>>>> Therefore, there is still a need to find ways to optimize the reading\n> >>>>> of large amounts of stats.\n> >>>>> However, this patch is effective in its own right, and since there are\n> >>>>> only a few parts to modify,\n> >>>>> I think it should be able to be applied to current (preferably\n> >>>>> pre-v13) PostgreSQL.\n> >>>>\n> >>>> +1\n> >>>>\n> >>>> +\n> >>>> + /* We might be better to refresh stats */\n> >>>> + use_existing_stats = false;\n> >>>> }\n> >>>> + else\n> >>>> + {\n> >>>>\n> >>>> - heap_freetuple(classTup);\n> >>>> + 
heap_freetuple(classTup);\n> >>>> + /* The relid has already vacuumed, so we might be better to\n> >>>> use exiting stats */\n> >>>> + use_existing_stats = true;\n> >>>> + }\n> >>>>\n> >>>> With that patch, the autovacuum process refreshes the stats in the\n> >>>> next check if it finds out that the table still needs to be vacuumed.\n> >>>> But I guess it's not necessarily true because the next table might be\n> >>>> vacuumed already. So I think we might want to always use the existing\n> >>>> for the first check. What do you think?\n> >>> Thanks for your comment.\n> >>>\n> >>> If we assume the case where some workers vacuum on large tables\n> >>> and a single worker vacuum on small tables, the processing\n> >>> performance of the single worker will be slightly lower if the\n> >>> existing statistics are checked every time.\n> >>>\n> >>> In fact, at first I tried to check the existing stats every time,\n> >>> but the performance was slightly worse in cases with a small number of workers.\n>\n> Do you have this benchmark result?\n\nFWIW I'd like to share the benchmark results of the same test in my\nenvironment as Kasahara-san did. In this performance evaluation, I\nmeasured the execution time for the loop in do_autovacuum(), line 2318\nin autovacuum.c, where taking a major time of autovacuum. So it checks\nhow much time an autovacuum worker took to process the list of the\ncollected all tables, including refreshing and checking the stats,\nvacuuming tables, and checking the existing stats. Since all tables\nare the same size (only 1 page) there is no big difference in the\nexecution time between concurrent autovacuum workers. The following\nresults show the maximum execution time among the autovacuum workers.\n From the left the execution time of the current HEAD, Kasahara-san's\npatch, the method of always checking the existing stats, in seconds.\nThe result has a similar trend to what Kasahara-san mentioned.\n\n1000 tables:\n autovac_workers 1 : 13s, 13s, 13s\n autovac_workers 2 : 6s, 4s, 5s\n autovac_workers 3 : 3s, 4s, 4s\n autovac_workers 5 : 3s, 3s, 3s\n autovac_workers 10: 2s, 3s, 3s\n\n5000 tables:\n autovac_workers 1 : 71s, 71s, 132s\n autovac_workers 2 : 37s, 32s, 48s\n autovac_workers 3 : 29s, 26s, 38s\n autovac_workers 5 : 20s, 19s, 19s\n autovac_workers 10: 13s, 8s, 9s\n\n10000 tables:\n autovac_workers 1 : 158s,157s, 290s\n autovac_workers 2 : 80s, 53s, 151s\n autovac_workers 3 : 75s, 67s, 89s\n autovac_workers 5 : 61s, 42s, 53s\n autovac_workers 10: 69s, 26s, 33s\n\n20000 tables:\n autovac_workers 1 : 379s, 380s, 695s\n autovac_workers 2 : 236s, 232s, 369s\n autovac_workers 3 : 222s, 181s, 238s\n autovac_workers 5 : 212s, 132s, 167s\n autovac_workers 10: 317s, 91s, 117s\n\nI'm benchmarking the performance improvement by the patch on other\nworkloads. 
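For reference, the per-worker loop time can be captured with something as
simple as the instr_time macros (portability/instr_time.h) around the table
loop in do_autovacuum(); this is only a sketch, not necessarily the exact
instrumentation used for the numbers above:

    instr_time  loop_start, loop_time;

    INSTR_TIME_SET_CURRENT(loop_start);

    /* ... existing foreach loop over table_oids ... */

    INSTR_TIME_SET_CURRENT(loop_time);
    INSTR_TIME_SUBTRACT(loop_time, loop_start);
    elog(LOG, "autovacuum worker processed its table list in %.3f s",
         INSTR_TIME_GET_DOUBLE(loop_time));
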
I'll share that result.\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 27 Nov 2020 17:21:39 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "Hi,\n\nOn Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n> > On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>\n> >> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n> >> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>\n> >>> Hi,\n> >>>\n> >>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>\n> >>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> >>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>\n> >>>>> Hi,\n> >>>>>\n> >>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> >>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n> >>>>>>> data. First probe the existing stats data, and if it shows the table to\n> >>>>>>> be already vacuumed, return immediately. If not, *then* force a stats\n> >>>>>>> re-read, and check a second time.\n> >>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n> >>>>>> will be improved to first check using the previous refreshed statistics?\n> >>>>>> I think that certainly works.\n> >>>>>>\n> >>>>>> If that's correct, I'll try to create a patch for the PoC\n> >>>>>\n> >>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> >>>>> what was probably a very similar problem.\n> >>>>>\n> >>>>> This problem seems to be more likely to occur in cases where you have\n> >>>>> a large number of tables,\n> >>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n> >>>>> the same time.\n> >>>>>\n> >>>>> So I followed Tom's advice and created a patch for the PoC.\n> >>>>> This patch will enable a flag in the table_recheck_autovac function to use\n> >>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> >>>>> by another worker on the check after the stats have been updated.\n> >>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n> >>>>> will be required instead of using the existing statistics.\n> >>>>>\n> >>>>> I did simple test with HEAD and HEAD + this PoC patch.\n> >>>>> The tests were conducted in two cases.\n> >>>>> (I changed few configurations. see attached scripts)\n> >>>>>\n> >>>>> 1. Normal VACUUM case\n> >>>>> - SET autovacuum = off\n> >>>>> - CREATE tables with 100 rows\n> >>>>> - DELETE 90 rows for each tables\n> >>>>> - SET autovacuum = on and restart PostgreSQL\n> >>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>\n> >>>>> 2. 
Anti wrap round VACUUM case\n> >>>>> - CREATE brank tables\n> >>>>> - SELECT all of these tables (for generate stats)\n> >>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> >>>>> - Consumes a lot of XIDs by using txid_curent()\n> >>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>\n> >>>>> For each test case, the following results were obtained by changing\n> >>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> >>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n> >>>>>\n> >>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> >>>>> but I think it's enough to ask for a trend.\n> >>>>>\n> >>>>> ===========================================================================\n> >>>>> [1.Normal VACUUM case]\n> >>>>> tables:1000\n> >>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>\n> >>>>> tables:5000\n> >>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> >>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> >>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> >>>>>\n> >>>>> tables:10000\n> >>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> >>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> >>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> >>>>>\n> >>>>> tables:20000\n> >>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> >>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> >>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> >>>>>\n> >>>>> [2.Anti wrap round VACUUM case]\n> >>>>> tables:1000\n> >>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> >>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> >>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> >>>>>\n> >>>>> tables:5000\n> >>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> >>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> >>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> >>>>>\n> >>>>> tables:10000\n> >>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> >>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> >>>>> 
autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> >>>>>\n> >>>>> tables:20000\n> >>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> >>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> >>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> >>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> >>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> >>>>> ===========================================================================\n> >>>>>\n> >>>>> The cases without patch, the scalability of the worker has decreased\n> >>>>> as the number of tables has increased.\n> >>>>> In fact, the more workers there are, the longer it takes to complete\n> >>>>> VACUUM to all tables.\n> >>>>> The cases with patch, it shows good scalability with respect to the\n> >>>>> number of workers.\n> >>>>\n> >>>> It seems a good performance improvement even without the patch of\n> >>>> shared memory based stats collector.\n>\n> Sounds great!\n>\n>\n> >>>>\n> >>>>>\n> >>>>> Note that perf top results showed that hash_search_with_hash_value,\n> >>>>> hash_seq_search and\n> >>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> >>>>> with or without the patch.\n> >>>>>\n> >>>>> Therefore, there is still a need to find ways to optimize the reading\n> >>>>> of large amounts of stats.\n> >>>>> However, this patch is effective in its own right, and since there are\n> >>>>> only a few parts to modify,\n> >>>>> I think it should be able to be applied to current (preferably\n> >>>>> pre-v13) PostgreSQL.\n> >>>>\n> >>>> +1\n> >>>>\n> >>>> +\n> >>>> + /* We might be better to refresh stats */\n> >>>> + use_existing_stats = false;\n> >>>> }\n> >>>> + else\n> >>>> + {\n> >>>>\n> >>>> - heap_freetuple(classTup);\n> >>>> + heap_freetuple(classTup);\n> >>>> + /* The relid has already vacuumed, so we might be better to\n> >>>> use exiting stats */\n> >>>> + use_existing_stats = true;\n> >>>> + }\n> >>>>\n> >>>> With that patch, the autovacuum process refreshes the stats in the\n> >>>> next check if it finds out that the table still needs to be vacuumed.\n> >>>> But I guess it's not necessarily true because the next table might be\n> >>>> vacuumed already. So I think we might want to always use the existing\n> >>>> for the first check. What do you think?\n> >>> Thanks for your comment.\n> >>>\n> >>> If we assume the case where some workers vacuum on large tables\n> >>> and a single worker vacuum on small tables, the processing\n> >>> performance of the single worker will be slightly lower if the\n> >>> existing statistics are checked every time.\n> >>>\n> >>> In fact, at first I tried to check the existing stats every time,\n> >>> but the performance was slightly worse in cases with a small number of workers.\n>\n> Do you have this benchmark result?\n>\n>\n> >>> (Checking the existing stats is lightweight , but at high frequency,\n> >>> it affects processing performance.)\n> >>> Therefore, at after refresh statistics, determine whether autovac\n> >>> should use the existing statistics.\n> >>\n> >> Yeah, since the test you used uses a lot of small tables, if there are\n> >> a few workers, checking the existing stats is unlikely to return true\n> >> (no need to vacuum). So the cost of existing stats check ends up being\n> >> overhead. 
Not sure how slow always checking the existing stats was,\n> >> but given that the shared memory based stats collector patch could\n> >> improve the performance of refreshing stats, it might be better not to\n> >> check the existing stats frequently like the patch does. Anyway, I\n> >> think it’s better to evaluate the performance improvement with other\n> >> cases too.\n> > Yeah, I would like to see how much the performance changes in other cases.\n> > In addition, if the shared-based-stats patch is applied, we won't need to reload\n> > a huge stats file, so we will just have to check the stats on\n> > shared-mem every time.\n> > Perhaps the logic of table_recheck_autovac could be simpler.\n> >\n> >>> BTW, I found some typos in comments, so attache a fixed version.\n>\n> The patch adds some duplicated codes into table_recheck_autovac().\n> It's better to make the common function performing them and make\n> table_recheck_autovac() call that common function, to simplify the code.\nThanks for your comment.\nHmm.. I've cut out the duplicate part.\nAttach the patch.\nCould you confirm that it fits your expecting?\n\n>\n> + /*\n> + * Get the applicable reloptions. If it is a TOAST table, try to get the\n> + * main table reloptions if the toast table itself doesn't have.\n> + */\n> + avopts = extract_autovac_opts(classTup, pg_class_desc);\n> + if (classForm->relkind == RELKIND_TOASTVALUE &&\n> + avopts == NULL && table_toast_map != NULL)\n> + {\n> + av_relation *hentry;\n> + bool found;\n> +\n> + hentry = hash_search(table_toast_map, &relid, HASH_FIND, &found);\n> + if (found && hentry->ar_hasrelopts)\n> + avopts = &hentry->ar_reloptions;\n> + }\n>\n> The above is performed both when using the existing stats and\n> also when the stats are refreshed. But it's actually required\n> only at once?\nYeah right. Fixed.\n\n>\n> - heap_freetuple(classTup);\n> + heap_freetuple(classTup);\n>\n> With the patch, heap_freetuple() is not called when either doanalyze\n> or dovacuum is true. But it should be called even in that case,\n> like it is originally?\nYeah right. Fixed.\n\nBest regards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com", "msg_date": "Fri, 27 Nov 2020 18:38:45 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "On Fri, Nov 27, 2020 at 5:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> >\n> >\n> > On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n> > > On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >>\n> > >> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n> > >> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>\n> > >>> Hi,\n> > >>>\n> > >>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >>>>\n> > >>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> > >>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>\n> > >>>>> Hi,\n> > >>>>>\n> > >>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> > >>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n> > >>>>>>> data. First probe the existing stats data, and if it shows the table to\n> > >>>>>>> be already vacuumed, return immediately. 
If not, *then* force a stats\n> > >>>>>>> re-read, and check a second time.\n> > >>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n> > >>>>>> will be improved to first check using the previous refreshed statistics?\n> > >>>>>> I think that certainly works.\n> > >>>>>>\n> > >>>>>> If that's correct, I'll try to create a patch for the PoC\n> > >>>>>\n> > >>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> > >>>>> what was probably a very similar problem.\n> > >>>>>\n> > >>>>> This problem seems to be more likely to occur in cases where you have\n> > >>>>> a large number of tables,\n> > >>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n> > >>>>> the same time.\n> > >>>>>\n> > >>>>> So I followed Tom's advice and created a patch for the PoC.\n> > >>>>> This patch will enable a flag in the table_recheck_autovac function to use\n> > >>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> > >>>>> by another worker on the check after the stats have been updated.\n> > >>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n> > >>>>> will be required instead of using the existing statistics.\n> > >>>>>\n> > >>>>> I did simple test with HEAD and HEAD + this PoC patch.\n> > >>>>> The tests were conducted in two cases.\n> > >>>>> (I changed few configurations. see attached scripts)\n> > >>>>>\n> > >>>>> 1. Normal VACUUM case\n> > >>>>> - SET autovacuum = off\n> > >>>>> - CREATE tables with 100 rows\n> > >>>>> - DELETE 90 rows for each tables\n> > >>>>> - SET autovacuum = on and restart PostgreSQL\n> > >>>>> - Measure the time it takes for all tables to be VACUUMed\n> > >>>>>\n> > >>>>> 2. Anti wrap round VACUUM case\n> > >>>>> - CREATE brank tables\n> > >>>>> - SELECT all of these tables (for generate stats)\n> > >>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> > >>>>> - Consumes a lot of XIDs by using txid_curent()\n> > >>>>> - Measure the time it takes for all tables to be VACUUMed\n> > >>>>>\n> > >>>>> For each test case, the following results were obtained by changing\n> > >>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> > >>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n> > >>>>>\n> > >>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> > >>>>> but I think it's enough to ask for a trend.\n> > >>>>>\n> > >>>>> ===========================================================================\n> > >>>>> [1.Normal VACUUM case]\n> > >>>>> tables:1000\n> > >>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> > >>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> > >>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> > >>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> > >>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> > >>>>>\n> > >>>>> tables:5000\n> > >>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> > >>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> > >>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> > >>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> > >>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> > >>>>>\n> > >>>>> tables:10000\n> > >>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> > >>>>> autovacuum_max_workers 2: 
(HEAD) 119 sec VS (with patch) 98 sec\n> > >>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> > >>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> > >>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> > >>>>>\n> > >>>>> tables:20000\n> > >>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> > >>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> > >>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> > >>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> > >>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> > >>>>>\n> > >>>>> [2.Anti wrap round VACUUM case]\n> > >>>>> tables:1000\n> > >>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> > >>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> > >>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> > >>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> > >>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> > >>>>>\n> > >>>>> tables:5000\n> > >>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> > >>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> > >>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> > >>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> > >>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> > >>>>>\n> > >>>>> tables:10000\n> > >>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> > >>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> > >>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> > >>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> > >>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> > >>>>>\n> > >>>>> tables:20000\n> > >>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> > >>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> > >>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> > >>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> > >>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> > >>>>> ===========================================================================\n> > >>>>>\n> > >>>>> The cases without patch, the scalability of the worker has decreased\n> > >>>>> as the number of tables has increased.\n> > >>>>> In fact, the more workers there are, the longer it takes to complete\n> > >>>>> VACUUM to all tables.\n> > >>>>> The cases with patch, it shows good scalability with respect to the\n> > >>>>> number of workers.\n> > >>>>\n> > >>>> It seems a good performance improvement even without the patch of\n> > >>>> shared memory based stats collector.\n> >\n> > Sounds great!\n> >\n> >\n> > >>>>\n> > >>>>>\n> > >>>>> Note that perf top results showed that hash_search_with_hash_value,\n> > >>>>> hash_seq_search and\n> > >>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> > >>>>> with or without the patch.\n> > >>>>>\n> > >>>>> Therefore, there is still a need to find ways to optimize the reading\n> > >>>>> of large amounts of stats.\n> > >>>>> However, this patch is effective in its own right, and since there are\n> > >>>>> only a few parts to modify,\n> > >>>>> I think it should be able to be applied to current (preferably\n> > >>>>> 
pre-v13) PostgreSQL.\n> > >>>>\n> > >>>> +1\n> > >>>>\n> > >>>> +\n> > >>>> + /* We might be better to refresh stats */\n> > >>>> + use_existing_stats = false;\n> > >>>> }\n> > >>>> + else\n> > >>>> + {\n> > >>>>\n> > >>>> - heap_freetuple(classTup);\n> > >>>> + heap_freetuple(classTup);\n> > >>>> + /* The relid has already vacuumed, so we might be better to\n> > >>>> use exiting stats */\n> > >>>> + use_existing_stats = true;\n> > >>>> + }\n> > >>>>\n> > >>>> With that patch, the autovacuum process refreshes the stats in the\n> > >>>> next check if it finds out that the table still needs to be vacuumed.\n> > >>>> But I guess it's not necessarily true because the next table might be\n> > >>>> vacuumed already. So I think we might want to always use the existing\n> > >>>> for the first check. What do you think?\n> > >>> Thanks for your comment.\n> > >>>\n> > >>> If we assume the case where some workers vacuum on large tables\n> > >>> and a single worker vacuum on small tables, the processing\n> > >>> performance of the single worker will be slightly lower if the\n> > >>> existing statistics are checked every time.\n> > >>>\n> > >>> In fact, at first I tried to check the existing stats every time,\n> > >>> but the performance was slightly worse in cases with a small number of workers.\n> >\n> > Do you have this benchmark result?\n>\n> FWIW I'd like to share the benchmark results of the same test in my\n> environment as Kasahara-san did. In this performance evaluation, I\n> measured the execution time for the loop in do_autovacuum(), line 2318\n> in autovacuum.c, where taking a major time of autovacuum. So it checks\n> how much time an autovacuum worker took to process the list of the\n> collected all tables, including refreshing and checking the stats,\n> vacuuming tables, and checking the existing stats. Since all tables\n> are the same size (only 1 page) there is no big difference in the\n> execution time between concurrent autovacuum workers. The following\n> results show the maximum execution time among the autovacuum workers.\n> From the left the execution time of the current HEAD, Kasahara-san's\n> patch, the method of always checking the existing stats, in seconds.\n> The result has a similar trend to what Kasahara-san mentioned.\nThanks!\nYes, I think the results are as expected.\n\n> 1000 tables:\n> autovac_workers 1 : 13s, 13s, 13s\n> autovac_workers 2 : 6s, 4s, 5s\n> autovac_workers 3 : 3s, 4s, 4s\n> autovac_workers 5 : 3s, 3s, 3s\n> autovac_workers 10: 2s, 3s, 3s\n>\n> 5000 tables:\n> autovac_workers 1 : 71s, 71s, 132s\n> autovac_workers 2 : 37s, 32s, 48s\n> autovac_workers 3 : 29s, 26s, 38s\n> autovac_workers 5 : 20s, 19s, 19s\n> autovac_workers 10: 13s, 8s, 9s\n>\n> 10000 tables:\n> autovac_workers 1 : 158s,157s, 290s\n> autovac_workers 2 : 80s, 53s, 151s\n> autovac_workers 3 : 75s, 67s, 89s\n> autovac_workers 5 : 61s, 42s, 53s\n> autovac_workers 10: 69s, 26s, 33s\n>\n> 20000 tables:\n> autovac_workers 1 : 379s, 380s, 695s\n> autovac_workers 2 : 236s, 232s, 369s\n> autovac_workers 3 : 222s, 181s, 238s\n> autovac_workers 5 : 212s, 132s, 167s\n> autovac_workers 10: 317s, 91s, 117s\n>\n> I'm benchmarking the performance improvement by the patch on other\n> workloads. 
I'll share that result.\n+1\nIf you would like to try the patch I just posted, it would be very helpful.\n\nBest regards,\n\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EnterpriseDB: https://www.enterprisedb.com/\n\n\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n", "msg_date": "Fri, 27 Nov 2020 18:46:36 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "\n\nOn 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n> Hi,\n> \n> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n>>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>\n>>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>\n>>>>> Hi,\n>>>>>\n>>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>>>\n>>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>\n>>>>>>> Hi,\n>>>>>>>\n>>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n>>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n>>>>>>>>> be already vacuumed, return immediately. If not, *then* force a stats\n>>>>>>>>> re-read, and check a second time.\n>>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n>>>>>>>> will be improved to first check using the previous refreshed statistics?\n>>>>>>>> I think that certainly works.\n>>>>>>>>\n>>>>>>>> If that's correct, I'll try to create a patch for the PoC\n>>>>>>>\n>>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n>>>>>>> what was probably a very similar problem.\n>>>>>>>\n>>>>>>> This problem seems to be more likely to occur in cases where you have\n>>>>>>> a large number of tables,\n>>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n>>>>>>> the same time.\n>>>>>>>\n>>>>>>> So I followed Tom's advice and created a patch for the PoC.\n>>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n>>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n>>>>>>> by another worker on the check after the stats have been updated.\n>>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n>>>>>>> will be required instead of using the existing statistics.\n>>>>>>>\n>>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n>>>>>>> The tests were conducted in two cases.\n>>>>>>> (I changed few configurations. see attached scripts)\n>>>>>>>\n>>>>>>> 1. Normal VACUUM case\n>>>>>>> - SET autovacuum = off\n>>>>>>> - CREATE tables with 100 rows\n>>>>>>> - DELETE 90 rows for each tables\n>>>>>>> - SET autovacuum = on and restart PostgreSQL\n>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n>>>>>>>\n>>>>>>> 2. 
Anti wrap round VACUUM case\n>>>>>>> - CREATE brank tables\n>>>>>>> - SELECT all of these tables (for generate stats)\n>>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n>>>>>>> - Consumes a lot of XIDs by using txid_curent()\n>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n>>>>>>>\n>>>>>>> For each test case, the following results were obtained by changing\n>>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n>>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n>>>>>>>\n>>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n>>>>>>> but I think it's enough to ask for a trend.\n>>>>>>>\n>>>>>>> ===========================================================================\n>>>>>>> [1.Normal VACUUM case]\n>>>>>>> tables:1000\n>>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n>>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n>>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n>>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n>>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n>>>>>>>\n>>>>>>> tables:5000\n>>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n>>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n>>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n>>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n>>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n>>>>>>>\n>>>>>>> tables:10000\n>>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n>>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n>>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n>>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n>>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n>>>>>>>\n>>>>>>> tables:20000\n>>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n>>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n>>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n>>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n>>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n>>>>>>>\n>>>>>>> [2.Anti wrap round VACUUM case]\n>>>>>>> tables:1000\n>>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n>>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n>>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n>>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n>>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n>>>>>>>\n>>>>>>> tables:5000\n>>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n>>>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n>>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n>>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n>>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n>>>>>>>\n>>>>>>> tables:10000\n>>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n>>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n>>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n>>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n>>>>>>> 
autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n>>>>>>>\n>>>>>>> tables:20000\n>>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n>>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n>>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n>>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n>>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n>>>>>>> ===========================================================================\n>>>>>>>\n>>>>>>> The cases without patch, the scalability of the worker has decreased\n>>>>>>> as the number of tables has increased.\n>>>>>>> In fact, the more workers there are, the longer it takes to complete\n>>>>>>> VACUUM to all tables.\n>>>>>>> The cases with patch, it shows good scalability with respect to the\n>>>>>>> number of workers.\n>>>>>>\n>>>>>> It seems a good performance improvement even without the patch of\n>>>>>> shared memory based stats collector.\n>>\n>> Sounds great!\n>>\n>>\n>>>>>>\n>>>>>>>\n>>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n>>>>>>> hash_seq_search and\n>>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n>>>>>>> with or without the patch.\n>>>>>>>\n>>>>>>> Therefore, there is still a need to find ways to optimize the reading\n>>>>>>> of large amounts of stats.\n>>>>>>> However, this patch is effective in its own right, and since there are\n>>>>>>> only a few parts to modify,\n>>>>>>> I think it should be able to be applied to current (preferably\n>>>>>>> pre-v13) PostgreSQL.\n>>>>>>\n>>>>>> +1\n>>>>>>\n>>>>>> +\n>>>>>> + /* We might be better to refresh stats */\n>>>>>> + use_existing_stats = false;\n>>>>>> }\n>>>>>> + else\n>>>>>> + {\n>>>>>>\n>>>>>> - heap_freetuple(classTup);\n>>>>>> + heap_freetuple(classTup);\n>>>>>> + /* The relid has already vacuumed, so we might be better to\n>>>>>> use exiting stats */\n>>>>>> + use_existing_stats = true;\n>>>>>> + }\n>>>>>>\n>>>>>> With that patch, the autovacuum process refreshes the stats in the\n>>>>>> next check if it finds out that the table still needs to be vacuumed.\n>>>>>> But I guess it's not necessarily true because the next table might be\n>>>>>> vacuumed already. So I think we might want to always use the existing\n>>>>>> for the first check. What do you think?\n>>>>> Thanks for your comment.\n>>>>>\n>>>>> If we assume the case where some workers vacuum on large tables\n>>>>> and a single worker vacuum on small tables, the processing\n>>>>> performance of the single worker will be slightly lower if the\n>>>>> existing statistics are checked every time.\n>>>>>\n>>>>> In fact, at first I tried to check the existing stats every time,\n>>>>> but the performance was slightly worse in cases with a small number of workers.\n>>\n>> Do you have this benchmark result?\n>>\n>>\n>>>>> (Checking the existing stats is lightweight , but at high frequency,\n>>>>> it affects processing performance.)\n>>>>> Therefore, at after refresh statistics, determine whether autovac\n>>>>> should use the existing statistics.\n>>>>\n>>>> Yeah, since the test you used uses a lot of small tables, if there are\n>>>> a few workers, checking the existing stats is unlikely to return true\n>>>> (no need to vacuum). So the cost of existing stats check ends up being\n>>>> overhead. 
Not sure how slow always checking the existing stats was,\n>>>> but given that the shared memory based stats collector patch could\n>>>> improve the performance of refreshing stats, it might be better not to\n>>>> check the existing stats frequently like the patch does. Anyway, I\n>>>> think it’s better to evaluate the performance improvement with other\n>>>> cases too.\n>>> Yeah, I would like to see how much the performance changes in other cases.\n>>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n>>> a huge stats file, so we will just have to check the stats on\n>>> shared-mem every time.\n>>> Perhaps the logic of table_recheck_autovac could be simpler.\n>>>\n>>>>> BTW, I found some typos in comments, so attache a fixed version.\n>>\n>> The patch adds some duplicated codes into table_recheck_autovac().\n>> It's better to make the common function performing them and make\n>> table_recheck_autovac() call that common function, to simplify the code.\n> Thanks for your comment.\n> Hmm.. I've cut out the duplicate part.\n> Attach the patch.\n> Could you confirm that it fits your expecting?\n\nYes, thanks for updataing the patch! Here are another review comments.\n\n+\tshared = pgstat_fetch_stat_dbentry(InvalidOid);\n+\tdbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n\nWhen using the existing stats, ISTM that these are not necessary and\nwe can reuse \"shared\" and \"dbentry\" obtained before. Right?\n\n+\t\t/* We might be better to refresh stats */\n+\t\tuse_existing_stats = false;\n\nI think that we should add more comments about why it's better to\nrefresh the stats in this case.\n\n+\t\t/* The relid has already vacuumed, so we might be better to use existing stats */\n+\t\tuse_existing_stats = true;\n\nI think that we should add more comments about why it's better to\nreuse the stats in this case.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 27 Nov 2020 21:51:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "Hi, Thanks for you comments.\n\nOn Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n> > Hi,\n> >\n> > On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n> >>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>\n> >>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n> >>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>\n> >>>>> Hi,\n> >>>>>\n> >>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>>>\n> >>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> >>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>\n> >>>>>>> Hi,\n> >>>>>>>\n> >>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> >>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n> >>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n> >>>>>>>>> be already vacuumed, return immediately. 
If not, *then* force a stats\n> >>>>>>>>> re-read, and check a second time.\n> >>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n> >>>>>>>> will be improved to first check using the previous refreshed statistics?\n> >>>>>>>> I think that certainly works.\n> >>>>>>>>\n> >>>>>>>> If that's correct, I'll try to create a patch for the PoC\n> >>>>>>>\n> >>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> >>>>>>> what was probably a very similar problem.\n> >>>>>>>\n> >>>>>>> This problem seems to be more likely to occur in cases where you have\n> >>>>>>> a large number of tables,\n> >>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n> >>>>>>> the same time.\n> >>>>>>>\n> >>>>>>> So I followed Tom's advice and created a patch for the PoC.\n> >>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n> >>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> >>>>>>> by another worker on the check after the stats have been updated.\n> >>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n> >>>>>>> will be required instead of using the existing statistics.\n> >>>>>>>\n> >>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n> >>>>>>> The tests were conducted in two cases.\n> >>>>>>> (I changed few configurations. see attached scripts)\n> >>>>>>>\n> >>>>>>> 1. Normal VACUUM case\n> >>>>>>> - SET autovacuum = off\n> >>>>>>> - CREATE tables with 100 rows\n> >>>>>>> - DELETE 90 rows for each tables\n> >>>>>>> - SET autovacuum = on and restart PostgreSQL\n> >>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>>>\n> >>>>>>> 2. Anti wrap round VACUUM case\n> >>>>>>> - CREATE brank tables\n> >>>>>>> - SELECT all of these tables (for generate stats)\n> >>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> >>>>>>> - Consumes a lot of XIDs by using txid_curent()\n> >>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>>>\n> >>>>>>> For each test case, the following results were obtained by changing\n> >>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> >>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n> >>>>>>>\n> >>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> >>>>>>> but I think it's enough to ask for a trend.\n> >>>>>>>\n> >>>>>>> ===========================================================================\n> >>>>>>> [1.Normal VACUUM case]\n> >>>>>>> tables:1000\n> >>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> >>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>>>\n> >>>>>>> tables:5000\n> >>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> >>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> >>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> >>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> >>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> >>>>>>>\n> >>>>>>> tables:10000\n> >>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> >>>>>>> autovacuum_max_workers 2: 
(HEAD) 119 sec VS (with patch) 98 sec\n> >>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> >>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> >>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> >>>>>>>\n> >>>>>>> tables:20000\n> >>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> >>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> >>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> >>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> >>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> >>>>>>>\n> >>>>>>> [2.Anti wrap round VACUUM case]\n> >>>>>>> tables:1000\n> >>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> >>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> >>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> >>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> >>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> >>>>>>>\n> >>>>>>> tables:5000\n> >>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> >>>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> >>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> >>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> >>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> >>>>>>>\n> >>>>>>> tables:10000\n> >>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> >>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> >>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> >>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> >>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> >>>>>>>\n> >>>>>>> tables:20000\n> >>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> >>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> >>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> >>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> >>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> >>>>>>> ===========================================================================\n> >>>>>>>\n> >>>>>>> The cases without patch, the scalability of the worker has decreased\n> >>>>>>> as the number of tables has increased.\n> >>>>>>> In fact, the more workers there are, the longer it takes to complete\n> >>>>>>> VACUUM to all tables.\n> >>>>>>> The cases with patch, it shows good scalability with respect to the\n> >>>>>>> number of workers.\n> >>>>>>\n> >>>>>> It seems a good performance improvement even without the patch of\n> >>>>>> shared memory based stats collector.\n> >>\n> >> Sounds great!\n> >>\n> >>\n> >>>>>>\n> >>>>>>>\n> >>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n> >>>>>>> hash_seq_search and\n> >>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> >>>>>>> with or without the patch.\n> >>>>>>>\n> >>>>>>> Therefore, there is still a need to find ways to optimize the reading\n> >>>>>>> of large amounts of stats.\n> >>>>>>> However, this patch is effective in its own right, and since there are\n> >>>>>>> only a few parts to modify,\n> >>>>>>> I think it should be able to be applied to current (preferably\n> 
>>>>>>> pre-v13) PostgreSQL.\n> >>>>>>\n> >>>>>> +1\n> >>>>>>\n> >>>>>> +\n> >>>>>> + /* We might be better to refresh stats */\n> >>>>>> + use_existing_stats = false;\n> >>>>>> }\n> >>>>>> + else\n> >>>>>> + {\n> >>>>>>\n> >>>>>> - heap_freetuple(classTup);\n> >>>>>> + heap_freetuple(classTup);\n> >>>>>> + /* The relid has already vacuumed, so we might be better to\n> >>>>>> use exiting stats */\n> >>>>>> + use_existing_stats = true;\n> >>>>>> + }\n> >>>>>>\n> >>>>>> With that patch, the autovacuum process refreshes the stats in the\n> >>>>>> next check if it finds out that the table still needs to be vacuumed.\n> >>>>>> But I guess it's not necessarily true because the next table might be\n> >>>>>> vacuumed already. So I think we might want to always use the existing\n> >>>>>> for the first check. What do you think?\n> >>>>> Thanks for your comment.\n> >>>>>\n> >>>>> If we assume the case where some workers vacuum on large tables\n> >>>>> and a single worker vacuum on small tables, the processing\n> >>>>> performance of the single worker will be slightly lower if the\n> >>>>> existing statistics are checked every time.\n> >>>>>\n> >>>>> In fact, at first I tried to check the existing stats every time,\n> >>>>> but the performance was slightly worse in cases with a small number of workers.\n> >>\n> >> Do you have this benchmark result?\n> >>\n> >>\n> >>>>> (Checking the existing stats is lightweight , but at high frequency,\n> >>>>> it affects processing performance.)\n> >>>>> Therefore, at after refresh statistics, determine whether autovac\n> >>>>> should use the existing statistics.\n> >>>>\n> >>>> Yeah, since the test you used uses a lot of small tables, if there are\n> >>>> a few workers, checking the existing stats is unlikely to return true\n> >>>> (no need to vacuum). So the cost of existing stats check ends up being\n> >>>> overhead. Not sure how slow always checking the existing stats was,\n> >>>> but given that the shared memory based stats collector patch could\n> >>>> improve the performance of refreshing stats, it might be better not to\n> >>>> check the existing stats frequently like the patch does. Anyway, I\n> >>>> think it’s better to evaluate the performance improvement with other\n> >>>> cases too.\n> >>> Yeah, I would like to see how much the performance changes in other cases.\n> >>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n> >>> a huge stats file, so we will just have to check the stats on\n> >>> shared-mem every time.\n> >>> Perhaps the logic of table_recheck_autovac could be simpler.\n> >>>\n> >>>>> BTW, I found some typos in comments, so attache a fixed version.\n> >>\n> >> The patch adds some duplicated codes into table_recheck_autovac().\n> >> It's better to make the common function performing them and make\n> >> table_recheck_autovac() call that common function, to simplify the code.\n> > Thanks for your comment.\n> > Hmm.. I've cut out the duplicate part.\n> > Attach the patch.\n> > Could you confirm that it fits your expecting?\n>\n> Yes, thanks for updataing the patch! Here are another review comments.\n>\n> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n>\n> When using the existing stats, ISTM that these are not necessary and\n> we can reuse \"shared\" and \"dbentry\" obtained before. 
Right?\nYeah, but unless autovac_refresh_stats() is called, these functions\nread the information from the\nlocal hash table without re-read the stats file, so the process is very light.\nTherefore, I think, it is better to keep the current logic to keep the\ncode simple.\n\n>\n> + /* We might be better to refresh stats */\n> + use_existing_stats = false;\n>\n> I think that we should add more comments about why it's better to\n> refresh the stats in this case.\n>\n> + /* The relid has already vacuumed, so we might be better to use existing stats */\n> + use_existing_stats = true;\n>\n> I think that we should add more comments about why it's better to\n> reuse the stats in this case.\nI added comments.\n\nAttache the patch.\n\nBest regards,\n\n>\n> Regards,\n>\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com", "msg_date": "Sun, 29 Nov 2020 22:34:16 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "On Sun, Nov 29, 2020 at 10:34 PM Kasahara Tatsuhito\n<kasahara.tatsuhito@gmail.com> wrote:\n>\n> Hi, Thanks for you comments.\n>\n> On Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> >\n> >\n> > On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n> > > Hi,\n> > >\n> > > On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > >>\n> > >>\n> > >>\n> > >> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n> > >>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >>>>\n> > >>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n> > >>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>\n> > >>>>> Hi,\n> > >>>>>\n> > >>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >>>>>>\n> > >>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> > >>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>>>\n> > >>>>>>> Hi,\n> > >>>>>>>\n> > >>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> > >>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n> > >>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n> > >>>>>>>>> be already vacuumed, return immediately. 
If not, *then* force a stats\n> > >>>>>>>>> re-read, and check a second time.\n> > >>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n> > >>>>>>>> will be improved to first check using the previous refreshed statistics?\n> > >>>>>>>> I think that certainly works.\n> > >>>>>>>>\n> > >>>>>>>> If that's correct, I'll try to create a patch for the PoC\n> > >>>>>>>\n> > >>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> > >>>>>>> what was probably a very similar problem.\n> > >>>>>>>\n> > >>>>>>> This problem seems to be more likely to occur in cases where you have\n> > >>>>>>> a large number of tables,\n> > >>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n> > >>>>>>> the same time.\n> > >>>>>>>\n> > >>>>>>> So I followed Tom's advice and created a patch for the PoC.\n> > >>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n> > >>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> > >>>>>>> by another worker on the check after the stats have been updated.\n> > >>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n> > >>>>>>> will be required instead of using the existing statistics.\n> > >>>>>>>\n> > >>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n> > >>>>>>> The tests were conducted in two cases.\n> > >>>>>>> (I changed few configurations. see attached scripts)\n> > >>>>>>>\n> > >>>>>>> 1. Normal VACUUM case\n> > >>>>>>> - SET autovacuum = off\n> > >>>>>>> - CREATE tables with 100 rows\n> > >>>>>>> - DELETE 90 rows for each tables\n> > >>>>>>> - SET autovacuum = on and restart PostgreSQL\n> > >>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> > >>>>>>>\n> > >>>>>>> 2. 
Anti wrap round VACUUM case\n> > >>>>>>> - CREATE brank tables\n> > >>>>>>> - SELECT all of these tables (for generate stats)\n> > >>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> > >>>>>>> - Consumes a lot of XIDs by using txid_curent()\n> > >>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> > >>>>>>>\n> > >>>>>>> For each test case, the following results were obtained by changing\n> > >>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> > >>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n> > >>>>>>>\n> > >>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> > >>>>>>> but I think it's enough to ask for a trend.\n> > >>>>>>>\n> > >>>>>>> ===========================================================================\n> > >>>>>>> [1.Normal VACUUM case]\n> > >>>>>>> tables:1000\n> > >>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> > >>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> > >>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> > >>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> > >>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> > >>>>>>>\n> > >>>>>>> tables:5000\n> > >>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> > >>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> > >>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> > >>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> > >>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> > >>>>>>>\n> > >>>>>>> tables:10000\n> > >>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> > >>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> > >>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> > >>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> > >>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> > >>>>>>>\n> > >>>>>>> tables:20000\n> > >>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> > >>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> > >>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> > >>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> > >>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> > >>>>>>>\n> > >>>>>>> [2.Anti wrap round VACUUM case]\n> > >>>>>>> tables:1000\n> > >>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> > >>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> > >>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> > >>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> > >>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> > >>>>>>>\n> > >>>>>>> tables:5000\n> > >>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> > >>>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> > >>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> > >>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> > >>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> > >>>>>>>\n> > >>>>>>> tables:10000\n> > >>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 
138 sec\n> > >>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> > >>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> > >>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> > >>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> > >>>>>>>\n> > >>>>>>> tables:20000\n> > >>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> > >>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> > >>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> > >>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> > >>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> > >>>>>>> ===========================================================================\n> > >>>>>>>\n> > >>>>>>> The cases without patch, the scalability of the worker has decreased\n> > >>>>>>> as the number of tables has increased.\n> > >>>>>>> In fact, the more workers there are, the longer it takes to complete\n> > >>>>>>> VACUUM to all tables.\n> > >>>>>>> The cases with patch, it shows good scalability with respect to the\n> > >>>>>>> number of workers.\n> > >>>>>>\n> > >>>>>> It seems a good performance improvement even without the patch of\n> > >>>>>> shared memory based stats collector.\n> > >>\n> > >> Sounds great!\n> > >>\n> > >>\n> > >>>>>>\n> > >>>>>>>\n> > >>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n> > >>>>>>> hash_seq_search and\n> > >>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> > >>>>>>> with or without the patch.\n> > >>>>>>>\n> > >>>>>>> Therefore, there is still a need to find ways to optimize the reading\n> > >>>>>>> of large amounts of stats.\n> > >>>>>>> However, this patch is effective in its own right, and since there are\n> > >>>>>>> only a few parts to modify,\n> > >>>>>>> I think it should be able to be applied to current (preferably\n> > >>>>>>> pre-v13) PostgreSQL.\n> > >>>>>>\n> > >>>>>> +1\n> > >>>>>>\n> > >>>>>> +\n> > >>>>>> + /* We might be better to refresh stats */\n> > >>>>>> + use_existing_stats = false;\n> > >>>>>> }\n> > >>>>>> + else\n> > >>>>>> + {\n> > >>>>>>\n> > >>>>>> - heap_freetuple(classTup);\n> > >>>>>> + heap_freetuple(classTup);\n> > >>>>>> + /* The relid has already vacuumed, so we might be better to\n> > >>>>>> use exiting stats */\n> > >>>>>> + use_existing_stats = true;\n> > >>>>>> + }\n> > >>>>>>\n> > >>>>>> With that patch, the autovacuum process refreshes the stats in the\n> > >>>>>> next check if it finds out that the table still needs to be vacuumed.\n> > >>>>>> But I guess it's not necessarily true because the next table might be\n> > >>>>>> vacuumed already. So I think we might want to always use the existing\n> > >>>>>> for the first check. 
What do you think?\n> > >>>>> Thanks for your comment.\n> > >>>>>\n> > >>>>> If we assume the case where some workers vacuum on large tables\n> > >>>>> and a single worker vacuum on small tables, the processing\n> > >>>>> performance of the single worker will be slightly lower if the\n> > >>>>> existing statistics are checked every time.\n> > >>>>>\n> > >>>>> In fact, at first I tried to check the existing stats every time,\n> > >>>>> but the performance was slightly worse in cases with a small number of workers.\n> > >>\n> > >> Do you have this benchmark result?\n> > >>\n> > >>\n> > >>>>> (Checking the existing stats is lightweight , but at high frequency,\n> > >>>>> it affects processing performance.)\n> > >>>>> Therefore, at after refresh statistics, determine whether autovac\n> > >>>>> should use the existing statistics.\n> > >>>>\n> > >>>> Yeah, since the test you used uses a lot of small tables, if there are\n> > >>>> a few workers, checking the existing stats is unlikely to return true\n> > >>>> (no need to vacuum). So the cost of existing stats check ends up being\n> > >>>> overhead. Not sure how slow always checking the existing stats was,\n> > >>>> but given that the shared memory based stats collector patch could\n> > >>>> improve the performance of refreshing stats, it might be better not to\n> > >>>> check the existing stats frequently like the patch does. Anyway, I\n> > >>>> think it’s better to evaluate the performance improvement with other\n> > >>>> cases too.\n> > >>> Yeah, I would like to see how much the performance changes in other cases.\n> > >>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n> > >>> a huge stats file, so we will just have to check the stats on\n> > >>> shared-mem every time.\n> > >>> Perhaps the logic of table_recheck_autovac could be simpler.\n> > >>>\n> > >>>>> BTW, I found some typos in comments, so attache a fixed version.\n> > >>\n> > >> The patch adds some duplicated codes into table_recheck_autovac().\n> > >> It's better to make the common function performing them and make\n> > >> table_recheck_autovac() call that common function, to simplify the code.\n> > > Thanks for your comment.\n> > > Hmm.. I've cut out the duplicate part.\n> > > Attach the patch.\n> > > Could you confirm that it fits your expecting?\n> >\n> > Yes, thanks for updataing the patch! Here are another review comments.\n> >\n> > + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n> > + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n> >\n> > When using the existing stats, ISTM that these are not necessary and\n> > we can reuse \"shared\" and \"dbentry\" obtained before. Right?\n> Yeah, but unless autovac_refresh_stats() is called, these functions\n> read the information from the\n> local hash table without re-read the stats file, so the process is very light.\n> Therefore, I think, it is better to keep the current logic to keep the\n> code simple.\n>\n> >\n> > + /* We might be better to refresh stats */\n> > + use_existing_stats = false;\n> >\n> > I think that we should add more comments about why it's better to\n> > refresh the stats in this case.\n> >\n> > + /* The relid has already vacuumed, so we might be better to use existing stats */\n> > + use_existing_stats = true;\n> >\n> > I think that we should add more comments about why it's better to\n> > reuse the stats in this case.\n> I added comments.\n>\n> Attache the patch.\n>\n\nThank you for updating the patch. 
Here are some small comments on the\nlatest (v4) patch.\n\n+ * So if the last time we checked a table that was already vacuumed after\n+ * refres stats, check the current statistics before refreshing it.\n+ */\n\ns/refres/refresh/\n\n-----\n+/* Counter to determine if statistics should be refreshed */\n+static bool use_existing_stats = false;\n+\n\nI think 'use_existing_stats' can be declared within table_recheck_autovac().\n\n-----\nWhile testing the performance, I realized that the statistics are\nreset every time vacuumed one table, leading to re-reading the stats\nfile even if 'use_existing_stats' is true. Please refer that vacuum()\neventually calls AtEOXact_PgStat() which calls to\npgstat_clear_snapshot(). I believe that's why the performance of the\nmethod of always checking the existing stats wasn’t good as expected.\nSo if we save the statistics somewhere and use it for rechecking, the\nresults of the performance benchmark will differ between these two\nmethods.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 30 Nov 2020 10:43:09 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "\n\nOn 2020/11/30 10:43, Masahiko Sawada wrote:\n> On Sun, Nov 29, 2020 at 10:34 PM Kasahara Tatsuhito\n> <kasahara.tatsuhito@gmail.com> wrote:\n>>\n>> Hi, Thanks for you comments.\n>>\n>> On Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>>\n>>>\n>>> On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n>>>> Hi,\n>>>>\n>>>> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>\n>>>>>\n>>>>>\n>>>>> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n>>>>>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>>>>\n>>>>>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>\n>>>>>>>> Hi,\n>>>>>>>>\n>>>>>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>>>>>>\n>>>>>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>\n>>>>>>>>>> Hi,\n>>>>>>>>>>\n>>>>>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n>>>>>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n>>>>>>>>>>>> be already vacuumed, return immediately. 
If not, *then* force a stats\n>>>>>>>>>>>> re-read, and check a second time.\n>>>>>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n>>>>>>>>>>> will be improved to first check using the previous refreshed statistics?\n>>>>>>>>>>> I think that certainly works.\n>>>>>>>>>>>\n>>>>>>>>>>> If that's correct, I'll try to create a patch for the PoC\n>>>>>>>>>>\n>>>>>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n>>>>>>>>>> what was probably a very similar problem.\n>>>>>>>>>>\n>>>>>>>>>> This problem seems to be more likely to occur in cases where you have\n>>>>>>>>>> a large number of tables,\n>>>>>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n>>>>>>>>>> the same time.\n>>>>>>>>>>\n>>>>>>>>>> So I followed Tom's advice and created a patch for the PoC.\n>>>>>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n>>>>>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n>>>>>>>>>> by another worker on the check after the stats have been updated.\n>>>>>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n>>>>>>>>>> will be required instead of using the existing statistics.\n>>>>>>>>>>\n>>>>>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n>>>>>>>>>> The tests were conducted in two cases.\n>>>>>>>>>> (I changed few configurations. see attached scripts)\n>>>>>>>>>>\n>>>>>>>>>> 1. Normal VACUUM case\n>>>>>>>>>> - SET autovacuum = off\n>>>>>>>>>> - CREATE tables with 100 rows\n>>>>>>>>>> - DELETE 90 rows for each tables\n>>>>>>>>>> - SET autovacuum = on and restart PostgreSQL\n>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n>>>>>>>>>>\n>>>>>>>>>> 2. 
Anti wrap round VACUUM case\n>>>>>>>>>> - CREATE brank tables\n>>>>>>>>>> - SELECT all of these tables (for generate stats)\n>>>>>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n>>>>>>>>>> - Consumes a lot of XIDs by using txid_curent()\n>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n>>>>>>>>>>\n>>>>>>>>>> For each test case, the following results were obtained by changing\n>>>>>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n>>>>>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n>>>>>>>>>>\n>>>>>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n>>>>>>>>>> but I think it's enough to ask for a trend.\n>>>>>>>>>>\n>>>>>>>>>> ===========================================================================\n>>>>>>>>>> [1.Normal VACUUM case]\n>>>>>>>>>> tables:1000\n>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n>>>>>>>>>>\n>>>>>>>>>> tables:5000\n>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n>>>>>>>>>>\n>>>>>>>>>> tables:10000\n>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n>>>>>>>>>>\n>>>>>>>>>> tables:20000\n>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n>>>>>>>>>>\n>>>>>>>>>> [2.Anti wrap round VACUUM case]\n>>>>>>>>>> tables:1000\n>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n>>>>>>>>>>\n>>>>>>>>>> tables:5000\n>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n>>>>>>>>>>\n>>>>>>>>>> tables:10000\n>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec 
VS (with patch) 86 sec\n>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n>>>>>>>>>>\n>>>>>>>>>> tables:20000\n>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n>>>>>>>>>> ===========================================================================\n>>>>>>>>>>\n>>>>>>>>>> The cases without patch, the scalability of the worker has decreased\n>>>>>>>>>> as the number of tables has increased.\n>>>>>>>>>> In fact, the more workers there are, the longer it takes to complete\n>>>>>>>>>> VACUUM to all tables.\n>>>>>>>>>> The cases with patch, it shows good scalability with respect to the\n>>>>>>>>>> number of workers.\n>>>>>>>>>\n>>>>>>>>> It seems a good performance improvement even without the patch of\n>>>>>>>>> shared memory based stats collector.\n>>>>>\n>>>>> Sounds great!\n>>>>>\n>>>>>\n>>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n>>>>>>>>>> hash_seq_search and\n>>>>>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n>>>>>>>>>> with or without the patch.\n>>>>>>>>>>\n>>>>>>>>>> Therefore, there is still a need to find ways to optimize the reading\n>>>>>>>>>> of large amounts of stats.\n>>>>>>>>>> However, this patch is effective in its own right, and since there are\n>>>>>>>>>> only a few parts to modify,\n>>>>>>>>>> I think it should be able to be applied to current (preferably\n>>>>>>>>>> pre-v13) PostgreSQL.\n>>>>>>>>>\n>>>>>>>>> +1\n>>>>>>>>>\n>>>>>>>>> +\n>>>>>>>>> + /* We might be better to refresh stats */\n>>>>>>>>> + use_existing_stats = false;\n>>>>>>>>> }\n>>>>>>>>> + else\n>>>>>>>>> + {\n>>>>>>>>>\n>>>>>>>>> - heap_freetuple(classTup);\n>>>>>>>>> + heap_freetuple(classTup);\n>>>>>>>>> + /* The relid has already vacuumed, so we might be better to\n>>>>>>>>> use exiting stats */\n>>>>>>>>> + use_existing_stats = true;\n>>>>>>>>> + }\n>>>>>>>>>\n>>>>>>>>> With that patch, the autovacuum process refreshes the stats in the\n>>>>>>>>> next check if it finds out that the table still needs to be vacuumed.\n>>>>>>>>> But I guess it's not necessarily true because the next table might be\n>>>>>>>>> vacuumed already. So I think we might want to always use the existing\n>>>>>>>>> for the first check. 
What do you think?\n>>>>>>>> Thanks for your comment.\n>>>>>>>>\n>>>>>>>> If we assume the case where some workers vacuum on large tables\n>>>>>>>> and a single worker vacuum on small tables, the processing\n>>>>>>>> performance of the single worker will be slightly lower if the\n>>>>>>>> existing statistics are checked every time.\n>>>>>>>>\n>>>>>>>> In fact, at first I tried to check the existing stats every time,\n>>>>>>>> but the performance was slightly worse in cases with a small number of workers.\n>>>>>\n>>>>> Do you have this benchmark result?\n>>>>>\n>>>>>\n>>>>>>>> (Checking the existing stats is lightweight , but at high frequency,\n>>>>>>>> it affects processing performance.)\n>>>>>>>> Therefore, at after refresh statistics, determine whether autovac\n>>>>>>>> should use the existing statistics.\n>>>>>>>\n>>>>>>> Yeah, since the test you used uses a lot of small tables, if there are\n>>>>>>> a few workers, checking the existing stats is unlikely to return true\n>>>>>>> (no need to vacuum). So the cost of existing stats check ends up being\n>>>>>>> overhead. Not sure how slow always checking the existing stats was,\n>>>>>>> but given that the shared memory based stats collector patch could\n>>>>>>> improve the performance of refreshing stats, it might be better not to\n>>>>>>> check the existing stats frequently like the patch does. Anyway, I\n>>>>>>> think it’s better to evaluate the performance improvement with other\n>>>>>>> cases too.\n>>>>>> Yeah, I would like to see how much the performance changes in other cases.\n>>>>>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n>>>>>> a huge stats file, so we will just have to check the stats on\n>>>>>> shared-mem every time.\n>>>>>> Perhaps the logic of table_recheck_autovac could be simpler.\n>>>>>>\n>>>>>>>> BTW, I found some typos in comments, so attache a fixed version.\n>>>>>\n>>>>> The patch adds some duplicated codes into table_recheck_autovac().\n>>>>> It's better to make the common function performing them and make\n>>>>> table_recheck_autovac() call that common function, to simplify the code.\n>>>> Thanks for your comment.\n>>>> Hmm.. I've cut out the duplicate part.\n>>>> Attach the patch.\n>>>> Could you confirm that it fits your expecting?\n>>>\n>>> Yes, thanks for updataing the patch! Here are another review comments.\n>>>\n>>> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n>>> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n>>>\n>>> When using the existing stats, ISTM that these are not necessary and\n>>> we can reuse \"shared\" and \"dbentry\" obtained before. Right?\n>> Yeah, but unless autovac_refresh_stats() is called, these functions\n>> read the information from the\n>> local hash table without re-read the stats file, so the process is very light.\n>> Therefore, I think, it is better to keep the current logic to keep the\n>> code simple.\n>>\n>>>\n>>> + /* We might be better to refresh stats */\n>>> + use_existing_stats = false;\n>>>\n>>> I think that we should add more comments about why it's better to\n>>> refresh the stats in this case.\n>>>\n>>> + /* The relid has already vacuumed, so we might be better to use existing stats */\n>>> + use_existing_stats = true;\n>>>\n>>> I think that we should add more comments about why it's better to\n>>> reuse the stats in this case.\n>> I added comments.\n>>\n>> Attache the patch.\n>>\n> \n> Thank you for updating the patch. 
Here are some small comments on the\n> latest (v4) patch.\n> \n> + * So if the last time we checked a table that was already vacuumed after\n> + * refres stats, check the current statistics before refreshing it.\n> + */\n> \n> s/refres/refresh/\n> \n> -----\n> +/* Counter to determine if statistics should be refreshed */\n> +static bool use_existing_stats = false;\n> +\n> \n> I think 'use_existing_stats' can be declared within table_recheck_autovac().\n> \n> -----\n> While testing the performance, I realized that the statistics are\n> reset every time vacuumed one table, leading to re-reading the stats\n> file even if 'use_existing_stats' is true. Please refer that vacuum()\n> eventually calls AtEOXact_PgStat() which calls to\n> pgstat_clear_snapshot().\n\nGood catch!\n\n\n> I believe that's why the performance of the\n> method of always checking the existing stats wasn’t good as expected.\n> So if we save the statistics somewhere and use it for rechecking, the\n> results of the performance benchmark will differ between these two\n> methods.\n\nOr it's simpler to make autovacuum worker skip calling\npgstat_clear_snapshot() in AtEOXact_PgStat()?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 30 Nov 2020 20:59:03 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "Hi,\n\nOn Mon, Nov 30, 2020 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/11/30 10:43, Masahiko Sawada wrote:\n> > On Sun, Nov 29, 2020 at 10:34 PM Kasahara Tatsuhito\n> > <kasahara.tatsuhito@gmail.com> wrote:\n> >>\n> >> Hi, Thanks for you comments.\n> >>\n> >> On Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>\n> >>>\n> >>>\n> >>> On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n> >>>> Hi,\n> >>>>\n> >>>> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>\n> >>>>>\n> >>>>>\n> >>>>> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n> >>>>>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>>>>\n> >>>>>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n> >>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>\n> >>>>>>>> Hi,\n> >>>>>>>>\n> >>>>>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>>>>>>\n> >>>>>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> >>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>\n> >>>>>>>>>> Hi,\n> >>>>>>>>>>\n> >>>>>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> >>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n> >>>>>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n> >>>>>>>>>>>> be already vacuumed, return immediately. 
If not, *then* force a stats\n> >>>>>>>>>>>> re-read, and check a second time.\n> >>>>>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n> >>>>>>>>>>> will be improved to first check using the previous refreshed statistics?\n> >>>>>>>>>>> I think that certainly works.\n> >>>>>>>>>>>\n> >>>>>>>>>>> If that's correct, I'll try to create a patch for the PoC\n> >>>>>>>>>>\n> >>>>>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> >>>>>>>>>> what was probably a very similar problem.\n> >>>>>>>>>>\n> >>>>>>>>>> This problem seems to be more likely to occur in cases where you have\n> >>>>>>>>>> a large number of tables,\n> >>>>>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n> >>>>>>>>>> the same time.\n> >>>>>>>>>>\n> >>>>>>>>>> So I followed Tom's advice and created a patch for the PoC.\n> >>>>>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n> >>>>>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> >>>>>>>>>> by another worker on the check after the stats have been updated.\n> >>>>>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n> >>>>>>>>>> will be required instead of using the existing statistics.\n> >>>>>>>>>>\n> >>>>>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n> >>>>>>>>>> The tests were conducted in two cases.\n> >>>>>>>>>> (I changed few configurations. see attached scripts)\n> >>>>>>>>>>\n> >>>>>>>>>> 1. Normal VACUUM case\n> >>>>>>>>>> - SET autovacuum = off\n> >>>>>>>>>> - CREATE tables with 100 rows\n> >>>>>>>>>> - DELETE 90 rows for each tables\n> >>>>>>>>>> - SET autovacuum = on and restart PostgreSQL\n> >>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>>>>>>\n> >>>>>>>>>> 2. 
Anti wrap round VACUUM case\n> >>>>>>>>>> - CREATE brank tables\n> >>>>>>>>>> - SELECT all of these tables (for generate stats)\n> >>>>>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> >>>>>>>>>> - Consumes a lot of XIDs by using txid_curent()\n> >>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>>>>>>\n> >>>>>>>>>> For each test case, the following results were obtained by changing\n> >>>>>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> >>>>>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n> >>>>>>>>>>\n> >>>>>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> >>>>>>>>>> but I think it's enough to ask for a trend.\n> >>>>>>>>>>\n> >>>>>>>>>> ===========================================================================\n> >>>>>>>>>> [1.Normal VACUUM case]\n> >>>>>>>>>> tables:1000\n> >>>>>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>>>>>>\n> >>>>>>>>>> tables:5000\n> >>>>>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> >>>>>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> >>>>>>>>>>\n> >>>>>>>>>> tables:10000\n> >>>>>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> >>>>>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> >>>>>>>>>>\n> >>>>>>>>>> tables:20000\n> >>>>>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> >>>>>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> >>>>>>>>>>\n> >>>>>>>>>> [2.Anti wrap round VACUUM case]\n> >>>>>>>>>> tables:1000\n> >>>>>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> >>>>>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> >>>>>>>>>>\n> >>>>>>>>>> tables:5000\n> >>>>>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> >>>>>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> >>>>>>>>>>\n> >>>>>>>>>> tables:10000\n> 
>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> >>>>>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> >>>>>>>>>>\n> >>>>>>>>>> tables:20000\n> >>>>>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> >>>>>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> >>>>>>>>>> ===========================================================================\n> >>>>>>>>>>\n> >>>>>>>>>> The cases without patch, the scalability of the worker has decreased\n> >>>>>>>>>> as the number of tables has increased.\n> >>>>>>>>>> In fact, the more workers there are, the longer it takes to complete\n> >>>>>>>>>> VACUUM to all tables.\n> >>>>>>>>>> The cases with patch, it shows good scalability with respect to the\n> >>>>>>>>>> number of workers.\n> >>>>>>>>>\n> >>>>>>>>> It seems a good performance improvement even without the patch of\n> >>>>>>>>> shared memory based stats collector.\n> >>>>>\n> >>>>> Sounds great!\n> >>>>>\n> >>>>>\n> >>>>>>>>>\n> >>>>>>>>>>\n> >>>>>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n> >>>>>>>>>> hash_seq_search and\n> >>>>>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> >>>>>>>>>> with or without the patch.\n> >>>>>>>>>>\n> >>>>>>>>>> Therefore, there is still a need to find ways to optimize the reading\n> >>>>>>>>>> of large amounts of stats.\n> >>>>>>>>>> However, this patch is effective in its own right, and since there are\n> >>>>>>>>>> only a few parts to modify,\n> >>>>>>>>>> I think it should be able to be applied to current (preferably\n> >>>>>>>>>> pre-v13) PostgreSQL.\n> >>>>>>>>>\n> >>>>>>>>> +1\n> >>>>>>>>>\n> >>>>>>>>> +\n> >>>>>>>>> + /* We might be better to refresh stats */\n> >>>>>>>>> + use_existing_stats = false;\n> >>>>>>>>> }\n> >>>>>>>>> + else\n> >>>>>>>>> + {\n> >>>>>>>>>\n> >>>>>>>>> - heap_freetuple(classTup);\n> >>>>>>>>> + heap_freetuple(classTup);\n> >>>>>>>>> + /* The relid has already vacuumed, so we might be better to\n> >>>>>>>>> use exiting stats */\n> >>>>>>>>> + use_existing_stats = true;\n> >>>>>>>>> + }\n> >>>>>>>>>\n> >>>>>>>>> With that patch, the autovacuum process refreshes the stats in the\n> >>>>>>>>> next check if it finds out that the table still needs to be vacuumed.\n> >>>>>>>>> But I guess it's not necessarily true because the next table might be\n> >>>>>>>>> vacuumed already. So I think we might want to always use the existing\n> >>>>>>>>> for the first check. 
What do you think?\n> >>>>>>>> Thanks for your comment.\n> >>>>>>>>\n> >>>>>>>> If we assume the case where some workers vacuum on large tables\n> >>>>>>>> and a single worker vacuum on small tables, the processing\n> >>>>>>>> performance of the single worker will be slightly lower if the\n> >>>>>>>> existing statistics are checked every time.\n> >>>>>>>>\n> >>>>>>>> In fact, at first I tried to check the existing stats every time,\n> >>>>>>>> but the performance was slightly worse in cases with a small number of workers.\n> >>>>>\n> >>>>> Do you have this benchmark result?\n> >>>>>\n> >>>>>\n> >>>>>>>> (Checking the existing stats is lightweight , but at high frequency,\n> >>>>>>>> it affects processing performance.)\n> >>>>>>>> Therefore, at after refresh statistics, determine whether autovac\n> >>>>>>>> should use the existing statistics.\n> >>>>>>>\n> >>>>>>> Yeah, since the test you used uses a lot of small tables, if there are\n> >>>>>>> a few workers, checking the existing stats is unlikely to return true\n> >>>>>>> (no need to vacuum). So the cost of existing stats check ends up being\n> >>>>>>> overhead. Not sure how slow always checking the existing stats was,\n> >>>>>>> but given that the shared memory based stats collector patch could\n> >>>>>>> improve the performance of refreshing stats, it might be better not to\n> >>>>>>> check the existing stats frequently like the patch does. Anyway, I\n> >>>>>>> think it’s better to evaluate the performance improvement with other\n> >>>>>>> cases too.\n> >>>>>> Yeah, I would like to see how much the performance changes in other cases.\n> >>>>>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n> >>>>>> a huge stats file, so we will just have to check the stats on\n> >>>>>> shared-mem every time.\n> >>>>>> Perhaps the logic of table_recheck_autovac could be simpler.\n> >>>>>>\n> >>>>>>>> BTW, I found some typos in comments, so attache a fixed version.\n> >>>>>\n> >>>>> The patch adds some duplicated codes into table_recheck_autovac().\n> >>>>> It's better to make the common function performing them and make\n> >>>>> table_recheck_autovac() call that common function, to simplify the code.\n> >>>> Thanks for your comment.\n> >>>> Hmm.. I've cut out the duplicate part.\n> >>>> Attach the patch.\n> >>>> Could you confirm that it fits your expecting?\n> >>>\n> >>> Yes, thanks for updataing the patch! Here are another review comments.\n> >>>\n> >>> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n> >>> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n> >>>\n> >>> When using the existing stats, ISTM that these are not necessary and\n> >>> we can reuse \"shared\" and \"dbentry\" obtained before. 
Right?\n> >> Yeah, but unless autovac_refresh_stats() is called, these functions\n> >> read the information from the\n> >> local hash table without re-read the stats file, so the process is very light.\n> >> Therefore, I think, it is better to keep the current logic to keep the\n> >> code simple.\n> >>\n> >>>\n> >>> + /* We might be better to refresh stats */\n> >>> + use_existing_stats = false;\n> >>>\n> >>> I think that we should add more comments about why it's better to\n> >>> refresh the stats in this case.\n> >>>\n> >>> + /* The relid has already vacuumed, so we might be better to use existing stats */\n> >>> + use_existing_stats = true;\n> >>>\n> >>> I think that we should add more comments about why it's better to\n> >>> reuse the stats in this case.\n> >> I added comments.\n> >>\n> >> Attache the patch.\n> >>\n> >\n> > Thank you for updating the patch. Here are some small comments on the\n> > latest (v4) patch.\n> >\n> > + * So if the last time we checked a table that was already vacuumed after\n> > + * refres stats, check the current statistics before refreshing it.\n> > + */\n> >\n> > s/refres/refresh/\nThanks! Fixed.\nAttached the patch.\n\n> >\n> > -----\n> > +/* Counter to determine if statistics should be refreshed */\n> > +static bool use_existing_stats = false;\n> > +\n> >\n> > I think 'use_existing_stats' can be declared within table_recheck_autovac().\n> >\n> > -----\n> > While testing the performance, I realized that the statistics are\n> > reset every time vacuumed one table, leading to re-reading the stats\n> > file even if 'use_existing_stats' is true. Please refer that vacuum()\n> > eventually calls AtEOXact_PgStat() which calls to\n> > pgstat_clear_snapshot().\n>\n> Good catch!\n>\n>\n> > I believe that's why the performance of the\n> > method of always checking the existing stats wasn’t good as expected.\n> > So if we save the statistics somewhere and use it for rechecking, the\n> > results of the performance benchmark will differ between these two\n> > methods.\nThanks for your checks.\nBut if a worker did vacuum(), that means this worker had determined\nthat the table needed vacuuming in\ntable_recheck_autovac(). So use_existing_stats is set to false, and next\ntime the stats are refreshed.\nTherefore I think the current patch is fine, as we want to avoid\nunnecessary refreshing of\nstatistics before the actual vacuum(), right?\n\n> Or it's simpler to make autovacuum worker skip calling\n> pgstat_clear_snapshot() in AtEOXact_PgStat()?\nHmm. 
IMO the side effects are a bit scary, so I think it's fine the way it is.\n\nBest regards,\n\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com", "msg_date": "Tue, 1 Dec 2020 13:48:41 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "On Tue, Dec 1, 2020 at 1:48 PM Kasahara Tatsuhito\n<kasahara.tatsuhito@gmail.com> wrote:\n>\n> Hi,\n>\n> On Mon, Nov 30, 2020 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> >\n> >\n> > On 2020/11/30 10:43, Masahiko Sawada wrote:\n> > > On Sun, Nov 29, 2020 at 10:34 PM Kasahara Tatsuhito\n> > > <kasahara.tatsuhito@gmail.com> wrote:\n> > >>\n> > >> Hi, Thanks for you comments.\n> > >>\n> > >> On Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > >>>\n> > >>>\n> > >>>\n> > >>> On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n> > >>>> Hi,\n> > >>>>\n> > >>>> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > >>>>>\n> > >>>>>\n> > >>>>>\n> > >>>>> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n> > >>>>>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >>>>>>>\n> > >>>>>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n> > >>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>>>>\n> > >>>>>>>> Hi,\n> > >>>>>>>>\n> > >>>>>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >>>>>>>>>\n> > >>>>>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> > >>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>>>>>>\n> > >>>>>>>>>> Hi,\n> > >>>>>>>>>>\n> > >>>>>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> > >>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n> > >>>>>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n> > >>>>>>>>>>>> be already vacuumed, return immediately. 
If not, *then* force a stats\n> > >>>>>>>>>>>> re-read, and check a second time.\n> > >>>>>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n> > >>>>>>>>>>> will be improved to first check using the previous refreshed statistics?\n> > >>>>>>>>>>> I think that certainly works.\n> > >>>>>>>>>>>\n> > >>>>>>>>>>> If that's correct, I'll try to create a patch for the PoC\n> > >>>>>>>>>>\n> > >>>>>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> > >>>>>>>>>> what was probably a very similar problem.\n> > >>>>>>>>>>\n> > >>>>>>>>>> This problem seems to be more likely to occur in cases where you have\n> > >>>>>>>>>> a large number of tables,\n> > >>>>>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n> > >>>>>>>>>> the same time.\n> > >>>>>>>>>>\n> > >>>>>>>>>> So I followed Tom's advice and created a patch for the PoC.\n> > >>>>>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n> > >>>>>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> > >>>>>>>>>> by another worker on the check after the stats have been updated.\n> > >>>>>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n> > >>>>>>>>>> will be required instead of using the existing statistics.\n> > >>>>>>>>>>\n> > >>>>>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n> > >>>>>>>>>> The tests were conducted in two cases.\n> > >>>>>>>>>> (I changed few configurations. see attached scripts)\n> > >>>>>>>>>>\n> > >>>>>>>>>> 1. Normal VACUUM case\n> > >>>>>>>>>> - SET autovacuum = off\n> > >>>>>>>>>> - CREATE tables with 100 rows\n> > >>>>>>>>>> - DELETE 90 rows for each tables\n> > >>>>>>>>>> - SET autovacuum = on and restart PostgreSQL\n> > >>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> > >>>>>>>>>>\n> > >>>>>>>>>> 2. 
Anti wrap round VACUUM case\n> > >>>>>>>>>> - CREATE brank tables\n> > >>>>>>>>>> - SELECT all of these tables (for generate stats)\n> > >>>>>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> > >>>>>>>>>> - Consumes a lot of XIDs by using txid_curent()\n> > >>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> > >>>>>>>>>>\n> > >>>>>>>>>> For each test case, the following results were obtained by changing\n> > >>>>>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> > >>>>>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n> > >>>>>>>>>>\n> > >>>>>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> > >>>>>>>>>> but I think it's enough to ask for a trend.\n> > >>>>>>>>>>\n> > >>>>>>>>>> ===========================================================================\n> > >>>>>>>>>> [1.Normal VACUUM case]\n> > >>>>>>>>>> tables:1000\n> > >>>>>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> > >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> > >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> > >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> > >>>>>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> > >>>>>>>>>>\n> > >>>>>>>>>> tables:5000\n> > >>>>>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> > >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> > >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> > >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> > >>>>>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> > >>>>>>>>>>\n> > >>>>>>>>>> tables:10000\n> > >>>>>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> > >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> > >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> > >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> > >>>>>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> > >>>>>>>>>>\n> > >>>>>>>>>> tables:20000\n> > >>>>>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> > >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> > >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> > >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> > >>>>>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> > >>>>>>>>>>\n> > >>>>>>>>>> [2.Anti wrap round VACUUM case]\n> > >>>>>>>>>> tables:1000\n> > >>>>>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> > >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> > >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> > >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> > >>>>>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> > >>>>>>>>>>\n> > >>>>>>>>>> tables:5000\n> > >>>>>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> > >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> > >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> > >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> > >>>>>>>>>> 
autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> > >>>>>>>>>>\n> > >>>>>>>>>> tables:10000\n> > >>>>>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> > >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> > >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> > >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> > >>>>>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> > >>>>>>>>>>\n> > >>>>>>>>>> tables:20000\n> > >>>>>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> > >>>>>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> > >>>>>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> > >>>>>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> > >>>>>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> > >>>>>>>>>> ===========================================================================\n> > >>>>>>>>>>\n> > >>>>>>>>>> The cases without patch, the scalability of the worker has decreased\n> > >>>>>>>>>> as the number of tables has increased.\n> > >>>>>>>>>> In fact, the more workers there are, the longer it takes to complete\n> > >>>>>>>>>> VACUUM to all tables.\n> > >>>>>>>>>> The cases with patch, it shows good scalability with respect to the\n> > >>>>>>>>>> number of workers.\n> > >>>>>>>>>\n> > >>>>>>>>> It seems a good performance improvement even without the patch of\n> > >>>>>>>>> shared memory based stats collector.\n> > >>>>>\n> > >>>>> Sounds great!\n> > >>>>>\n> > >>>>>\n> > >>>>>>>>>\n> > >>>>>>>>>>\n> > >>>>>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n> > >>>>>>>>>> hash_seq_search and\n> > >>>>>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> > >>>>>>>>>> with or without the patch.\n> > >>>>>>>>>>\n> > >>>>>>>>>> Therefore, there is still a need to find ways to optimize the reading\n> > >>>>>>>>>> of large amounts of stats.\n> > >>>>>>>>>> However, this patch is effective in its own right, and since there are\n> > >>>>>>>>>> only a few parts to modify,\n> > >>>>>>>>>> I think it should be able to be applied to current (preferably\n> > >>>>>>>>>> pre-v13) PostgreSQL.\n> > >>>>>>>>>\n> > >>>>>>>>> +1\n> > >>>>>>>>>\n> > >>>>>>>>> +\n> > >>>>>>>>> + /* We might be better to refresh stats */\n> > >>>>>>>>> + use_existing_stats = false;\n> > >>>>>>>>> }\n> > >>>>>>>>> + else\n> > >>>>>>>>> + {\n> > >>>>>>>>>\n> > >>>>>>>>> - heap_freetuple(classTup);\n> > >>>>>>>>> + heap_freetuple(classTup);\n> > >>>>>>>>> + /* The relid has already vacuumed, so we might be better to\n> > >>>>>>>>> use exiting stats */\n> > >>>>>>>>> + use_existing_stats = true;\n> > >>>>>>>>> + }\n> > >>>>>>>>>\n> > >>>>>>>>> With that patch, the autovacuum process refreshes the stats in the\n> > >>>>>>>>> next check if it finds out that the table still needs to be vacuumed.\n> > >>>>>>>>> But I guess it's not necessarily true because the next table might be\n> > >>>>>>>>> vacuumed already. So I think we might want to always use the existing\n> > >>>>>>>>> for the first check. 
What do you think?\n> > >>>>>>>> Thanks for your comment.\n> > >>>>>>>>\n> > >>>>>>>> If we assume the case where some workers vacuum on large tables\n> > >>>>>>>> and a single worker vacuum on small tables, the processing\n> > >>>>>>>> performance of the single worker will be slightly lower if the\n> > >>>>>>>> existing statistics are checked every time.\n> > >>>>>>>>\n> > >>>>>>>> In fact, at first I tried to check the existing stats every time,\n> > >>>>>>>> but the performance was slightly worse in cases with a small number of workers.\n> > >>>>>\n> > >>>>> Do you have this benchmark result?\n> > >>>>>\n> > >>>>>\n> > >>>>>>>> (Checking the existing stats is lightweight , but at high frequency,\n> > >>>>>>>> it affects processing performance.)\n> > >>>>>>>> Therefore, at after refresh statistics, determine whether autovac\n> > >>>>>>>> should use the existing statistics.\n> > >>>>>>>\n> > >>>>>>> Yeah, since the test you used uses a lot of small tables, if there are\n> > >>>>>>> a few workers, checking the existing stats is unlikely to return true\n> > >>>>>>> (no need to vacuum). So the cost of existing stats check ends up being\n> > >>>>>>> overhead. Not sure how slow always checking the existing stats was,\n> > >>>>>>> but given that the shared memory based stats collector patch could\n> > >>>>>>> improve the performance of refreshing stats, it might be better not to\n> > >>>>>>> check the existing stats frequently like the patch does. Anyway, I\n> > >>>>>>> think it’s better to evaluate the performance improvement with other\n> > >>>>>>> cases too.\n> > >>>>>> Yeah, I would like to see how much the performance changes in other cases.\n> > >>>>>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n> > >>>>>> a huge stats file, so we will just have to check the stats on\n> > >>>>>> shared-mem every time.\n> > >>>>>> Perhaps the logic of table_recheck_autovac could be simpler.\n> > >>>>>>\n> > >>>>>>>> BTW, I found some typos in comments, so attache a fixed version.\n> > >>>>>\n> > >>>>> The patch adds some duplicated codes into table_recheck_autovac().\n> > >>>>> It's better to make the common function performing them and make\n> > >>>>> table_recheck_autovac() call that common function, to simplify the code.\n> > >>>> Thanks for your comment.\n> > >>>> Hmm.. I've cut out the duplicate part.\n> > >>>> Attach the patch.\n> > >>>> Could you confirm that it fits your expecting?\n> > >>>\n> > >>> Yes, thanks for updataing the patch! Here are another review comments.\n> > >>>\n> > >>> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n> > >>> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n> > >>>\n> > >>> When using the existing stats, ISTM that these are not necessary and\n> > >>> we can reuse \"shared\" and \"dbentry\" obtained before. 
Right?\n> > >> Yeah, but unless autovac_refresh_stats() is called, these functions\n> > >> read the information from the\n> > >> local hash table without re-read the stats file, so the process is very light.\n> > >> Therefore, I think, it is better to keep the current logic to keep the\n> > >> code simple.\n> > >>\n> > >>>\n> > >>> + /* We might be better to refresh stats */\n> > >>> + use_existing_stats = false;\n> > >>>\n> > >>> I think that we should add more comments about why it's better to\n> > >>> refresh the stats in this case.\n> > >>>\n> > >>> + /* The relid has already vacuumed, so we might be better to use existing stats */\n> > >>> + use_existing_stats = true;\n> > >>>\n> > >>> I think that we should add more comments about why it's better to\n> > >>> reuse the stats in this case.\n> > >> I added comments.\n> > >>\n> > >> Attache the patch.\n> > >>\n> > >\n> > > Thank you for updating the patch. Here are some small comments on the\n> > > latest (v4) patch.\n> > >\n> > > + * So if the last time we checked a table that was already vacuumed after\n> > > + * refres stats, check the current statistics before refreshing it.\n> > > + */\n> > >\n> > > s/refres/refresh/\n> Thanks! fixed.\n> Attached the patch.\n>\n> > >\n> > > -----\n> > > +/* Counter to determine if statistics should be refreshed */\n> > > +static bool use_existing_stats = false;\n> > > +\n> > >\n> > > I think 'use_existing_stats' can be declared within table_recheck_autovac().\n> > >\n> > > -----\n> > > While testing the performance, I realized that the statistics are\n> > > reset every time vacuumed one table, leading to re-reading the stats\n> > > file even if 'use_existing_stats' is true. Please refer that vacuum()\n> > > eventually calls AtEOXact_PgStat() which calls to\n> > > pgstat_clear_snapshot().\n> >\n> > Good catch!\n> >\n> >\n> > > I believe that's why the performance of the\n> > > method of always checking the existing stats wasn’t good as expected.\n> > > So if we save the statistics somewhere and use it for rechecking, the\n> > > results of the performance benchmark will differ between these two\n> > > methods.\n> Thanks for you checks.\n> But, if a worker did vacuum(), that means this worker had determined\n> need vacuum in the\n> table_recheck_autovac(). So, use_existing_stats set to false, and next\n> time, refresh stats.\n> Therefore I think the current patch is fine, as we want to avoid\n> unnecessary refreshing of\n> statistics before the actual vacuum(), right?\n\nYes, you're right.\n\nWhen I benchmarked the performance of the method of always checking\nexisting stats I edited your patch so that it sets use_existing_stats\n= true even if the second check is false (i.g., vacuum is needed).\nAnd the result I got was worse than expected especially in the case of\na few autovacuum workers. But it doesn't evaluate the performance of\nthat method rightly as the stats snapshot is cleared every time\nvacuum. Given you had similar results, I guess you used a similar way\nwhen evaluating it, is it right? 
If so, it’s better to fix this issue\nand see how the performance benchmark results will differ.\n\nFor example, the results of the test case with 10000 tables and 1\nautovacuum worker I reported before was:\n\n10000 tables:\n autovac_workers 1 : 158s,157s, 290s\n\nBut after fixing that issue in the third method (always checking the\nexisting stats), the results are:\n\n10000 tables:\n autovac_workers 1 : 157s,157s, 160s\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 1 Dec 2020 16:23:36 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "\n\nOn 2020/12/01 16:23, Masahiko Sawada wrote:\n> On Tue, Dec 1, 2020 at 1:48 PM Kasahara Tatsuhito\n> <kasahara.tatsuhito@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> On Mon, Nov 30, 2020 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>>\n>>>\n>>> On 2020/11/30 10:43, Masahiko Sawada wrote:\n>>>> On Sun, Nov 29, 2020 at 10:34 PM Kasahara Tatsuhito\n>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>\n>>>>> Hi, Thanks for you comments.\n>>>>>\n>>>>> On Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n>>>>>>> Hi,\n>>>>>>>\n>>>>>>> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n>>>>>>>>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>>>>>>>\n>>>>>>>>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>>\n>>>>>>>>>>> Hi,\n>>>>>>>>>>>\n>>>>>>>>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>>>>>>>>>\n>>>>>>>>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> Hi,\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n>>>>>>>>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n>>>>>>>>>>>>>>> be already vacuumed, return immediately. 
If not, *then* force a stats\n>>>>>>>>>>>>>>> re-read, and check a second time.\n>>>>>>>>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n>>>>>>>>>>>>>> will be improved to first check using the previous refreshed statistics?\n>>>>>>>>>>>>>> I think that certainly works.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> If that's correct, I'll try to create a patch for the PoC\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n>>>>>>>>>>>>> what was probably a very similar problem.\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> This problem seems to be more likely to occur in cases where you have\n>>>>>>>>>>>>> a large number of tables,\n>>>>>>>>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n>>>>>>>>>>>>> the same time.\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> So I followed Tom's advice and created a patch for the PoC.\n>>>>>>>>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n>>>>>>>>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n>>>>>>>>>>>>> by another worker on the check after the stats have been updated.\n>>>>>>>>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n>>>>>>>>>>>>> will be required instead of using the existing statistics.\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n>>>>>>>>>>>>> The tests were conducted in two cases.\n>>>>>>>>>>>>> (I changed few configurations. see attached scripts)\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> 1. Normal VACUUM case\n>>>>>>>>>>>>> - SET autovacuum = off\n>>>>>>>>>>>>> - CREATE tables with 100 rows\n>>>>>>>>>>>>> - DELETE 90 rows for each tables\n>>>>>>>>>>>>> - SET autovacuum = on and restart PostgreSQL\n>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> 2. 
Anti wrap round VACUUM case\n>>>>>>>>>>>>> - CREATE brank tables\n>>>>>>>>>>>>> - SELECT all of these tables (for generate stats)\n>>>>>>>>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n>>>>>>>>>>>>> - Consumes a lot of XIDs by using txid_curent()\n>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> For each test case, the following results were obtained by changing\n>>>>>>>>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n>>>>>>>>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n>>>>>>>>>>>>> but I think it's enough to ask for a trend.\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> ===========================================================================\n>>>>>>>>>>>>> [1.Normal VACUUM case]\n>>>>>>>>>>>>> tables:1000\n>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> tables:5000\n>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> tables:10000\n>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> tables:20000\n>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> [2.Anti wrap round VACUUM case]\n>>>>>>>>>>>>> tables:1000\n>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> tables:5000\n>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 
sec\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> tables:10000\n>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> tables:20000\n>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n>>>>>>>>>>>>> ===========================================================================\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> The cases without patch, the scalability of the worker has decreased\n>>>>>>>>>>>>> as the number of tables has increased.\n>>>>>>>>>>>>> In fact, the more workers there are, the longer it takes to complete\n>>>>>>>>>>>>> VACUUM to all tables.\n>>>>>>>>>>>>> The cases with patch, it shows good scalability with respect to the\n>>>>>>>>>>>>> number of workers.\n>>>>>>>>>>>>\n>>>>>>>>>>>> It seems a good performance improvement even without the patch of\n>>>>>>>>>>>> shared memory based stats collector.\n>>>>>>>>\n>>>>>>>> Sounds great!\n>>>>>>>>\n>>>>>>>>\n>>>>>>>>>>>>\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n>>>>>>>>>>>>> hash_seq_search and\n>>>>>>>>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n>>>>>>>>>>>>> with or without the patch.\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> Therefore, there is still a need to find ways to optimize the reading\n>>>>>>>>>>>>> of large amounts of stats.\n>>>>>>>>>>>>> However, this patch is effective in its own right, and since there are\n>>>>>>>>>>>>> only a few parts to modify,\n>>>>>>>>>>>>> I think it should be able to be applied to current (preferably\n>>>>>>>>>>>>> pre-v13) PostgreSQL.\n>>>>>>>>>>>>\n>>>>>>>>>>>> +1\n>>>>>>>>>>>>\n>>>>>>>>>>>> +\n>>>>>>>>>>>> + /* We might be better to refresh stats */\n>>>>>>>>>>>> + use_existing_stats = false;\n>>>>>>>>>>>> }\n>>>>>>>>>>>> + else\n>>>>>>>>>>>> + {\n>>>>>>>>>>>>\n>>>>>>>>>>>> - heap_freetuple(classTup);\n>>>>>>>>>>>> + heap_freetuple(classTup);\n>>>>>>>>>>>> + /* The relid has already vacuumed, so we might be better to\n>>>>>>>>>>>> use exiting stats */\n>>>>>>>>>>>> + use_existing_stats = true;\n>>>>>>>>>>>> + }\n>>>>>>>>>>>>\n>>>>>>>>>>>> With that patch, the autovacuum process refreshes the stats in the\n>>>>>>>>>>>> next check if it finds out that the table still needs to be vacuumed.\n>>>>>>>>>>>> But I guess it's not necessarily true because the next table might be\n>>>>>>>>>>>> vacuumed already. So I think we might want to always use the existing\n>>>>>>>>>>>> for the first check. 
What do you think?\n>>>>>>>>>>> Thanks for your comment.\n>>>>>>>>>>>\n>>>>>>>>>>> If we assume the case where some workers vacuum on large tables\n>>>>>>>>>>> and a single worker vacuum on small tables, the processing\n>>>>>>>>>>> performance of the single worker will be slightly lower if the\n>>>>>>>>>>> existing statistics are checked every time.\n>>>>>>>>>>>\n>>>>>>>>>>> In fact, at first I tried to check the existing stats every time,\n>>>>>>>>>>> but the performance was slightly worse in cases with a small number of workers.\n>>>>>>>>\n>>>>>>>> Do you have this benchmark result?\n>>>>>>>>\n>>>>>>>>\n>>>>>>>>>>> (Checking the existing stats is lightweight , but at high frequency,\n>>>>>>>>>>> it affects processing performance.)\n>>>>>>>>>>> Therefore, at after refresh statistics, determine whether autovac\n>>>>>>>>>>> should use the existing statistics.\n>>>>>>>>>>\n>>>>>>>>>> Yeah, since the test you used uses a lot of small tables, if there are\n>>>>>>>>>> a few workers, checking the existing stats is unlikely to return true\n>>>>>>>>>> (no need to vacuum). So the cost of existing stats check ends up being\n>>>>>>>>>> overhead. Not sure how slow always checking the existing stats was,\n>>>>>>>>>> but given that the shared memory based stats collector patch could\n>>>>>>>>>> improve the performance of refreshing stats, it might be better not to\n>>>>>>>>>> check the existing stats frequently like the patch does. Anyway, I\n>>>>>>>>>> think it’s better to evaluate the performance improvement with other\n>>>>>>>>>> cases too.\n>>>>>>>>> Yeah, I would like to see how much the performance changes in other cases.\n>>>>>>>>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n>>>>>>>>> a huge stats file, so we will just have to check the stats on\n>>>>>>>>> shared-mem every time.\n>>>>>>>>> Perhaps the logic of table_recheck_autovac could be simpler.\n>>>>>>>>>\n>>>>>>>>>>> BTW, I found some typos in comments, so attache a fixed version.\n>>>>>>>>\n>>>>>>>> The patch adds some duplicated codes into table_recheck_autovac().\n>>>>>>>> It's better to make the common function performing them and make\n>>>>>>>> table_recheck_autovac() call that common function, to simplify the code.\n>>>>>>> Thanks for your comment.\n>>>>>>> Hmm.. I've cut out the duplicate part.\n>>>>>>> Attach the patch.\n>>>>>>> Could you confirm that it fits your expecting?\n>>>>>>\n>>>>>> Yes, thanks for updataing the patch! Here are another review comments.\n>>>>>>\n>>>>>> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n>>>>>> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n>>>>>>\n>>>>>> When using the existing stats, ISTM that these are not necessary and\n>>>>>> we can reuse \"shared\" and \"dbentry\" obtained before. 
Right?\n>>>>> Yeah, but unless autovac_refresh_stats() is called, these functions\n>>>>> read the information from the\n>>>>> local hash table without re-read the stats file, so the process is very light.\n>>>>> Therefore, I think, it is better to keep the current logic to keep the\n>>>>> code simple.\n>>>>>\n>>>>>>\n>>>>>> + /* We might be better to refresh stats */\n>>>>>> + use_existing_stats = false;\n>>>>>>\n>>>>>> I think that we should add more comments about why it's better to\n>>>>>> refresh the stats in this case.\n>>>>>>\n>>>>>> + /* The relid has already vacuumed, so we might be better to use existing stats */\n>>>>>> + use_existing_stats = true;\n>>>>>>\n>>>>>> I think that we should add more comments about why it's better to\n>>>>>> reuse the stats in this case.\n>>>>> I added comments.\n>>>>>\n>>>>> Attache the patch.\n>>>>>\n>>>>\n>>>> Thank you for updating the patch. Here are some small comments on the\n>>>> latest (v4) patch.\n>>>>\n>>>> + * So if the last time we checked a table that was already vacuumed after\n>>>> + * refres stats, check the current statistics before refreshing it.\n>>>> + */\n>>>>\n>>>> s/refres/refresh/\n>> Thanks! fixed.\n>> Attached the patch.\n>>\n>>>>\n>>>> -----\n>>>> +/* Counter to determine if statistics should be refreshed */\n>>>> +static bool use_existing_stats = false;\n>>>> +\n>>>>\n>>>> I think 'use_existing_stats' can be declared within table_recheck_autovac().\n>>>>\n>>>> -----\n>>>> While testing the performance, I realized that the statistics are\n>>>> reset every time vacuumed one table, leading to re-reading the stats\n>>>> file even if 'use_existing_stats' is true. Please refer that vacuum()\n>>>> eventually calls AtEOXact_PgStat() which calls to\n>>>> pgstat_clear_snapshot().\n>>>\n>>> Good catch!\n>>>\n>>>\n>>>> I believe that's why the performance of the\n>>>> method of always checking the existing stats wasn’t good as expected.\n>>>> So if we save the statistics somewhere and use it for rechecking, the\n>>>> results of the performance benchmark will differ between these two\n>>>> methods.\n>> Thanks for you checks.\n>> But, if a worker did vacuum(), that means this worker had determined\n>> need vacuum in the\n>> table_recheck_autovac(). So, use_existing_stats set to false, and next\n>> time, refresh stats.\n>> Therefore I think the current patch is fine, as we want to avoid\n>> unnecessary refreshing of\n>> statistics before the actual vacuum(), right?\n> \n> Yes, you're right.\n> \n> When I benchmarked the performance of the method of always checking\n> existing stats I edited your patch so that it sets use_existing_stats\n> = true even if the second check is false (i.g., vacuum is needed).\n> And the result I got was worse than expected especially in the case of\n> a few autovacuum workers. But it doesn't evaluate the performance of\n> that method rightly as the stats snapshot is cleared every time\n> vacuum. Given you had similar results, I guess you used a similar way\n> when evaluating it, is it right? If so, it’s better to fix this issue\n> and see how the performance benchmark results will differ.\n> \n> For example, the results of the test case with 10000 tables and 1\n> autovacuum worker I reported before was:\n> \n> 10000 tables:\n> autovac_workers 1 : 158s,157s, 290s\n> \n> But after fixing that issue in the third method (always checking the\n> existing stats), the results are:\n\nCould you tell me how you fixed that issue? 
You copied the stats to\nsomewhere as you suggested or skipped pgstat_clear_snapshot() as\nI suggested?\n\nKasahara-san seems not to like the latter idea because it might\ncause bad side effect. So we should use the former idea?\n\n> \n> 10000 tables:\n> autovac_workers 1 : 157s,157s, 160s\n\nLooks good number!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 1 Dec 2020 16:32:20 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "On Tue, Dec 1, 2020 at 4:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/12/01 16:23, Masahiko Sawada wrote:\n> > On Tue, Dec 1, 2020 at 1:48 PM Kasahara Tatsuhito\n> > <kasahara.tatsuhito@gmail.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> On Mon, Nov 30, 2020 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>\n> >>>\n> >>>\n> >>> On 2020/11/30 10:43, Masahiko Sawada wrote:\n> >>>> On Sun, Nov 29, 2020 at 10:34 PM Kasahara Tatsuhito\n> >>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>\n> >>>>> Hi, Thanks for you comments.\n> >>>>>\n> >>>>> On Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>\n> >>>>>>\n> >>>>>>\n> >>>>>> On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n> >>>>>>> Hi,\n> >>>>>>>\n> >>>>>>> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>\n> >>>>>>>>\n> >>>>>>>>\n> >>>>>>>> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n> >>>>>>>>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>>>>>>>\n> >>>>>>>>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n> >>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>\n> >>>>>>>>>>> Hi,\n> >>>>>>>>>>>\n> >>>>>>>>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> >>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> Hi,\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> >>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n> >>>>>>>>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n> >>>>>>>>>>>>>>> be already vacuumed, return immediately. 
If not, *then* force a stats\n> >>>>>>>>>>>>>>> re-read, and check a second time.\n> >>>>>>>>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n> >>>>>>>>>>>>>> will be improved to first check using the previous refreshed statistics?\n> >>>>>>>>>>>>>> I think that certainly works.\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> If that's correct, I'll try to create a patch for the PoC\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> >>>>>>>>>>>>> what was probably a very similar problem.\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> This problem seems to be more likely to occur in cases where you have\n> >>>>>>>>>>>>> a large number of tables,\n> >>>>>>>>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n> >>>>>>>>>>>>> the same time.\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> So I followed Tom's advice and created a patch for the PoC.\n> >>>>>>>>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n> >>>>>>>>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> >>>>>>>>>>>>> by another worker on the check after the stats have been updated.\n> >>>>>>>>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n> >>>>>>>>>>>>> will be required instead of using the existing statistics.\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n> >>>>>>>>>>>>> The tests were conducted in two cases.\n> >>>>>>>>>>>>> (I changed few configurations. see attached scripts)\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> 1. Normal VACUUM case\n> >>>>>>>>>>>>> - SET autovacuum = off\n> >>>>>>>>>>>>> - CREATE tables with 100 rows\n> >>>>>>>>>>>>> - DELETE 90 rows for each tables\n> >>>>>>>>>>>>> - SET autovacuum = on and restart PostgreSQL\n> >>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> 2. 
Anti wrap round VACUUM case\n> >>>>>>>>>>>>> - CREATE brank tables\n> >>>>>>>>>>>>> - SELECT all of these tables (for generate stats)\n> >>>>>>>>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> >>>>>>>>>>>>> - Consumes a lot of XIDs by using txid_curent()\n> >>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> For each test case, the following results were obtained by changing\n> >>>>>>>>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> >>>>>>>>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> >>>>>>>>>>>>> but I think it's enough to ask for a trend.\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> ===========================================================================\n> >>>>>>>>>>>>> [1.Normal VACUUM case]\n> >>>>>>>>>>>>> tables:1000\n> >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> tables:5000\n> >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> tables:10000\n> >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> tables:20000\n> >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> [2.Anti wrap round VACUUM case]\n> >>>>>>>>>>>>> tables:1000\n> >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> tables:5000\n> >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 5: 
(HEAD) 39 sec VS (with patch) 28 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> tables:10000\n> >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> tables:20000\n> >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> >>>>>>>>>>>>> ===========================================================================\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> The cases without patch, the scalability of the worker has decreased\n> >>>>>>>>>>>>> as the number of tables has increased.\n> >>>>>>>>>>>>> In fact, the more workers there are, the longer it takes to complete\n> >>>>>>>>>>>>> VACUUM to all tables.\n> >>>>>>>>>>>>> The cases with patch, it shows good scalability with respect to the\n> >>>>>>>>>>>>> number of workers.\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> It seems a good performance improvement even without the patch of\n> >>>>>>>>>>>> shared memory based stats collector.\n> >>>>>>>>\n> >>>>>>>> Sounds great!\n> >>>>>>>>\n> >>>>>>>>\n> >>>>>>>>>>>>\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n> >>>>>>>>>>>>> hash_seq_search and\n> >>>>>>>>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> >>>>>>>>>>>>> with or without the patch.\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> Therefore, there is still a need to find ways to optimize the reading\n> >>>>>>>>>>>>> of large amounts of stats.\n> >>>>>>>>>>>>> However, this patch is effective in its own right, and since there are\n> >>>>>>>>>>>>> only a few parts to modify,\n> >>>>>>>>>>>>> I think it should be able to be applied to current (preferably\n> >>>>>>>>>>>>> pre-v13) PostgreSQL.\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> +1\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> +\n> >>>>>>>>>>>> + /* We might be better to refresh stats */\n> >>>>>>>>>>>> + use_existing_stats = false;\n> >>>>>>>>>>>> }\n> >>>>>>>>>>>> + else\n> >>>>>>>>>>>> + {\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> - heap_freetuple(classTup);\n> >>>>>>>>>>>> + heap_freetuple(classTup);\n> >>>>>>>>>>>> + /* The relid has already vacuumed, so we might be better to\n> >>>>>>>>>>>> use exiting stats */\n> >>>>>>>>>>>> + use_existing_stats = true;\n> >>>>>>>>>>>> + }\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> With that patch, the autovacuum process refreshes the stats in the\n> >>>>>>>>>>>> next check if it finds out that the table still needs to be vacuumed.\n> >>>>>>>>>>>> But I guess it's not necessarily true because the next table might be\n> >>>>>>>>>>>> vacuumed already. So I think we might want to always use the existing\n> >>>>>>>>>>>> for the first check. 
What do you think?\n> >>>>>>>>>>> Thanks for your comment.\n> >>>>>>>>>>>\n> >>>>>>>>>>> If we assume the case where some workers vacuum on large tables\n> >>>>>>>>>>> and a single worker vacuum on small tables, the processing\n> >>>>>>>>>>> performance of the single worker will be slightly lower if the\n> >>>>>>>>>>> existing statistics are checked every time.\n> >>>>>>>>>>>\n> >>>>>>>>>>> In fact, at first I tried to check the existing stats every time,\n> >>>>>>>>>>> but the performance was slightly worse in cases with a small number of workers.\n> >>>>>>>>\n> >>>>>>>> Do you have this benchmark result?\n> >>>>>>>>\n> >>>>>>>>\n> >>>>>>>>>>> (Checking the existing stats is lightweight , but at high frequency,\n> >>>>>>>>>>> it affects processing performance.)\n> >>>>>>>>>>> Therefore, at after refresh statistics, determine whether autovac\n> >>>>>>>>>>> should use the existing statistics.\n> >>>>>>>>>>\n> >>>>>>>>>> Yeah, since the test you used uses a lot of small tables, if there are\n> >>>>>>>>>> a few workers, checking the existing stats is unlikely to return true\n> >>>>>>>>>> (no need to vacuum). So the cost of existing stats check ends up being\n> >>>>>>>>>> overhead. Not sure how slow always checking the existing stats was,\n> >>>>>>>>>> but given that the shared memory based stats collector patch could\n> >>>>>>>>>> improve the performance of refreshing stats, it might be better not to\n> >>>>>>>>>> check the existing stats frequently like the patch does. Anyway, I\n> >>>>>>>>>> think it’s better to evaluate the performance improvement with other\n> >>>>>>>>>> cases too.\n> >>>>>>>>> Yeah, I would like to see how much the performance changes in other cases.\n> >>>>>>>>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n> >>>>>>>>> a huge stats file, so we will just have to check the stats on\n> >>>>>>>>> shared-mem every time.\n> >>>>>>>>> Perhaps the logic of table_recheck_autovac could be simpler.\n> >>>>>>>>>\n> >>>>>>>>>>> BTW, I found some typos in comments, so attache a fixed version.\n> >>>>>>>>\n> >>>>>>>> The patch adds some duplicated codes into table_recheck_autovac().\n> >>>>>>>> It's better to make the common function performing them and make\n> >>>>>>>> table_recheck_autovac() call that common function, to simplify the code.\n> >>>>>>> Thanks for your comment.\n> >>>>>>> Hmm.. I've cut out the duplicate part.\n> >>>>>>> Attach the patch.\n> >>>>>>> Could you confirm that it fits your expecting?\n> >>>>>>\n> >>>>>> Yes, thanks for updataing the patch! Here are another review comments.\n> >>>>>>\n> >>>>>> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n> >>>>>> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n> >>>>>>\n> >>>>>> When using the existing stats, ISTM that these are not necessary and\n> >>>>>> we can reuse \"shared\" and \"dbentry\" obtained before. 
Right?\n> >>>>> Yeah, but unless autovac_refresh_stats() is called, these functions\n> >>>>> read the information from the\n> >>>>> local hash table without re-read the stats file, so the process is very light.\n> >>>>> Therefore, I think, it is better to keep the current logic to keep the\n> >>>>> code simple.\n> >>>>>\n> >>>>>>\n> >>>>>> + /* We might be better to refresh stats */\n> >>>>>> + use_existing_stats = false;\n> >>>>>>\n> >>>>>> I think that we should add more comments about why it's better to\n> >>>>>> refresh the stats in this case.\n> >>>>>>\n> >>>>>> + /* The relid has already vacuumed, so we might be better to use existing stats */\n> >>>>>> + use_existing_stats = true;\n> >>>>>>\n> >>>>>> I think that we should add more comments about why it's better to\n> >>>>>> reuse the stats in this case.\n> >>>>> I added comments.\n> >>>>>\n> >>>>> Attache the patch.\n> >>>>>\n> >>>>\n> >>>> Thank you for updating the patch. Here are some small comments on the\n> >>>> latest (v4) patch.\n> >>>>\n> >>>> + * So if the last time we checked a table that was already vacuumed after\n> >>>> + * refres stats, check the current statistics before refreshing it.\n> >>>> + */\n> >>>>\n> >>>> s/refres/refresh/\n> >> Thanks! fixed.\n> >> Attached the patch.\n> >>\n> >>>>\n> >>>> -----\n> >>>> +/* Counter to determine if statistics should be refreshed */\n> >>>> +static bool use_existing_stats = false;\n> >>>> +\n> >>>>\n> >>>> I think 'use_existing_stats' can be declared within table_recheck_autovac().\n> >>>>\n> >>>> -----\n> >>>> While testing the performance, I realized that the statistics are\n> >>>> reset every time vacuumed one table, leading to re-reading the stats\n> >>>> file even if 'use_existing_stats' is true. Please refer that vacuum()\n> >>>> eventually calls AtEOXact_PgStat() which calls to\n> >>>> pgstat_clear_snapshot().\n> >>>\n> >>> Good catch!\n> >>>\n> >>>\n> >>>> I believe that's why the performance of the\n> >>>> method of always checking the existing stats wasn’t good as expected.\n> >>>> So if we save the statistics somewhere and use it for rechecking, the\n> >>>> results of the performance benchmark will differ between these two\n> >>>> methods.\n> >> Thanks for you checks.\n> >> But, if a worker did vacuum(), that means this worker had determined\n> >> need vacuum in the\n> >> table_recheck_autovac(). So, use_existing_stats set to false, and next\n> >> time, refresh stats.\n> >> Therefore I think the current patch is fine, as we want to avoid\n> >> unnecessary refreshing of\n> >> statistics before the actual vacuum(), right?\n> >\n> > Yes, you're right.\n> >\n> > When I benchmarked the performance of the method of always checking\n> > existing stats I edited your patch so that it sets use_existing_stats\n> > = true even if the second check is false (i.g., vacuum is needed).\n> > And the result I got was worse than expected especially in the case of\n> > a few autovacuum workers. But it doesn't evaluate the performance of\n> > that method rightly as the stats snapshot is cleared every time\n> > vacuum. Given you had similar results, I guess you used a similar way\n> > when evaluating it, is it right? 
If so, it’s better to fix this issue\n> > and see how the performance benchmark results will differ.\n> >\n> > For example, the results of the test case with 10000 tables and 1\n> > autovacuum worker I reported before was:\n> >\n> > 10000 tables:\n> > autovac_workers 1 : 158s,157s, 290s\n> >\n> > But after fixing that issue in the third method (always checking the\n> > existing stats), the results are:\n>\n> Could you tell me how you fixed that issue? You copied the stats to\n> somewhere as you suggested or skipped pgstat_clear_snapshot() as\n> I suggested?\n\nI used the way you suggested in this quick test; skipped\npgstat_clear_snapshot().\n\n>\n> Kasahara-san seems not to like the latter idea because it might\n> cause bad side effect. So we should use the former idea?\n\nNot sure. I'm also concerned about the side effect but I've not checked yet.\n\nSince probably there is no big difference between the two ways in\nterms of performance I'm going to see how the performance benchmark\nresult will change first. Maybe meanwhile we can discuss on these two\nchoices.\n\nRegards,\n\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 1 Dec 2020 17:31:00 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "On Tue, Dec 1, 2020 at 5:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Dec 1, 2020 at 4:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> >\n> >\n> > On 2020/12/01 16:23, Masahiko Sawada wrote:\n> > > On Tue, Dec 1, 2020 at 1:48 PM Kasahara Tatsuhito\n> > > <kasahara.tatsuhito@gmail.com> wrote:\n> > >>\n> > >> Hi,\n> > >>\n> > >> On Mon, Nov 30, 2020 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > >>>\n> > >>>\n> > >>>\n> > >>> On 2020/11/30 10:43, Masahiko Sawada wrote:\n> > >>>> On Sun, Nov 29, 2020 at 10:34 PM Kasahara Tatsuhito\n> > >>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>\n> > >>>>> Hi, Thanks for you comments.\n> > >>>>>\n> > >>>>> On Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > >>>>>>\n> > >>>>>>\n> > >>>>>>\n> > >>>>>> On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n> > >>>>>>> Hi,\n> > >>>>>>>\n> > >>>>>>> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > >>>>>>>>\n> > >>>>>>>>\n> > >>>>>>>>\n> > >>>>>>>> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n> > >>>>>>>>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >>>>>>>>>>\n> > >>>>>>>>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n> > >>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>>>>>>>\n> > >>>>>>>>>>> Hi,\n> > >>>>>>>>>>>\n> > >>>>>>>>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >>>>>>>>>>>>\n> > >>>>>>>>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> > >>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> Hi,\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> > >>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>>>>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n> > >>>>>>>>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n> > >>>>>>>>>>>>>>> be already vacuumed, return immediately. 
If not, *then* force a stats\n> > >>>>>>>>>>>>>>> re-read, and check a second time.\n> > >>>>>>>>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n> > >>>>>>>>>>>>>> will be improved to first check using the previous refreshed statistics?\n> > >>>>>>>>>>>>>> I think that certainly works.\n> > >>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>> If that's correct, I'll try to create a patch for the PoC\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> > >>>>>>>>>>>>> what was probably a very similar problem.\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> This problem seems to be more likely to occur in cases where you have\n> > >>>>>>>>>>>>> a large number of tables,\n> > >>>>>>>>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n> > >>>>>>>>>>>>> the same time.\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> So I followed Tom's advice and created a patch for the PoC.\n> > >>>>>>>>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n> > >>>>>>>>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> > >>>>>>>>>>>>> by another worker on the check after the stats have been updated.\n> > >>>>>>>>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n> > >>>>>>>>>>>>> will be required instead of using the existing statistics.\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n> > >>>>>>>>>>>>> The tests were conducted in two cases.\n> > >>>>>>>>>>>>> (I changed few configurations. see attached scripts)\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> 1. Normal VACUUM case\n> > >>>>>>>>>>>>> - SET autovacuum = off\n> > >>>>>>>>>>>>> - CREATE tables with 100 rows\n> > >>>>>>>>>>>>> - DELETE 90 rows for each tables\n> > >>>>>>>>>>>>> - SET autovacuum = on and restart PostgreSQL\n> > >>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> 2. 
Anti wrap round VACUUM case\n> > >>>>>>>>>>>>> - CREATE brank tables\n> > >>>>>>>>>>>>> - SELECT all of these tables (for generate stats)\n> > >>>>>>>>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> > >>>>>>>>>>>>> - Consumes a lot of XIDs by using txid_curent()\n> > >>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> For each test case, the following results were obtained by changing\n> > >>>>>>>>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> > >>>>>>>>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> > >>>>>>>>>>>>> but I think it's enough to ask for a trend.\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> ===========================================================================\n> > >>>>>>>>>>>>> [1.Normal VACUUM case]\n> > >>>>>>>>>>>>> tables:1000\n> > >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> tables:5000\n> > >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> tables:10000\n> > >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> tables:20000\n> > >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> [2.Anti wrap round VACUUM case]\n> > >>>>>>>>>>>>> tables:1000\n> > >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> tables:5000\n> > >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> > 
>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> tables:10000\n> > >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> tables:20000\n> > >>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> > >>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> > >>>>>>>>>>>>> ===========================================================================\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> The cases without patch, the scalability of the worker has decreased\n> > >>>>>>>>>>>>> as the number of tables has increased.\n> > >>>>>>>>>>>>> In fact, the more workers there are, the longer it takes to complete\n> > >>>>>>>>>>>>> VACUUM to all tables.\n> > >>>>>>>>>>>>> The cases with patch, it shows good scalability with respect to the\n> > >>>>>>>>>>>>> number of workers.\n> > >>>>>>>>>>>>\n> > >>>>>>>>>>>> It seems a good performance improvement even without the patch of\n> > >>>>>>>>>>>> shared memory based stats collector.\n> > >>>>>>>>\n> > >>>>>>>> Sounds great!\n> > >>>>>>>>\n> > >>>>>>>>\n> > >>>>>>>>>>>>\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n> > >>>>>>>>>>>>> hash_seq_search and\n> > >>>>>>>>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> > >>>>>>>>>>>>> with or without the patch.\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> Therefore, there is still a need to find ways to optimize the reading\n> > >>>>>>>>>>>>> of large amounts of stats.\n> > >>>>>>>>>>>>> However, this patch is effective in its own right, and since there are\n> > >>>>>>>>>>>>> only a few parts to modify,\n> > >>>>>>>>>>>>> I think it should be able to be applied to current (preferably\n> > >>>>>>>>>>>>> pre-v13) PostgreSQL.\n> > >>>>>>>>>>>>\n> > >>>>>>>>>>>> +1\n> > >>>>>>>>>>>>\n> > >>>>>>>>>>>> +\n> > >>>>>>>>>>>> + /* We might be better to refresh stats */\n> > >>>>>>>>>>>> + use_existing_stats = false;\n> > >>>>>>>>>>>> }\n> > >>>>>>>>>>>> + else\n> > >>>>>>>>>>>> + {\n> > >>>>>>>>>>>>\n> > >>>>>>>>>>>> - heap_freetuple(classTup);\n> > >>>>>>>>>>>> + heap_freetuple(classTup);\n> > >>>>>>>>>>>> + /* The relid has already vacuumed, so we might be better to\n> > >>>>>>>>>>>> use exiting stats */\n> > >>>>>>>>>>>> + use_existing_stats = true;\n> > >>>>>>>>>>>> + }\n> > >>>>>>>>>>>>\n> > >>>>>>>>>>>> With that patch, the autovacuum process refreshes the stats in the\n> > >>>>>>>>>>>> next check if it finds out that the table still needs to be vacuumed.\n> > >>>>>>>>>>>> But I guess it's not necessarily true because the next table might be\n> > >>>>>>>>>>>> vacuumed already. 
So I think we might want to always use the existing\n> > >>>>>>>>>>>> for the first check. What do you think?\n> > >>>>>>>>>>> Thanks for your comment.\n> > >>>>>>>>>>>\n> > >>>>>>>>>>> If we assume the case where some workers vacuum on large tables\n> > >>>>>>>>>>> and a single worker vacuum on small tables, the processing\n> > >>>>>>>>>>> performance of the single worker will be slightly lower if the\n> > >>>>>>>>>>> existing statistics are checked every time.\n> > >>>>>>>>>>>\n> > >>>>>>>>>>> In fact, at first I tried to check the existing stats every time,\n> > >>>>>>>>>>> but the performance was slightly worse in cases with a small number of workers.\n> > >>>>>>>>\n> > >>>>>>>> Do you have this benchmark result?\n> > >>>>>>>>\n> > >>>>>>>>\n> > >>>>>>>>>>> (Checking the existing stats is lightweight , but at high frequency,\n> > >>>>>>>>>>> it affects processing performance.)\n> > >>>>>>>>>>> Therefore, at after refresh statistics, determine whether autovac\n> > >>>>>>>>>>> should use the existing statistics.\n> > >>>>>>>>>>\n> > >>>>>>>>>> Yeah, since the test you used uses a lot of small tables, if there are\n> > >>>>>>>>>> a few workers, checking the existing stats is unlikely to return true\n> > >>>>>>>>>> (no need to vacuum). So the cost of existing stats check ends up being\n> > >>>>>>>>>> overhead. Not sure how slow always checking the existing stats was,\n> > >>>>>>>>>> but given that the shared memory based stats collector patch could\n> > >>>>>>>>>> improve the performance of refreshing stats, it might be better not to\n> > >>>>>>>>>> check the existing stats frequently like the patch does. Anyway, I\n> > >>>>>>>>>> think it’s better to evaluate the performance improvement with other\n> > >>>>>>>>>> cases too.\n> > >>>>>>>>> Yeah, I would like to see how much the performance changes in other cases.\n> > >>>>>>>>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n> > >>>>>>>>> a huge stats file, so we will just have to check the stats on\n> > >>>>>>>>> shared-mem every time.\n> > >>>>>>>>> Perhaps the logic of table_recheck_autovac could be simpler.\n> > >>>>>>>>>\n> > >>>>>>>>>>> BTW, I found some typos in comments, so attache a fixed version.\n> > >>>>>>>>\n> > >>>>>>>> The patch adds some duplicated codes into table_recheck_autovac().\n> > >>>>>>>> It's better to make the common function performing them and make\n> > >>>>>>>> table_recheck_autovac() call that common function, to simplify the code.\n> > >>>>>>> Thanks for your comment.\n> > >>>>>>> Hmm.. I've cut out the duplicate part.\n> > >>>>>>> Attach the patch.\n> > >>>>>>> Could you confirm that it fits your expecting?\n> > >>>>>>\n> > >>>>>> Yes, thanks for updataing the patch! Here are another review comments.\n> > >>>>>>\n> > >>>>>> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n> > >>>>>> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n> > >>>>>>\n> > >>>>>> When using the existing stats, ISTM that these are not necessary and\n> > >>>>>> we can reuse \"shared\" and \"dbentry\" obtained before. 
Right?\n> > >>>>> Yeah, but unless autovac_refresh_stats() is called, these functions\n> > >>>>> read the information from the\n> > >>>>> local hash table without re-read the stats file, so the process is very light.\n> > >>>>> Therefore, I think, it is better to keep the current logic to keep the\n> > >>>>> code simple.\n> > >>>>>\n> > >>>>>>\n> > >>>>>> + /* We might be better to refresh stats */\n> > >>>>>> + use_existing_stats = false;\n> > >>>>>>\n> > >>>>>> I think that we should add more comments about why it's better to\n> > >>>>>> refresh the stats in this case.\n> > >>>>>>\n> > >>>>>> + /* The relid has already vacuumed, so we might be better to use existing stats */\n> > >>>>>> + use_existing_stats = true;\n> > >>>>>>\n> > >>>>>> I think that we should add more comments about why it's better to\n> > >>>>>> reuse the stats in this case.\n> > >>>>> I added comments.\n> > >>>>>\n> > >>>>> Attache the patch.\n> > >>>>>\n> > >>>>\n> > >>>> Thank you for updating the patch. Here are some small comments on the\n> > >>>> latest (v4) patch.\n> > >>>>\n> > >>>> + * So if the last time we checked a table that was already vacuumed after\n> > >>>> + * refres stats, check the current statistics before refreshing it.\n> > >>>> + */\n> > >>>>\n> > >>>> s/refres/refresh/\n> > >> Thanks! fixed.\n> > >> Attached the patch.\n> > >>\n> > >>>>\n> > >>>> -----\n> > >>>> +/* Counter to determine if statistics should be refreshed */\n> > >>>> +static bool use_existing_stats = false;\n> > >>>> +\n> > >>>>\n> > >>>> I think 'use_existing_stats' can be declared within table_recheck_autovac().\n> > >>>>\n> > >>>> -----\n> > >>>> While testing the performance, I realized that the statistics are\n> > >>>> reset every time vacuumed one table, leading to re-reading the stats\n> > >>>> file even if 'use_existing_stats' is true. Please refer that vacuum()\n> > >>>> eventually calls AtEOXact_PgStat() which calls to\n> > >>>> pgstat_clear_snapshot().\n> > >>>\n> > >>> Good catch!\n> > >>>\n> > >>>\n> > >>>> I believe that's why the performance of the\n> > >>>> method of always checking the existing stats wasn’t good as expected.\n> > >>>> So if we save the statistics somewhere and use it for rechecking, the\n> > >>>> results of the performance benchmark will differ between these two\n> > >>>> methods.\n> > >> Thanks for you checks.\n> > >> But, if a worker did vacuum(), that means this worker had determined\n> > >> need vacuum in the\n> > >> table_recheck_autovac(). So, use_existing_stats set to false, and next\n> > >> time, refresh stats.\n> > >> Therefore I think the current patch is fine, as we want to avoid\n> > >> unnecessary refreshing of\n> > >> statistics before the actual vacuum(), right?\n> > >\n> > > Yes, you're right.\n> > >\n> > > When I benchmarked the performance of the method of always checking\n> > > existing stats I edited your patch so that it sets use_existing_stats\n> > > = true even if the second check is false (i.g., vacuum is needed).\n> > > And the result I got was worse than expected especially in the case of\n> > > a few autovacuum workers. But it doesn't evaluate the performance of\n> > > that method rightly as the stats snapshot is cleared every time\n> > > vacuum. Given you had similar results, I guess you used a similar way\n> > > when evaluating it, is it right? 
If so, it’s better to fix this issue\n> > > and see how the performance benchmark results will differ.\n> > >\n> > > For example, the results of the test case with 10000 tables and 1\n> > > autovacuum worker I reported before was:\n> > >\n> > > 10000 tables:\n> > > autovac_workers 1 : 158s,157s, 290s\n> > >\n> > > But after fixing that issue in the third method (always checking the\n> > > existing stats), the results are:\n> >\n> > Could you tell me how you fixed that issue? You copied the stats to\n> > somewhere as you suggested or skipped pgstat_clear_snapshot() as\n> > I suggested?\n>\n> I used the way you suggested in this quick test; skipped\n> pgstat_clear_snapshot().\n>\n> >\n> > Kasahara-san seems not to like the latter idea because it might\n> > cause bad side effect. So we should use the former idea?\n>\n> Not sure. I'm also concerned about the side effect but I've not checked yet.\n>\n> Since probably there is no big difference between the two ways in\n> terms of performance I'm going to see how the performance benchmark\n> result will change first.\n\nI've tested performance improvement again. From the left the execution\ntime of the current HEAD, Kasahara-san's patch, the method of always\nchecking the existing stats (using approach suggested by Fujii-san),\nin seconds.\n\n1000 tables:\n autovac_workers 1 : 13s, 13s, 13s\n autovac_workers 2 : 6s, 4s, 4s\n autovac_workers 3 : 3s, 4s, 3s\n autovac_workers 5 : 3s, 3s, 2s\n autovac_workers 10: 2s, 3s, 2s\n\n5000 tables:\n autovac_workers 1 : 71s, 71s, 72s\n autovac_workers 2 : 37s, 32s, 32s\n autovac_workers 3 : 29s, 26s, 26s\n autovac_workers 5 : 20s, 19s, 18s\n autovac_workers 10: 13s, 8s, 8s\n\n10000 tables:\n autovac_workers 1 : 158s,157s, 159s\n autovac_workers 2 : 80s, 53s, 78s\n autovac_workers 3 : 75s, 67s, 67s\n autovac_workers 5 : 61s, 42s, 42s\n autovac_workers 10: 69s, 26s, 25s\n\n20000 tables:\n autovac_workers 1 : 379s, 380s, 389s\n autovac_workers 2 : 236s, 232s, 233s\n autovac_workers 3 : 222s, 181s, 182s\n autovac_workers 5 : 212s, 132s, 139s\n autovac_workers 10: 317s, 91s, 89s\n\nI don't see a big difference between Kasahara-san's patch and the\nmethod of always checking the existing stats.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 2 Dec 2020 12:53:51 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "\n\nOn 2020/12/02 12:53, Masahiko Sawada wrote:\n> On Tue, Dec 1, 2020 at 5:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Tue, Dec 1, 2020 at 4:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>>\n>>>\n>>> On 2020/12/01 16:23, Masahiko Sawada wrote:\n>>>> On Tue, Dec 1, 2020 at 1:48 PM Kasahara Tatsuhito\n>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>\n>>>>> Hi,\n>>>>>\n>>>>> On Mon, Nov 30, 2020 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On 2020/11/30 10:43, Masahiko Sawada wrote:\n>>>>>>> On Sun, Nov 29, 2020 at 10:34 PM Kasahara Tatsuhito\n>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>\n>>>>>>>> Hi, Thanks for you comments.\n>>>>>>>>\n>>>>>>>> On Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n>>>>>>>>>> Hi,\n>>>>>>>>>>\n>>>>>>>>>> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> 
wrote:\n>>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n>>>>>>>>>>>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> Hi,\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> Hi,\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n>>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n>>>>>>>>>>>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n>>>>>>>>>>>>>>>>>> be already vacuumed, return immediately. If not, *then* force a stats\n>>>>>>>>>>>>>>>>>> re-read, and check a second time.\n>>>>>>>>>>>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n>>>>>>>>>>>>>>>>> will be improved to first check using the previous refreshed statistics?\n>>>>>>>>>>>>>>>>> I think that certainly works.\n>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>> If that's correct, I'll try to create a patch for the PoC\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n>>>>>>>>>>>>>>>> what was probably a very similar problem.\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> This problem seems to be more likely to occur in cases where you have\n>>>>>>>>>>>>>>>> a large number of tables,\n>>>>>>>>>>>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n>>>>>>>>>>>>>>>> the same time.\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> So I followed Tom's advice and created a patch for the PoC.\n>>>>>>>>>>>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n>>>>>>>>>>>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n>>>>>>>>>>>>>>>> by another worker on the check after the stats have been updated.\n>>>>>>>>>>>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n>>>>>>>>>>>>>>>> will be required instead of using the existing statistics.\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n>>>>>>>>>>>>>>>> The tests were conducted in two cases.\n>>>>>>>>>>>>>>>> (I changed few configurations. see attached scripts)\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> 1. Normal VACUUM case\n>>>>>>>>>>>>>>>> - SET autovacuum = off\n>>>>>>>>>>>>>>>> - CREATE tables with 100 rows\n>>>>>>>>>>>>>>>> - DELETE 90 rows for each tables\n>>>>>>>>>>>>>>>> - SET autovacuum = on and restart PostgreSQL\n>>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> 2. 
Anti wrap round VACUUM case\n>>>>>>>>>>>>>>>> - CREATE brank tables\n>>>>>>>>>>>>>>>> - SELECT all of these tables (for generate stats)\n>>>>>>>>>>>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n>>>>>>>>>>>>>>>> - Consumes a lot of XIDs by using txid_curent()\n>>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> For each test case, the following results were obtained by changing\n>>>>>>>>>>>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n>>>>>>>>>>>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n>>>>>>>>>>>>>>>> but I think it's enough to ask for a trend.\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> ===========================================================================\n>>>>>>>>>>>>>>>> [1.Normal VACUUM case]\n>>>>>>>>>>>>>>>> tables:1000\n>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> tables:5000\n>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> tables:10000\n>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> tables:20000\n>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> [2.Anti wrap round VACUUM case]\n>>>>>>>>>>>>>>>> tables:1000\n>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> tables:5000\n>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with 
patch) 37 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> tables:10000\n>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> tables:20000\n>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n>>>>>>>>>>>>>>>> ===========================================================================\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> The cases without patch, the scalability of the worker has decreased\n>>>>>>>>>>>>>>>> as the number of tables has increased.\n>>>>>>>>>>>>>>>> In fact, the more workers there are, the longer it takes to complete\n>>>>>>>>>>>>>>>> VACUUM to all tables.\n>>>>>>>>>>>>>>>> The cases with patch, it shows good scalability with respect to the\n>>>>>>>>>>>>>>>> number of workers.\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> It seems a good performance improvement even without the patch of\n>>>>>>>>>>>>>>> shared memory based stats collector.\n>>>>>>>>>>>\n>>>>>>>>>>> Sounds great!\n>>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n>>>>>>>>>>>>>>>> hash_seq_search and\n>>>>>>>>>>>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n>>>>>>>>>>>>>>>> with or without the patch.\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> Therefore, there is still a need to find ways to optimize the reading\n>>>>>>>>>>>>>>>> of large amounts of stats.\n>>>>>>>>>>>>>>>> However, this patch is effective in its own right, and since there are\n>>>>>>>>>>>>>>>> only a few parts to modify,\n>>>>>>>>>>>>>>>> I think it should be able to be applied to current (preferably\n>>>>>>>>>>>>>>>> pre-v13) PostgreSQL.\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> +1\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> +\n>>>>>>>>>>>>>>> + /* We might be better to refresh stats */\n>>>>>>>>>>>>>>> + use_existing_stats = false;\n>>>>>>>>>>>>>>> }\n>>>>>>>>>>>>>>> + else\n>>>>>>>>>>>>>>> + {\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> - heap_freetuple(classTup);\n>>>>>>>>>>>>>>> + heap_freetuple(classTup);\n>>>>>>>>>>>>>>> + /* The relid has already vacuumed, so we might be better to\n>>>>>>>>>>>>>>> use exiting stats */\n>>>>>>>>>>>>>>> + use_existing_stats = true;\n>>>>>>>>>>>>>>> + }\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> With that patch, the autovacuum process refreshes the stats in the\n>>>>>>>>>>>>>>> next check if it finds out that the table still needs to be vacuumed.\n>>>>>>>>>>>>>>> But I guess it's not necessarily true because the next table might be\n>>>>>>>>>>>>>>> vacuumed already. So I think we might want to always use the existing\n>>>>>>>>>>>>>>> for the first check. 
What do you think?\n>>>>>>>>>>>>>> Thanks for your comment.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> If we assume the case where some workers vacuum on large tables\n>>>>>>>>>>>>>> and a single worker vacuum on small tables, the processing\n>>>>>>>>>>>>>> performance of the single worker will be slightly lower if the\n>>>>>>>>>>>>>> existing statistics are checked every time.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> In fact, at first I tried to check the existing stats every time,\n>>>>>>>>>>>>>> but the performance was slightly worse in cases with a small number of workers.\n>>>>>>>>>>>\n>>>>>>>>>>> Do you have this benchmark result?\n>>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>>>>> (Checking the existing stats is lightweight , but at high frequency,\n>>>>>>>>>>>>>> it affects processing performance.)\n>>>>>>>>>>>>>> Therefore, at after refresh statistics, determine whether autovac\n>>>>>>>>>>>>>> should use the existing statistics.\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> Yeah, since the test you used uses a lot of small tables, if there are\n>>>>>>>>>>>>> a few workers, checking the existing stats is unlikely to return true\n>>>>>>>>>>>>> (no need to vacuum). So the cost of existing stats check ends up being\n>>>>>>>>>>>>> overhead. Not sure how slow always checking the existing stats was,\n>>>>>>>>>>>>> but given that the shared memory based stats collector patch could\n>>>>>>>>>>>>> improve the performance of refreshing stats, it might be better not to\n>>>>>>>>>>>>> check the existing stats frequently like the patch does. Anyway, I\n>>>>>>>>>>>>> think it’s better to evaluate the performance improvement with other\n>>>>>>>>>>>>> cases too.\n>>>>>>>>>>>> Yeah, I would like to see how much the performance changes in other cases.\n>>>>>>>>>>>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n>>>>>>>>>>>> a huge stats file, so we will just have to check the stats on\n>>>>>>>>>>>> shared-mem every time.\n>>>>>>>>>>>> Perhaps the logic of table_recheck_autovac could be simpler.\n>>>>>>>>>>>>\n>>>>>>>>>>>>>> BTW, I found some typos in comments, so attache a fixed version.\n>>>>>>>>>>>\n>>>>>>>>>>> The patch adds some duplicated codes into table_recheck_autovac().\n>>>>>>>>>>> It's better to make the common function performing them and make\n>>>>>>>>>>> table_recheck_autovac() call that common function, to simplify the code.\n>>>>>>>>>> Thanks for your comment.\n>>>>>>>>>> Hmm.. I've cut out the duplicate part.\n>>>>>>>>>> Attach the patch.\n>>>>>>>>>> Could you confirm that it fits your expecting?\n>>>>>>>>>\n>>>>>>>>> Yes, thanks for updataing the patch! Here are another review comments.\n>>>>>>>>>\n>>>>>>>>> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n>>>>>>>>> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n>>>>>>>>>\n>>>>>>>>> When using the existing stats, ISTM that these are not necessary and\n>>>>>>>>> we can reuse \"shared\" and \"dbentry\" obtained before. 
Right?\n>>>>>>>> Yeah, but unless autovac_refresh_stats() is called, these functions\n>>>>>>>> read the information from the\n>>>>>>>> local hash table without re-read the stats file, so the process is very light.\n>>>>>>>> Therefore, I think, it is better to keep the current logic to keep the\n>>>>>>>> code simple.\n>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> + /* We might be better to refresh stats */\n>>>>>>>>> + use_existing_stats = false;\n>>>>>>>>>\n>>>>>>>>> I think that we should add more comments about why it's better to\n>>>>>>>>> refresh the stats in this case.\n>>>>>>>>>\n>>>>>>>>> + /* The relid has already vacuumed, so we might be better to use existing stats */\n>>>>>>>>> + use_existing_stats = true;\n>>>>>>>>>\n>>>>>>>>> I think that we should add more comments about why it's better to\n>>>>>>>>> reuse the stats in this case.\n>>>>>>>> I added comments.\n>>>>>>>>\n>>>>>>>> Attache the patch.\n>>>>>>>>\n>>>>>>>\n>>>>>>> Thank you for updating the patch. Here are some small comments on the\n>>>>>>> latest (v4) patch.\n>>>>>>>\n>>>>>>> + * So if the last time we checked a table that was already vacuumed after\n>>>>>>> + * refres stats, check the current statistics before refreshing it.\n>>>>>>> + */\n>>>>>>>\n>>>>>>> s/refres/refresh/\n>>>>> Thanks! fixed.\n>>>>> Attached the patch.\n>>>>>\n>>>>>>>\n>>>>>>> -----\n>>>>>>> +/* Counter to determine if statistics should be refreshed */\n>>>>>>> +static bool use_existing_stats = false;\n>>>>>>> +\n>>>>>>>\n>>>>>>> I think 'use_existing_stats' can be declared within table_recheck_autovac().\n>>>>>>>\n>>>>>>> -----\n>>>>>>> While testing the performance, I realized that the statistics are\n>>>>>>> reset every time vacuumed one table, leading to re-reading the stats\n>>>>>>> file even if 'use_existing_stats' is true. Please refer that vacuum()\n>>>>>>> eventually calls AtEOXact_PgStat() which calls to\n>>>>>>> pgstat_clear_snapshot().\n>>>>>>\n>>>>>> Good catch!\n>>>>>>\n>>>>>>\n>>>>>>> I believe that's why the performance of the\n>>>>>>> method of always checking the existing stats wasn’t good as expected.\n>>>>>>> So if we save the statistics somewhere and use it for rechecking, the\n>>>>>>> results of the performance benchmark will differ between these two\n>>>>>>> methods.\n>>>>> Thanks for you checks.\n>>>>> But, if a worker did vacuum(), that means this worker had determined\n>>>>> need vacuum in the\n>>>>> table_recheck_autovac(). So, use_existing_stats set to false, and next\n>>>>> time, refresh stats.\n>>>>> Therefore I think the current patch is fine, as we want to avoid\n>>>>> unnecessary refreshing of\n>>>>> statistics before the actual vacuum(), right?\n>>>>\n>>>> Yes, you're right.\n>>>>\n>>>> When I benchmarked the performance of the method of always checking\n>>>> existing stats I edited your patch so that it sets use_existing_stats\n>>>> = true even if the second check is false (i.g., vacuum is needed).\n>>>> And the result I got was worse than expected especially in the case of\n>>>> a few autovacuum workers. But it doesn't evaluate the performance of\n>>>> that method rightly as the stats snapshot is cleared every time\n>>>> vacuum. Given you had similar results, I guess you used a similar way\n>>>> when evaluating it, is it right? 
If so, it’s better to fix this issue\n>>>> and see how the performance benchmark results will differ.\n>>>>\n>>>> For example, the results of the test case with 10000 tables and 1\n>>>> autovacuum worker I reported before was:\n>>>>\n>>>> 10000 tables:\n>>>> autovac_workers 1 : 158s,157s, 290s\n>>>>\n>>>> But after fixing that issue in the third method (always checking the\n>>>> existing stats), the results are:\n>>>\n>>> Could you tell me how you fixed that issue? You copied the stats to\n>>> somewhere as you suggested or skipped pgstat_clear_snapshot() as\n>>> I suggested?\n>>\n>> I used the way you suggested in this quick test; skipped\n>> pgstat_clear_snapshot().\n>>\n>>>\n>>> Kasahara-san seems not to like the latter idea because it might\n>>> cause bad side effect. So we should use the former idea?\n>>\n>> Not sure. I'm also concerned about the side effect but I've not checked yet.\n>>\n>> Since probably there is no big difference between the two ways in\n>> terms of performance I'm going to see how the performance benchmark\n>> result will change first.\n> \n> I've tested performance improvement again. From the left the execution\n> time of the current HEAD, Kasahara-san's patch, the method of always\n> checking the existing stats (using approach suggested by Fujii-san),\n> in seconds.\n> \n> 1000 tables:\n> autovac_workers 1 : 13s, 13s, 13s\n> autovac_workers 2 : 6s, 4s, 4s\n> autovac_workers 3 : 3s, 4s, 3s\n> autovac_workers 5 : 3s, 3s, 2s\n> autovac_workers 10: 2s, 3s, 2s\n> \n> 5000 tables:\n> autovac_workers 1 : 71s, 71s, 72s\n> autovac_workers 2 : 37s, 32s, 32s\n> autovac_workers 3 : 29s, 26s, 26s\n> autovac_workers 5 : 20s, 19s, 18s\n> autovac_workers 10: 13s, 8s, 8s\n> \n> 10000 tables:\n> autovac_workers 1 : 158s,157s, 159s\n> autovac_workers 2 : 80s, 53s, 78s\n> autovac_workers 3 : 75s, 67s, 67s\n> autovac_workers 5 : 61s, 42s, 42s\n> autovac_workers 10: 69s, 26s, 25s\n> \n> 20000 tables:\n> autovac_workers 1 : 379s, 380s, 389s\n> autovac_workers 2 : 236s, 232s, 233s\n> autovac_workers 3 : 222s, 181s, 182s\n> autovac_workers 5 : 212s, 132s, 139s\n> autovac_workers 10: 317s, 91s, 89s\n> \n> I don't see a big difference between Kasahara-san's patch and the\n> method of always checking the existing stats.\n\nThanks for doing the benchmark!\n\nThis benchmark result makes me think that we don't need to tweak\nAtEOXact_PgStat() and can use Kasahara-san approach.\nThat's good news :)\n\n+\t\t/*\n+\t\t * The relid had not yet been vacuumed. That means, it is unlikely that the\n+\t\t * stats that this worker currently has are updated by other worker's.\n+\t\t * So we might be better to refresh the stats in the next this recheck.\n+\t\t */\n+\t\tuse_existing_stats = false;\n\nI think that this comment should be changed to something like\nthe following. Thought?\n\n When we decide to do vacuum or analyze, the existing stats cannot\n be reused in the next cycle because it's cleared at the end of vacuum\n or analyze (by AtEOXact_PgStat()).\n\n+\t\t/*\n+\t\t * The relid had already vacuumed. 
That means, that for the stats that this\n+\t\t * worker currently has, the info of tables that this worker will process may\n+\t\t * have been updated by other workers with information that has already been\n+\t\t * vacuumed or analyzed.\n+\t\t * So we might be better to reuse the existing stats in the next this recheck.\n+\t\t */\n+\t\tuse_existing_stats = true;\n\nMaybe it's better to change this comment to something like the following?\n\n If neither vacuum nor analyze is necessary, the existing stats is\n not cleared and can be reused in the next cycle.\n\n+\tif (use_existing_stats)\n+\t{\n+\t\trecheck_relation_needs_vacanalyze(relid, classForm, avopts,\n+\t\t\t\t\t\t\t\t\t effective_multixact_freeze_max_age,\n+\t\t\t\t\t\t\t\t\t &dovacuum, &doanalyze, &wraparound);\n\nPersonally I'd like to add the assertion test checking \"pgStatDBHash != NULL\"\nhere, to guarantee that there is the existing stats to reuse when\nuse_existing_stats==true. Because if the future changes of autovacuum\ncode will break that assumption, it's not easy to detect that breakage\nwithout that assertion test. Thought?\n\n+\tshared = pgstat_fetch_stat_dbentry(InvalidOid);\n+\tdbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n\nIf classForm->relisshared is true, only the former needs to be executed.\nOtherwise, only the latter needs to be executed. Right?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 2 Dec 2020 15:33:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "Hi\n\nOn Wed, Dec 2, 2020 at 3:33 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/12/02 12:53, Masahiko Sawada wrote:\n> > On Tue, Dec 1, 2020 at 5:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>\n> >> On Tue, Dec 1, 2020 at 4:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>\n> >>>\n> >>>\n> >>> On 2020/12/01 16:23, Masahiko Sawada wrote:\n> >>>> On Tue, Dec 1, 2020 at 1:48 PM Kasahara Tatsuhito\n> >>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>\n> >>>>> Hi,\n> >>>>>\n> >>>>> On Mon, Nov 30, 2020 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>\n> >>>>>>\n> >>>>>>\n> >>>>>> On 2020/11/30 10:43, Masahiko Sawada wrote:\n> >>>>>>> On Sun, Nov 29, 2020 at 10:34 PM Kasahara Tatsuhito\n> >>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>\n> >>>>>>>> Hi, Thanks for you comments.\n> >>>>>>>>\n> >>>>>>>> On Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>>\n> >>>>>>>>>\n> >>>>>>>>>\n> >>>>>>>>> On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n> >>>>>>>>>> Hi,\n> >>>>>>>>>>\n> >>>>>>>>>> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>>>>\n> >>>>>>>>>>>\n> >>>>>>>>>>>\n> >>>>>>>>>>> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n> >>>>>>>>>>>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n> >>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> Hi,\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> >>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> 
>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> Hi,\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> >>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n> >>>>>>>>>>>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n> >>>>>>>>>>>>>>>>>> be already vacuumed, return immediately. If not, *then* force a stats\n> >>>>>>>>>>>>>>>>>> re-read, and check a second time.\n> >>>>>>>>>>>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n> >>>>>>>>>>>>>>>>> will be improved to first check using the previous refreshed statistics?\n> >>>>>>>>>>>>>>>>> I think that certainly works.\n> >>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>> If that's correct, I'll try to create a patch for the PoC\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> >>>>>>>>>>>>>>>> what was probably a very similar problem.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> This problem seems to be more likely to occur in cases where you have\n> >>>>>>>>>>>>>>>> a large number of tables,\n> >>>>>>>>>>>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n> >>>>>>>>>>>>>>>> the same time.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> So I followed Tom's advice and created a patch for the PoC.\n> >>>>>>>>>>>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n> >>>>>>>>>>>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> >>>>>>>>>>>>>>>> by another worker on the check after the stats have been updated.\n> >>>>>>>>>>>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n> >>>>>>>>>>>>>>>> will be required instead of using the existing statistics.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n> >>>>>>>>>>>>>>>> The tests were conducted in two cases.\n> >>>>>>>>>>>>>>>> (I changed few configurations. see attached scripts)\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> 1. Normal VACUUM case\n> >>>>>>>>>>>>>>>> - SET autovacuum = off\n> >>>>>>>>>>>>>>>> - CREATE tables with 100 rows\n> >>>>>>>>>>>>>>>> - DELETE 90 rows for each tables\n> >>>>>>>>>>>>>>>> - SET autovacuum = on and restart PostgreSQL\n> >>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> 2. 
Anti wrap round VACUUM case\n> >>>>>>>>>>>>>>>> - CREATE brank tables\n> >>>>>>>>>>>>>>>> - SELECT all of these tables (for generate stats)\n> >>>>>>>>>>>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> >>>>>>>>>>>>>>>> - Consumes a lot of XIDs by using txid_curent()\n> >>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> For each test case, the following results were obtained by changing\n> >>>>>>>>>>>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> >>>>>>>>>>>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> >>>>>>>>>>>>>>>> but I think it's enough to ask for a trend.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> ===========================================================================\n> >>>>>>>>>>>>>>>> [1.Normal VACUUM case]\n> >>>>>>>>>>>>>>>> tables:1000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> tables:5000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> tables:10000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> tables:20000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> [2.Anti wrap round VACUUM case]\n> >>>>>>>>>>>>>>>> tables:1000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> tables:5000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 
2: (HEAD) 66 sec VS (with patch) 47 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> tables:10000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> tables:20000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> >>>>>>>>>>>>>>>> ===========================================================================\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> The cases without patch, the scalability of the worker has decreased\n> >>>>>>>>>>>>>>>> as the number of tables has increased.\n> >>>>>>>>>>>>>>>> In fact, the more workers there are, the longer it takes to complete\n> >>>>>>>>>>>>>>>> VACUUM to all tables.\n> >>>>>>>>>>>>>>>> The cases with patch, it shows good scalability with respect to the\n> >>>>>>>>>>>>>>>> number of workers.\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>> It seems a good performance improvement even without the patch of\n> >>>>>>>>>>>>>>> shared memory based stats collector.\n> >>>>>>>>>>>\n> >>>>>>>>>>> Sounds great!\n> >>>>>>>>>>>\n> >>>>>>>>>>>\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n> >>>>>>>>>>>>>>>> hash_seq_search and\n> >>>>>>>>>>>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> >>>>>>>>>>>>>>>> with or without the patch.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> Therefore, there is still a need to find ways to optimize the reading\n> >>>>>>>>>>>>>>>> of large amounts of stats.\n> >>>>>>>>>>>>>>>> However, this patch is effective in its own right, and since there are\n> >>>>>>>>>>>>>>>> only a few parts to modify,\n> >>>>>>>>>>>>>>>> I think it should be able to be applied to current (preferably\n> >>>>>>>>>>>>>>>> pre-v13) PostgreSQL.\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>> +1\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>> +\n> >>>>>>>>>>>>>>> + /* We might be better to refresh stats */\n> >>>>>>>>>>>>>>> + use_existing_stats = false;\n> >>>>>>>>>>>>>>> }\n> >>>>>>>>>>>>>>> + else\n> >>>>>>>>>>>>>>> + {\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>> - heap_freetuple(classTup);\n> >>>>>>>>>>>>>>> + heap_freetuple(classTup);\n> >>>>>>>>>>>>>>> + /* The relid has already vacuumed, so we might be better to\n> >>>>>>>>>>>>>>> use exiting stats */\n> >>>>>>>>>>>>>>> + use_existing_stats = true;\n> >>>>>>>>>>>>>>> + }\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>> With that patch, the autovacuum process refreshes the stats in the\n> >>>>>>>>>>>>>>> next check if it finds out that the table still needs to be vacuumed.\n> >>>>>>>>>>>>>>> But I guess it's not necessarily true because the next 
table might be\n> >>>>>>>>>>>>>>> vacuumed already. So I think we might want to always use the existing\n> >>>>>>>>>>>>>>> for the first check. What do you think?\n> >>>>>>>>>>>>>> Thanks for your comment.\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> If we assume the case where some workers vacuum on large tables\n> >>>>>>>>>>>>>> and a single worker vacuum on small tables, the processing\n> >>>>>>>>>>>>>> performance of the single worker will be slightly lower if the\n> >>>>>>>>>>>>>> existing statistics are checked every time.\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> In fact, at first I tried to check the existing stats every time,\n> >>>>>>>>>>>>>> but the performance was slightly worse in cases with a small number of workers.\n> >>>>>>>>>>>\n> >>>>>>>>>>> Do you have this benchmark result?\n> >>>>>>>>>>>\n> >>>>>>>>>>>\n> >>>>>>>>>>>>>> (Checking the existing stats is lightweight , but at high frequency,\n> >>>>>>>>>>>>>> it affects processing performance.)\n> >>>>>>>>>>>>>> Therefore, at after refresh statistics, determine whether autovac\n> >>>>>>>>>>>>>> should use the existing statistics.\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> Yeah, since the test you used uses a lot of small tables, if there are\n> >>>>>>>>>>>>> a few workers, checking the existing stats is unlikely to return true\n> >>>>>>>>>>>>> (no need to vacuum). So the cost of existing stats check ends up being\n> >>>>>>>>>>>>> overhead. Not sure how slow always checking the existing stats was,\n> >>>>>>>>>>>>> but given that the shared memory based stats collector patch could\n> >>>>>>>>>>>>> improve the performance of refreshing stats, it might be better not to\n> >>>>>>>>>>>>> check the existing stats frequently like the patch does. Anyway, I\n> >>>>>>>>>>>>> think it’s better to evaluate the performance improvement with other\n> >>>>>>>>>>>>> cases too.\n> >>>>>>>>>>>> Yeah, I would like to see how much the performance changes in other cases.\n> >>>>>>>>>>>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n> >>>>>>>>>>>> a huge stats file, so we will just have to check the stats on\n> >>>>>>>>>>>> shared-mem every time.\n> >>>>>>>>>>>> Perhaps the logic of table_recheck_autovac could be simpler.\n> >>>>>>>>>>>>\n> >>>>>>>>>>>>>> BTW, I found some typos in comments, so attache a fixed version.\n> >>>>>>>>>>>\n> >>>>>>>>>>> The patch adds some duplicated codes into table_recheck_autovac().\n> >>>>>>>>>>> It's better to make the common function performing them and make\n> >>>>>>>>>>> table_recheck_autovac() call that common function, to simplify the code.\n> >>>>>>>>>> Thanks for your comment.\n> >>>>>>>>>> Hmm.. I've cut out the duplicate part.\n> >>>>>>>>>> Attach the patch.\n> >>>>>>>>>> Could you confirm that it fits your expecting?\n> >>>>>>>>>\n> >>>>>>>>> Yes, thanks for updataing the patch! Here are another review comments.\n> >>>>>>>>>\n> >>>>>>>>> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n> >>>>>>>>> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n> >>>>>>>>>\n> >>>>>>>>> When using the existing stats, ISTM that these are not necessary and\n> >>>>>>>>> we can reuse \"shared\" and \"dbentry\" obtained before. 
Right?\n> >>>>>>>> Yeah, but unless autovac_refresh_stats() is called, these functions\n> >>>>>>>> read the information from the\n> >>>>>>>> local hash table without re-read the stats file, so the process is very light.\n> >>>>>>>> Therefore, I think, it is better to keep the current logic to keep the\n> >>>>>>>> code simple.\n> >>>>>>>>\n> >>>>>>>>>\n> >>>>>>>>> + /* We might be better to refresh stats */\n> >>>>>>>>> + use_existing_stats = false;\n> >>>>>>>>>\n> >>>>>>>>> I think that we should add more comments about why it's better to\n> >>>>>>>>> refresh the stats in this case.\n> >>>>>>>>>\n> >>>>>>>>> + /* The relid has already vacuumed, so we might be better to use existing stats */\n> >>>>>>>>> + use_existing_stats = true;\n> >>>>>>>>>\n> >>>>>>>>> I think that we should add more comments about why it's better to\n> >>>>>>>>> reuse the stats in this case.\n> >>>>>>>> I added comments.\n> >>>>>>>>\n> >>>>>>>> Attache the patch.\n> >>>>>>>>\n> >>>>>>>\n> >>>>>>> Thank you for updating the patch. Here are some small comments on the\n> >>>>>>> latest (v4) patch.\n> >>>>>>>\n> >>>>>>> + * So if the last time we checked a table that was already vacuumed after\n> >>>>>>> + * refres stats, check the current statistics before refreshing it.\n> >>>>>>> + */\n> >>>>>>>\n> >>>>>>> s/refres/refresh/\n> >>>>> Thanks! fixed.\n> >>>>> Attached the patch.\n> >>>>>\n> >>>>>>>\n> >>>>>>> -----\n> >>>>>>> +/* Counter to determine if statistics should be refreshed */\n> >>>>>>> +static bool use_existing_stats = false;\n> >>>>>>> +\n> >>>>>>>\n> >>>>>>> I think 'use_existing_stats' can be declared within table_recheck_autovac().\n> >>>>>>>\n> >>>>>>> -----\n> >>>>>>> While testing the performance, I realized that the statistics are\n> >>>>>>> reset every time vacuumed one table, leading to re-reading the stats\n> >>>>>>> file even if 'use_existing_stats' is true. Please refer that vacuum()\n> >>>>>>> eventually calls AtEOXact_PgStat() which calls to\n> >>>>>>> pgstat_clear_snapshot().\n> >>>>>>\n> >>>>>> Good catch!\n> >>>>>>\n> >>>>>>\n> >>>>>>> I believe that's why the performance of the\n> >>>>>>> method of always checking the existing stats wasn’t good as expected.\n> >>>>>>> So if we save the statistics somewhere and use it for rechecking, the\n> >>>>>>> results of the performance benchmark will differ between these two\n> >>>>>>> methods.\n> >>>>> Thanks for you checks.\n> >>>>> But, if a worker did vacuum(), that means this worker had determined\n> >>>>> need vacuum in the\n> >>>>> table_recheck_autovac(). So, use_existing_stats set to false, and next\n> >>>>> time, refresh stats.\n> >>>>> Therefore I think the current patch is fine, as we want to avoid\n> >>>>> unnecessary refreshing of\n> >>>>> statistics before the actual vacuum(), right?\n> >>>>\n> >>>> Yes, you're right.\n> >>>>\n> >>>> When I benchmarked the performance of the method of always checking\n> >>>> existing stats I edited your patch so that it sets use_existing_stats\n> >>>> = true even if the second check is false (i.g., vacuum is needed).\n> >>>> And the result I got was worse than expected especially in the case of\n> >>>> a few autovacuum workers. But it doesn't evaluate the performance of\n> >>>> that method rightly as the stats snapshot is cleared every time\n> >>>> vacuum. Given you had similar results, I guess you used a similar way\n> >>>> when evaluating it, is it right? 
If so, it’s better to fix this issue\n> >>>> and see how the performance benchmark results will differ.\n> >>>>\n> >>>> For example, the results of the test case with 10000 tables and 1\n> >>>> autovacuum worker I reported before was:\n> >>>>\n> >>>> 10000 tables:\n> >>>> autovac_workers 1 : 158s,157s, 290s\n> >>>>\n> >>>> But after fixing that issue in the third method (always checking the\n> >>>> existing stats), the results are:\n> >>>\n> >>> Could you tell me how you fixed that issue? You copied the stats to\n> >>> somewhere as you suggested or skipped pgstat_clear_snapshot() as\n> >>> I suggested?\n> >>\n> >> I used the way you suggested in this quick test; skipped\n> >> pgstat_clear_snapshot().\n> >>\n> >>>\n> >>> Kasahara-san seems not to like the latter idea because it might\n> >>> cause bad side effect. So we should use the former idea?\n> >>\n> >> Not sure. I'm also concerned about the side effect but I've not checked yet.\n> >>\n> >> Since probably there is no big difference between the two ways in\n> >> terms of performance I'm going to see how the performance benchmark\n> >> result will change first.\n> >\n> > I've tested performance improvement again. From the left the execution\n> > time of the current HEAD, Kasahara-san's patch, the method of always\n> > checking the existing stats (using approach suggested by Fujii-san),\n> > in seconds.\n> >\n> > 1000 tables:\n> > autovac_workers 1 : 13s, 13s, 13s\n> > autovac_workers 2 : 6s, 4s, 4s\n> > autovac_workers 3 : 3s, 4s, 3s\n> > autovac_workers 5 : 3s, 3s, 2s\n> > autovac_workers 10: 2s, 3s, 2s\n> >\n> > 5000 tables:\n> > autovac_workers 1 : 71s, 71s, 72s\n> > autovac_workers 2 : 37s, 32s, 32s\n> > autovac_workers 3 : 29s, 26s, 26s\n> > autovac_workers 5 : 20s, 19s, 18s\n> > autovac_workers 10: 13s, 8s, 8s\n> >\n> > 10000 tables:\n> > autovac_workers 1 : 158s,157s, 159s\n> > autovac_workers 2 : 80s, 53s, 78s\n> > autovac_workers 3 : 75s, 67s, 67s\n> > autovac_workers 5 : 61s, 42s, 42s\n> > autovac_workers 10: 69s, 26s, 25s\n> >\n> > 20000 tables:\n> > autovac_workers 1 : 379s, 380s, 389s\n> > autovac_workers 2 : 236s, 232s, 233s\n> > autovac_workers 3 : 222s, 181s, 182s\n> > autovac_workers 5 : 212s, 132s, 139s\n> > autovac_workers 10: 317s, 91s, 89s\n> >\n> > I don't see a big difference between Kasahara-san's patch and the\n> > method of always checking the existing stats.\nThanks!\n\n\n> Thanks for doing the benchmark!\n>\n> This benchmark result makes me think that we don't need to tweak\n> AtEOXact_PgStat() and can use Kasahara-san approach.\n> That's good news :)\n>\n> + /*\n> + * The relid had not yet been vacuumed. That means, it is unlikely that the\n> + * stats that this worker currently has are updated by other worker's.\n> + * So we might be better to refresh the stats in the next this recheck.\n> + */\n> + use_existing_stats = false;\n>\n> I think that this comment should be changed to something like\n> the following. Thought?\nI think your comment is more reasonable.\nI replaced the comments.\n\n>\n> When we decide to do vacuum or analyze, the existing stats cannot\n> be reused in the next cycle because it's cleared at the end of vacuum\n> or analyze (by AtEOXact_PgStat()).\n>\n> + /*\n> + * The relid had already vacuumed. 
That means, that for the stats that this\n> + * worker currently has, the info of tables that this worker will process may\n> + * have been updated by other workers with information that has already been\n> + * vacuumed or analyzed.\n> + * So we might be better to reuse the existing stats in the next this recheck.\n> + */\n> + use_existing_stats = true;\n>\n> Maybe it's better to change this comment to something like the following?\nI replaced the comments.\n\n\n> If neither vacuum nor analyze is necessary, the existing stats is\n> not cleared and can be reused in the next cycle.\n>\n> + if (use_existing_stats)\n> + {\n> + recheck_relation_needs_vacanalyze(relid, classForm, avopts,\n> + effective_multixact_freeze_max_age,\n> + &dovacuum, &doanalyze, &wraparound);\n>\n> Personally I'd like to add the assertion test checking \"pgStatDBHash != NULL\"\n> here, to guarantee that there is the existing stats to reuse when\n> use_existing_stats==true. Because if the future changes of autovacuum\n> code will break that assumption, it's not easy to detect that breakage\n> without that assertion test. Thought?\nI think, it's nice to have.\nBut if do so, we have to add new function to pgstat.c for check\npgStatDBHash is null or not.\nI'm not sure it's a reasonable change.\nAnd, if pgstatDBHash is NULL here, it is not a critical issue, so\nforegoing the addition of the Assert for now.\n\n> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n>\n> If classForm->relisshared is true, only the former needs to be executed.\n> Otherwise, only the latter needs to be executed. Right?\nRight.\nI modified that check classForm->relisshared to execute only one of them.\n\nAttached the patch.\n\nBest regards,\n\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com", "msg_date": "Wed, 2 Dec 2020 18:50:04 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "On Wed, Dec 2, 2020 at 3:33 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/12/02 12:53, Masahiko Sawada wrote:\n> > On Tue, Dec 1, 2020 at 5:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>\n> >> On Tue, Dec 1, 2020 at 4:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>\n> >>>\n> >>>\n> >>> On 2020/12/01 16:23, Masahiko Sawada wrote:\n> >>>> On Tue, Dec 1, 2020 at 1:48 PM Kasahara Tatsuhito\n> >>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>\n> >>>>> Hi,\n> >>>>>\n> >>>>> On Mon, Nov 30, 2020 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>\n> >>>>>>\n> >>>>>>\n> >>>>>> On 2020/11/30 10:43, Masahiko Sawada wrote:\n> >>>>>>> On Sun, Nov 29, 2020 at 10:34 PM Kasahara Tatsuhito\n> >>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>\n> >>>>>>>> Hi, Thanks for you comments.\n> >>>>>>>>\n> >>>>>>>> On Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>>\n> >>>>>>>>>\n> >>>>>>>>>\n> >>>>>>>>> On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n> >>>>>>>>>> Hi,\n> >>>>>>>>>>\n> >>>>>>>>>> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>>>>\n> >>>>>>>>>>>\n> >>>>>>>>>>>\n> >>>>>>>>>>> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n> >>>>>>>>>>>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada 
<sawada.mshk@gmail.com> wrote:\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n> >>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> Hi,\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> >>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> Hi,\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> >>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n> >>>>>>>>>>>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n> >>>>>>>>>>>>>>>>>> be already vacuumed, return immediately. If not, *then* force a stats\n> >>>>>>>>>>>>>>>>>> re-read, and check a second time.\n> >>>>>>>>>>>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n> >>>>>>>>>>>>>>>>> will be improved to first check using the previous refreshed statistics?\n> >>>>>>>>>>>>>>>>> I think that certainly works.\n> >>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>> If that's correct, I'll try to create a patch for the PoC\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> >>>>>>>>>>>>>>>> what was probably a very similar problem.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> This problem seems to be more likely to occur in cases where you have\n> >>>>>>>>>>>>>>>> a large number of tables,\n> >>>>>>>>>>>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n> >>>>>>>>>>>>>>>> the same time.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> So I followed Tom's advice and created a patch for the PoC.\n> >>>>>>>>>>>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n> >>>>>>>>>>>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> >>>>>>>>>>>>>>>> by another worker on the check after the stats have been updated.\n> >>>>>>>>>>>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n> >>>>>>>>>>>>>>>> will be required instead of using the existing statistics.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n> >>>>>>>>>>>>>>>> The tests were conducted in two cases.\n> >>>>>>>>>>>>>>>> (I changed few configurations. see attached scripts)\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> 1. Normal VACUUM case\n> >>>>>>>>>>>>>>>> - SET autovacuum = off\n> >>>>>>>>>>>>>>>> - CREATE tables with 100 rows\n> >>>>>>>>>>>>>>>> - DELETE 90 rows for each tables\n> >>>>>>>>>>>>>>>> - SET autovacuum = on and restart PostgreSQL\n> >>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> 2. 
Anti wrap round VACUUM case\n> >>>>>>>>>>>>>>>> - CREATE brank tables\n> >>>>>>>>>>>>>>>> - SELECT all of these tables (for generate stats)\n> >>>>>>>>>>>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> >>>>>>>>>>>>>>>> - Consumes a lot of XIDs by using txid_curent()\n> >>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> For each test case, the following results were obtained by changing\n> >>>>>>>>>>>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> >>>>>>>>>>>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> >>>>>>>>>>>>>>>> but I think it's enough to ask for a trend.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> ===========================================================================\n> >>>>>>>>>>>>>>>> [1.Normal VACUUM case]\n> >>>>>>>>>>>>>>>> tables:1000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> tables:5000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> tables:10000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> tables:20000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> [2.Anti wrap round VACUUM case]\n> >>>>>>>>>>>>>>>> tables:1000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> tables:5000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 
2: (HEAD) 66 sec VS (with patch) 47 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> tables:10000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> tables:20000\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> >>>>>>>>>>>>>>>> ===========================================================================\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> The cases without patch, the scalability of the worker has decreased\n> >>>>>>>>>>>>>>>> as the number of tables has increased.\n> >>>>>>>>>>>>>>>> In fact, the more workers there are, the longer it takes to complete\n> >>>>>>>>>>>>>>>> VACUUM to all tables.\n> >>>>>>>>>>>>>>>> The cases with patch, it shows good scalability with respect to the\n> >>>>>>>>>>>>>>>> number of workers.\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>> It seems a good performance improvement even without the patch of\n> >>>>>>>>>>>>>>> shared memory based stats collector.\n> >>>>>>>>>>>\n> >>>>>>>>>>> Sounds great!\n> >>>>>>>>>>>\n> >>>>>>>>>>>\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n> >>>>>>>>>>>>>>>> hash_seq_search and\n> >>>>>>>>>>>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> >>>>>>>>>>>>>>>> with or without the patch.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> Therefore, there is still a need to find ways to optimize the reading\n> >>>>>>>>>>>>>>>> of large amounts of stats.\n> >>>>>>>>>>>>>>>> However, this patch is effective in its own right, and since there are\n> >>>>>>>>>>>>>>>> only a few parts to modify,\n> >>>>>>>>>>>>>>>> I think it should be able to be applied to current (preferably\n> >>>>>>>>>>>>>>>> pre-v13) PostgreSQL.\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>> +1\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>> +\n> >>>>>>>>>>>>>>> + /* We might be better to refresh stats */\n> >>>>>>>>>>>>>>> + use_existing_stats = false;\n> >>>>>>>>>>>>>>> }\n> >>>>>>>>>>>>>>> + else\n> >>>>>>>>>>>>>>> + {\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>> - heap_freetuple(classTup);\n> >>>>>>>>>>>>>>> + heap_freetuple(classTup);\n> >>>>>>>>>>>>>>> + /* The relid has already vacuumed, so we might be better to\n> >>>>>>>>>>>>>>> use exiting stats */\n> >>>>>>>>>>>>>>> + use_existing_stats = true;\n> >>>>>>>>>>>>>>> + }\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>> With that patch, the autovacuum process refreshes the stats in the\n> >>>>>>>>>>>>>>> next check if it finds out that the table still needs to be vacuumed.\n> >>>>>>>>>>>>>>> But I guess it's not necessarily true because the next 
table might be\n> >>>>>>>>>>>>>>> vacuumed already. So I think we might want to always use the existing\n> >>>>>>>>>>>>>>> for the first check. What do you think?\n> >>>>>>>>>>>>>> Thanks for your comment.\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> If we assume the case where some workers vacuum on large tables\n> >>>>>>>>>>>>>> and a single worker vacuum on small tables, the processing\n> >>>>>>>>>>>>>> performance of the single worker will be slightly lower if the\n> >>>>>>>>>>>>>> existing statistics are checked every time.\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> In fact, at first I tried to check the existing stats every time,\n> >>>>>>>>>>>>>> but the performance was slightly worse in cases with a small number of workers.\n> >>>>>>>>>>>\n> >>>>>>>>>>> Do you have this benchmark result?\n> >>>>>>>>>>>\n> >>>>>>>>>>>\n> >>>>>>>>>>>>>> (Checking the existing stats is lightweight , but at high frequency,\n> >>>>>>>>>>>>>> it affects processing performance.)\n> >>>>>>>>>>>>>> Therefore, at after refresh statistics, determine whether autovac\n> >>>>>>>>>>>>>> should use the existing statistics.\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> Yeah, since the test you used uses a lot of small tables, if there are\n> >>>>>>>>>>>>> a few workers, checking the existing stats is unlikely to return true\n> >>>>>>>>>>>>> (no need to vacuum). So the cost of existing stats check ends up being\n> >>>>>>>>>>>>> overhead. Not sure how slow always checking the existing stats was,\n> >>>>>>>>>>>>> but given that the shared memory based stats collector patch could\n> >>>>>>>>>>>>> improve the performance of refreshing stats, it might be better not to\n> >>>>>>>>>>>>> check the existing stats frequently like the patch does. Anyway, I\n> >>>>>>>>>>>>> think it’s better to evaluate the performance improvement with other\n> >>>>>>>>>>>>> cases too.\n> >>>>>>>>>>>> Yeah, I would like to see how much the performance changes in other cases.\n> >>>>>>>>>>>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n> >>>>>>>>>>>> a huge stats file, so we will just have to check the stats on\n> >>>>>>>>>>>> shared-mem every time.\n> >>>>>>>>>>>> Perhaps the logic of table_recheck_autovac could be simpler.\n> >>>>>>>>>>>>\n> >>>>>>>>>>>>>> BTW, I found some typos in comments, so attache a fixed version.\n> >>>>>>>>>>>\n> >>>>>>>>>>> The patch adds some duplicated codes into table_recheck_autovac().\n> >>>>>>>>>>> It's better to make the common function performing them and make\n> >>>>>>>>>>> table_recheck_autovac() call that common function, to simplify the code.\n> >>>>>>>>>> Thanks for your comment.\n> >>>>>>>>>> Hmm.. I've cut out the duplicate part.\n> >>>>>>>>>> Attach the patch.\n> >>>>>>>>>> Could you confirm that it fits your expecting?\n> >>>>>>>>>\n> >>>>>>>>> Yes, thanks for updataing the patch! Here are another review comments.\n> >>>>>>>>>\n> >>>>>>>>> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n> >>>>>>>>> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n> >>>>>>>>>\n> >>>>>>>>> When using the existing stats, ISTM that these are not necessary and\n> >>>>>>>>> we can reuse \"shared\" and \"dbentry\" obtained before. 
Right?\n> >>>>>>>> Yeah, but unless autovac_refresh_stats() is called, these functions\n> >>>>>>>> read the information from the\n> >>>>>>>> local hash table without re-read the stats file, so the process is very light.\n> >>>>>>>> Therefore, I think, it is better to keep the current logic to keep the\n> >>>>>>>> code simple.\n> >>>>>>>>\n> >>>>>>>>>\n> >>>>>>>>> + /* We might be better to refresh stats */\n> >>>>>>>>> + use_existing_stats = false;\n> >>>>>>>>>\n> >>>>>>>>> I think that we should add more comments about why it's better to\n> >>>>>>>>> refresh the stats in this case.\n> >>>>>>>>>\n> >>>>>>>>> + /* The relid has already vacuumed, so we might be better to use existing stats */\n> >>>>>>>>> + use_existing_stats = true;\n> >>>>>>>>>\n> >>>>>>>>> I think that we should add more comments about why it's better to\n> >>>>>>>>> reuse the stats in this case.\n> >>>>>>>> I added comments.\n> >>>>>>>>\n> >>>>>>>> Attache the patch.\n> >>>>>>>>\n> >>>>>>>\n> >>>>>>> Thank you for updating the patch. Here are some small comments on the\n> >>>>>>> latest (v4) patch.\n> >>>>>>>\n> >>>>>>> + * So if the last time we checked a table that was already vacuumed after\n> >>>>>>> + * refres stats, check the current statistics before refreshing it.\n> >>>>>>> + */\n> >>>>>>>\n> >>>>>>> s/refres/refresh/\n> >>>>> Thanks! fixed.\n> >>>>> Attached the patch.\n> >>>>>\n> >>>>>>>\n> >>>>>>> -----\n> >>>>>>> +/* Counter to determine if statistics should be refreshed */\n> >>>>>>> +static bool use_existing_stats = false;\n> >>>>>>> +\n> >>>>>>>\n> >>>>>>> I think 'use_existing_stats' can be declared within table_recheck_autovac().\n> >>>>>>>\n> >>>>>>> -----\n> >>>>>>> While testing the performance, I realized that the statistics are\n> >>>>>>> reset every time vacuumed one table, leading to re-reading the stats\n> >>>>>>> file even if 'use_existing_stats' is true. Please refer that vacuum()\n> >>>>>>> eventually calls AtEOXact_PgStat() which calls to\n> >>>>>>> pgstat_clear_snapshot().\n> >>>>>>\n> >>>>>> Good catch!\n> >>>>>>\n> >>>>>>\n> >>>>>>> I believe that's why the performance of the\n> >>>>>>> method of always checking the existing stats wasn’t good as expected.\n> >>>>>>> So if we save the statistics somewhere and use it for rechecking, the\n> >>>>>>> results of the performance benchmark will differ between these two\n> >>>>>>> methods.\n> >>>>> Thanks for you checks.\n> >>>>> But, if a worker did vacuum(), that means this worker had determined\n> >>>>> need vacuum in the\n> >>>>> table_recheck_autovac(). So, use_existing_stats set to false, and next\n> >>>>> time, refresh stats.\n> >>>>> Therefore I think the current patch is fine, as we want to avoid\n> >>>>> unnecessary refreshing of\n> >>>>> statistics before the actual vacuum(), right?\n> >>>>\n> >>>> Yes, you're right.\n> >>>>\n> >>>> When I benchmarked the performance of the method of always checking\n> >>>> existing stats I edited your patch so that it sets use_existing_stats\n> >>>> = true even if the second check is false (i.g., vacuum is needed).\n> >>>> And the result I got was worse than expected especially in the case of\n> >>>> a few autovacuum workers. But it doesn't evaluate the performance of\n> >>>> that method rightly as the stats snapshot is cleared every time\n> >>>> vacuum. Given you had similar results, I guess you used a similar way\n> >>>> when evaluating it, is it right? 
If so, it’s better to fix this issue\n> >>>> and see how the performance benchmark results will differ.\n> >>>>\n> >>>> For example, the results of the test case with 10000 tables and 1\n> >>>> autovacuum worker I reported before was:\n> >>>>\n> >>>> 10000 tables:\n> >>>> autovac_workers 1 : 158s,157s, 290s\n> >>>>\n> >>>> But after fixing that issue in the third method (always checking the\n> >>>> existing stats), the results are:\n> >>>\n> >>> Could you tell me how you fixed that issue? You copied the stats to\n> >>> somewhere as you suggested or skipped pgstat_clear_snapshot() as\n> >>> I suggested?\n> >>\n> >> I used the way you suggested in this quick test; skipped\n> >> pgstat_clear_snapshot().\n> >>\n> >>>\n> >>> Kasahara-san seems not to like the latter idea because it might\n> >>> cause bad side effect. So we should use the former idea?\n> >>\n> >> Not sure. I'm also concerned about the side effect but I've not checked yet.\n> >>\n> >> Since probably there is no big difference between the two ways in\n> >> terms of performance I'm going to see how the performance benchmark\n> >> result will change first.\n> >\n> > I've tested performance improvement again. From the left the execution\n> > time of the current HEAD, Kasahara-san's patch, the method of always\n> > checking the existing stats (using approach suggested by Fujii-san),\n> > in seconds.\n> >\n> > 1000 tables:\n> > autovac_workers 1 : 13s, 13s, 13s\n> > autovac_workers 2 : 6s, 4s, 4s\n> > autovac_workers 3 : 3s, 4s, 3s\n> > autovac_workers 5 : 3s, 3s, 2s\n> > autovac_workers 10: 2s, 3s, 2s\n> >\n> > 5000 tables:\n> > autovac_workers 1 : 71s, 71s, 72s\n> > autovac_workers 2 : 37s, 32s, 32s\n> > autovac_workers 3 : 29s, 26s, 26s\n> > autovac_workers 5 : 20s, 19s, 18s\n> > autovac_workers 10: 13s, 8s, 8s\n> >\n> > 10000 tables:\n> > autovac_workers 1 : 158s,157s, 159s\n> > autovac_workers 2 : 80s, 53s, 78s\n> > autovac_workers 3 : 75s, 67s, 67s\n> > autovac_workers 5 : 61s, 42s, 42s\n> > autovac_workers 10: 69s, 26s, 25s\n> >\n> > 20000 tables:\n> > autovac_workers 1 : 379s, 380s, 389s\n> > autovac_workers 2 : 236s, 232s, 233s\n> > autovac_workers 3 : 222s, 181s, 182s\n> > autovac_workers 5 : 212s, 132s, 139s\n> > autovac_workers 10: 317s, 91s, 89s\n> >\n> > I don't see a big difference between Kasahara-san's patch and the\n> > method of always checking the existing stats.\n>\n> Thanks for doing the benchmark!\n>\n> This benchmark result makes me think that we don't need to tweak\n> AtEOXact_PgStat() and can use Kasahara-san approach.\n> That's good news :)\n\nYeah, given that all autovaucum workers have the list of tables to\nvacuum in the same order in most cases, the assumption in\nKasahara-san’s patch that if a worker needs to vacuum a table it’s\nunlikely that it will be able to skip the next table using the current\nsnapshot of stats makes sense to me.\n\nOne small comment on v6 patch:\n\n+ /* When we decide to do vacuum or analyze, the existing stats cannot\n+ * be reused in the next cycle because it's cleared at the end of vacuum\n+ * or analyze (by AtEOXact_PgStat()).\n+ */\n+ use_existing_stats = false;\n\nI think the comment should start on the second line (i.g., \\n is\nneeded after /*).\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 2 Dec 2020 19:10:57 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "On Wed, Dec 2, 2020 at 7:11 PM 
Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Dec 2, 2020 at 3:33 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> >\n> >\n> > On 2020/12/02 12:53, Masahiko Sawada wrote:\n> > > On Tue, Dec 1, 2020 at 5:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >>\n> > >> On Tue, Dec 1, 2020 at 4:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > >>>\n> > >>>\n> > >>>\n> > >>> On 2020/12/01 16:23, Masahiko Sawada wrote:\n> > >>>> On Tue, Dec 1, 2020 at 1:48 PM Kasahara Tatsuhito\n> > >>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>\n> > >>>>> Hi,\n> > >>>>>\n> > >>>>> On Mon, Nov 30, 2020 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > >>>>>>\n> > >>>>>>\n> > >>>>>>\n> > >>>>>> On 2020/11/30 10:43, Masahiko Sawada wrote:\n> > >>>>>>> On Sun, Nov 29, 2020 at 10:34 PM Kasahara Tatsuhito\n> > >>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>>>>\n> > >>>>>>>> Hi, Thanks for you comments.\n> > >>>>>>>>\n> > >>>>>>>> On Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > >>>>>>>>>\n> > >>>>>>>>>\n> > >>>>>>>>>\n> > >>>>>>>>> On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n> > >>>>>>>>>> Hi,\n> > >>>>>>>>>>\n> > >>>>>>>>>> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > >>>>>>>>>>>\n> > >>>>>>>>>>>\n> > >>>>>>>>>>>\n> > >>>>>>>>>>> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n> > >>>>>>>>>>>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n> > >>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>> Hi,\n> > >>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> > >>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> Hi,\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> > >>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> > >>>>>>>>>>>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n> > >>>>>>>>>>>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n> > >>>>>>>>>>>>>>>>>> be already vacuumed, return immediately. 
If not, *then* force a stats\n> > >>>>>>>>>>>>>>>>>> re-read, and check a second time.\n> > >>>>>>>>>>>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n> > >>>>>>>>>>>>>>>>> will be improved to first check using the previous refreshed statistics?\n> > >>>>>>>>>>>>>>>>> I think that certainly works.\n> > >>>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>>> If that's correct, I'll try to create a patch for the PoC\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> > >>>>>>>>>>>>>>>> what was probably a very similar problem.\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> This problem seems to be more likely to occur in cases where you have\n> > >>>>>>>>>>>>>>>> a large number of tables,\n> > >>>>>>>>>>>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n> > >>>>>>>>>>>>>>>> the same time.\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> So I followed Tom's advice and created a patch for the PoC.\n> > >>>>>>>>>>>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n> > >>>>>>>>>>>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> > >>>>>>>>>>>>>>>> by another worker on the check after the stats have been updated.\n> > >>>>>>>>>>>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n> > >>>>>>>>>>>>>>>> will be required instead of using the existing statistics.\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n> > >>>>>>>>>>>>>>>> The tests were conducted in two cases.\n> > >>>>>>>>>>>>>>>> (I changed few configurations. see attached scripts)\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> 1. Normal VACUUM case\n> > >>>>>>>>>>>>>>>> - SET autovacuum = off\n> > >>>>>>>>>>>>>>>> - CREATE tables with 100 rows\n> > >>>>>>>>>>>>>>>> - DELETE 90 rows for each tables\n> > >>>>>>>>>>>>>>>> - SET autovacuum = on and restart PostgreSQL\n> > >>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> 2. 
Anti wrap round VACUUM case\n> > >>>>>>>>>>>>>>>> - CREATE brank tables\n> > >>>>>>>>>>>>>>>> - SELECT all of these tables (for generate stats)\n> > >>>>>>>>>>>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> > >>>>>>>>>>>>>>>> - Consumes a lot of XIDs by using txid_curent()\n> > >>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> For each test case, the following results were obtained by changing\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> > >>>>>>>>>>>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> > >>>>>>>>>>>>>>>> but I think it's enough to ask for a trend.\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> ===========================================================================\n> > >>>>>>>>>>>>>>>> [1.Normal VACUUM case]\n> > >>>>>>>>>>>>>>>> tables:1000\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> tables:5000\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> tables:10000\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> tables:20000\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> [2.Anti wrap round VACUUM case]\n> > >>>>>>>>>>>>>>>> tables:1000\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> tables:5000\n> > >>>>>>>>>>>>>>>> 
autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> tables:10000\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> tables:20000\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> > >>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> > >>>>>>>>>>>>>>>> ===========================================================================\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> The cases without patch, the scalability of the worker has decreased\n> > >>>>>>>>>>>>>>>> as the number of tables has increased.\n> > >>>>>>>>>>>>>>>> In fact, the more workers there are, the longer it takes to complete\n> > >>>>>>>>>>>>>>>> VACUUM to all tables.\n> > >>>>>>>>>>>>>>>> The cases with patch, it shows good scalability with respect to the\n> > >>>>>>>>>>>>>>>> number of workers.\n> > >>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>> It seems a good performance improvement even without the patch of\n> > >>>>>>>>>>>>>>> shared memory based stats collector.\n> > >>>>>>>>>>>\n> > >>>>>>>>>>> Sounds great!\n> > >>>>>>>>>>>\n> > >>>>>>>>>>>\n> > >>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n> > >>>>>>>>>>>>>>>> hash_seq_search and\n> > >>>>>>>>>>>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> > >>>>>>>>>>>>>>>> with or without the patch.\n> > >>>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>>> Therefore, there is still a need to find ways to optimize the reading\n> > >>>>>>>>>>>>>>>> of large amounts of stats.\n> > >>>>>>>>>>>>>>>> However, this patch is effective in its own right, and since there are\n> > >>>>>>>>>>>>>>>> only a few parts to modify,\n> > >>>>>>>>>>>>>>>> I think it should be able to be applied to current (preferably\n> > >>>>>>>>>>>>>>>> pre-v13) PostgreSQL.\n> > >>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>> +1\n> > >>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>> +\n> > >>>>>>>>>>>>>>> + /* We might be better to refresh stats */\n> > >>>>>>>>>>>>>>> + use_existing_stats = false;\n> > >>>>>>>>>>>>>>> }\n> > >>>>>>>>>>>>>>> + else\n> > >>>>>>>>>>>>>>> + {\n> > >>>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>>> - heap_freetuple(classTup);\n> > >>>>>>>>>>>>>>> + heap_freetuple(classTup);\n> > >>>>>>>>>>>>>>> + /* The relid has already vacuumed, so we might be better to\n> > >>>>>>>>>>>>>>> use exiting stats */\n> > >>>>>>>>>>>>>>> + use_existing_stats = true;\n> > >>>>>>>>>>>>>>> + }\n> > >>>>>>>>>>>>>>>\n> > 
>>>>>>>>>>>>>>> With that patch, the autovacuum process refreshes the stats in the\n> > >>>>>>>>>>>>>>> next check if it finds out that the table still needs to be vacuumed.\n> > >>>>>>>>>>>>>>> But I guess it's not necessarily true because the next table might be\n> > >>>>>>>>>>>>>>> vacuumed already. So I think we might want to always use the existing\n> > >>>>>>>>>>>>>>> for the first check. What do you think?\n> > >>>>>>>>>>>>>> Thanks for your comment.\n> > >>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>> If we assume the case where some workers vacuum on large tables\n> > >>>>>>>>>>>>>> and a single worker vacuum on small tables, the processing\n> > >>>>>>>>>>>>>> performance of the single worker will be slightly lower if the\n> > >>>>>>>>>>>>>> existing statistics are checked every time.\n> > >>>>>>>>>>>>>>\n> > >>>>>>>>>>>>>> In fact, at first I tried to check the existing stats every time,\n> > >>>>>>>>>>>>>> but the performance was slightly worse in cases with a small number of workers.\n> > >>>>>>>>>>>\n> > >>>>>>>>>>> Do you have this benchmark result?\n> > >>>>>>>>>>>\n> > >>>>>>>>>>>\n> > >>>>>>>>>>>>>> (Checking the existing stats is lightweight , but at high frequency,\n> > >>>>>>>>>>>>>> it affects processing performance.)\n> > >>>>>>>>>>>>>> Therefore, at after refresh statistics, determine whether autovac\n> > >>>>>>>>>>>>>> should use the existing statistics.\n> > >>>>>>>>>>>>>\n> > >>>>>>>>>>>>> Yeah, since the test you used uses a lot of small tables, if there are\n> > >>>>>>>>>>>>> a few workers, checking the existing stats is unlikely to return true\n> > >>>>>>>>>>>>> (no need to vacuum). So the cost of existing stats check ends up being\n> > >>>>>>>>>>>>> overhead. Not sure how slow always checking the existing stats was,\n> > >>>>>>>>>>>>> but given that the shared memory based stats collector patch could\n> > >>>>>>>>>>>>> improve the performance of refreshing stats, it might be better not to\n> > >>>>>>>>>>>>> check the existing stats frequently like the patch does. Anyway, I\n> > >>>>>>>>>>>>> think it’s better to evaluate the performance improvement with other\n> > >>>>>>>>>>>>> cases too.\n> > >>>>>>>>>>>> Yeah, I would like to see how much the performance changes in other cases.\n> > >>>>>>>>>>>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n> > >>>>>>>>>>>> a huge stats file, so we will just have to check the stats on\n> > >>>>>>>>>>>> shared-mem every time.\n> > >>>>>>>>>>>> Perhaps the logic of table_recheck_autovac could be simpler.\n> > >>>>>>>>>>>>\n> > >>>>>>>>>>>>>> BTW, I found some typos in comments, so attache a fixed version.\n> > >>>>>>>>>>>\n> > >>>>>>>>>>> The patch adds some duplicated codes into table_recheck_autovac().\n> > >>>>>>>>>>> It's better to make the common function performing them and make\n> > >>>>>>>>>>> table_recheck_autovac() call that common function, to simplify the code.\n> > >>>>>>>>>> Thanks for your comment.\n> > >>>>>>>>>> Hmm.. I've cut out the duplicate part.\n> > >>>>>>>>>> Attach the patch.\n> > >>>>>>>>>> Could you confirm that it fits your expecting?\n> > >>>>>>>>>\n> > >>>>>>>>> Yes, thanks for updataing the patch! Here are another review comments.\n> > >>>>>>>>>\n> > >>>>>>>>> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n> > >>>>>>>>> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n> > >>>>>>>>>\n> > >>>>>>>>> When using the existing stats, ISTM that these are not necessary and\n> > >>>>>>>>> we can reuse \"shared\" and \"dbentry\" obtained before. 
Right?\n> > >>>>>>>> Yeah, but unless autovac_refresh_stats() is called, these functions\n> > >>>>>>>> read the information from the\n> > >>>>>>>> local hash table without re-read the stats file, so the process is very light.\n> > >>>>>>>> Therefore, I think, it is better to keep the current logic to keep the\n> > >>>>>>>> code simple.\n> > >>>>>>>>\n> > >>>>>>>>>\n> > >>>>>>>>> + /* We might be better to refresh stats */\n> > >>>>>>>>> + use_existing_stats = false;\n> > >>>>>>>>>\n> > >>>>>>>>> I think that we should add more comments about why it's better to\n> > >>>>>>>>> refresh the stats in this case.\n> > >>>>>>>>>\n> > >>>>>>>>> + /* The relid has already vacuumed, so we might be better to use existing stats */\n> > >>>>>>>>> + use_existing_stats = true;\n> > >>>>>>>>>\n> > >>>>>>>>> I think that we should add more comments about why it's better to\n> > >>>>>>>>> reuse the stats in this case.\n> > >>>>>>>> I added comments.\n> > >>>>>>>>\n> > >>>>>>>> Attache the patch.\n> > >>>>>>>>\n> > >>>>>>>\n> > >>>>>>> Thank you for updating the patch. Here are some small comments on the\n> > >>>>>>> latest (v4) patch.\n> > >>>>>>>\n> > >>>>>>> + * So if the last time we checked a table that was already vacuumed after\n> > >>>>>>> + * refres stats, check the current statistics before refreshing it.\n> > >>>>>>> + */\n> > >>>>>>>\n> > >>>>>>> s/refres/refresh/\n> > >>>>> Thanks! fixed.\n> > >>>>> Attached the patch.\n> > >>>>>\n> > >>>>>>>\n> > >>>>>>> -----\n> > >>>>>>> +/* Counter to determine if statistics should be refreshed */\n> > >>>>>>> +static bool use_existing_stats = false;\n> > >>>>>>> +\n> > >>>>>>>\n> > >>>>>>> I think 'use_existing_stats' can be declared within table_recheck_autovac().\n> > >>>>>>>\n> > >>>>>>> -----\n> > >>>>>>> While testing the performance, I realized that the statistics are\n> > >>>>>>> reset every time vacuumed one table, leading to re-reading the stats\n> > >>>>>>> file even if 'use_existing_stats' is true. Please refer that vacuum()\n> > >>>>>>> eventually calls AtEOXact_PgStat() which calls to\n> > >>>>>>> pgstat_clear_snapshot().\n> > >>>>>>\n> > >>>>>> Good catch!\n> > >>>>>>\n> > >>>>>>\n> > >>>>>>> I believe that's why the performance of the\n> > >>>>>>> method of always checking the existing stats wasn’t good as expected.\n> > >>>>>>> So if we save the statistics somewhere and use it for rechecking, the\n> > >>>>>>> results of the performance benchmark will differ between these two\n> > >>>>>>> methods.\n> > >>>>> Thanks for you checks.\n> > >>>>> But, if a worker did vacuum(), that means this worker had determined\n> > >>>>> need vacuum in the\n> > >>>>> table_recheck_autovac(). So, use_existing_stats set to false, and next\n> > >>>>> time, refresh stats.\n> > >>>>> Therefore I think the current patch is fine, as we want to avoid\n> > >>>>> unnecessary refreshing of\n> > >>>>> statistics before the actual vacuum(), right?\n> > >>>>\n> > >>>> Yes, you're right.\n> > >>>>\n> > >>>> When I benchmarked the performance of the method of always checking\n> > >>>> existing stats I edited your patch so that it sets use_existing_stats\n> > >>>> = true even if the second check is false (i.g., vacuum is needed).\n> > >>>> And the result I got was worse than expected especially in the case of\n> > >>>> a few autovacuum workers. But it doesn't evaluate the performance of\n> > >>>> that method rightly as the stats snapshot is cleared every time\n> > >>>> vacuum. 
Given you had similar results, I guess you used a similar way\n> > >>>> when evaluating it, is it right? If so, it’s better to fix this issue\n> > >>>> and see how the performance benchmark results will differ.\n> > >>>>\n> > >>>> For example, the results of the test case with 10000 tables and 1\n> > >>>> autovacuum worker I reported before was:\n> > >>>>\n> > >>>> 10000 tables:\n> > >>>> autovac_workers 1 : 158s,157s, 290s\n> > >>>>\n> > >>>> But after fixing that issue in the third method (always checking the\n> > >>>> existing stats), the results are:\n> > >>>\n> > >>> Could you tell me how you fixed that issue? You copied the stats to\n> > >>> somewhere as you suggested or skipped pgstat_clear_snapshot() as\n> > >>> I suggested?\n> > >>\n> > >> I used the way you suggested in this quick test; skipped\n> > >> pgstat_clear_snapshot().\n> > >>\n> > >>>\n> > >>> Kasahara-san seems not to like the latter idea because it might\n> > >>> cause bad side effect. So we should use the former idea?\n> > >>\n> > >> Not sure. I'm also concerned about the side effect but I've not checked yet.\n> > >>\n> > >> Since probably there is no big difference between the two ways in\n> > >> terms of performance I'm going to see how the performance benchmark\n> > >> result will change first.\n> > >\n> > > I've tested performance improvement again. From the left the execution\n> > > time of the current HEAD, Kasahara-san's patch, the method of always\n> > > checking the existing stats (using approach suggested by Fujii-san),\n> > > in seconds.\n> > >\n> > > 1000 tables:\n> > > autovac_workers 1 : 13s, 13s, 13s\n> > > autovac_workers 2 : 6s, 4s, 4s\n> > > autovac_workers 3 : 3s, 4s, 3s\n> > > autovac_workers 5 : 3s, 3s, 2s\n> > > autovac_workers 10: 2s, 3s, 2s\n> > >\n> > > 5000 tables:\n> > > autovac_workers 1 : 71s, 71s, 72s\n> > > autovac_workers 2 : 37s, 32s, 32s\n> > > autovac_workers 3 : 29s, 26s, 26s\n> > > autovac_workers 5 : 20s, 19s, 18s\n> > > autovac_workers 10: 13s, 8s, 8s\n> > >\n> > > 10000 tables:\n> > > autovac_workers 1 : 158s,157s, 159s\n> > > autovac_workers 2 : 80s, 53s, 78s\n> > > autovac_workers 3 : 75s, 67s, 67s\n> > > autovac_workers 5 : 61s, 42s, 42s\n> > > autovac_workers 10: 69s, 26s, 25s\n> > >\n> > > 20000 tables:\n> > > autovac_workers 1 : 379s, 380s, 389s\n> > > autovac_workers 2 : 236s, 232s, 233s\n> > > autovac_workers 3 : 222s, 181s, 182s\n> > > autovac_workers 5 : 212s, 132s, 139s\n> > > autovac_workers 10: 317s, 91s, 89s\n> > >\n> > > I don't see a big difference between Kasahara-san's patch and the\n> > > method of always checking the existing stats.\n> >\n> > Thanks for doing the benchmark!\n> >\n> > This benchmark result makes me think that we don't need to tweak\n> > AtEOXact_PgStat() and can use Kasahara-san approach.\n> > That's good news :)\n>\n> Yeah, given that all autovaucum workers have the list of tables to\n> vacuum in the same order in most cases, the assumption in\n> Kasahara-san’s patch that if a worker needs to vacuum a table it’s\n> unlikely that it will be able to skip the next table using the current\n> snapshot of stats makes sense to me.\n>\n> One small comment on v6 patch:\n>\n> + /* When we decide to do vacuum or analyze, the existing stats cannot\n> + * be reused in the next cycle because it's cleared at the end of vacuum\n> + * or analyze (by AtEOXact_PgStat()).\n> + */\n> + use_existing_stats = false;\n>\n> I think the comment should start on the second line (i.g., \\n is\n> needed after /*).\nOops, thanks.\nFixed.\n\nBest regards,\n\n>\n> 
Regards,\n>\n> --\n> Masahiko Sawada\n> EnterpriseDB: https://www.enterprisedb.com/\n\n\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com", "msg_date": "Thu, 3 Dec 2020 11:46:09 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "On 2020/12/03 11:46, Kasahara Tatsuhito wrote:\n> On Wed, Dec 2, 2020 at 7:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Wed, Dec 2, 2020 at 3:33 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>>\n>>>\n>>> On 2020/12/02 12:53, Masahiko Sawada wrote:\n>>>> On Tue, Dec 1, 2020 at 5:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>>\n>>>>> On Tue, Dec 1, 2020 at 4:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On 2020/12/01 16:23, Masahiko Sawada wrote:\n>>>>>>> On Tue, Dec 1, 2020 at 1:48 PM Kasahara Tatsuhito\n>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>\n>>>>>>>> Hi,\n>>>>>>>>\n>>>>>>>> On Mon, Nov 30, 2020 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> On 2020/11/30 10:43, Masahiko Sawada wrote:\n>>>>>>>>>> On Sun, Nov 29, 2020 at 10:34 PM Kasahara Tatsuhito\n>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>>\n>>>>>>>>>>> Hi, Thanks for you comments.\n>>>>>>>>>>>\n>>>>>>>>>>> On Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>>>>>\n>>>>>>>>>>>>\n>>>>>>>>>>>>\n>>>>>>>>>>>> On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n>>>>>>>>>>>>> Hi,\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n>>>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n>>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>> Hi,\n>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n>>>>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> Hi,\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n>>>>>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>>>>>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n>>>>>>>>>>>>>>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n>>>>>>>>>>>>>>>>>>>>> be already vacuumed, return immediately. 
If not, *then* force a stats\n>>>>>>>>>>>>>>>>>>>>> re-read, and check a second time.\n>>>>>>>>>>>>>>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n>>>>>>>>>>>>>>>>>>>> will be improved to first check using the previous refreshed statistics?\n>>>>>>>>>>>>>>>>>>>> I think that certainly works.\n>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>> If that's correct, I'll try to create a patch for the PoC\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n>>>>>>>>>>>>>>>>>>> what was probably a very similar problem.\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> This problem seems to be more likely to occur in cases where you have\n>>>>>>>>>>>>>>>>>>> a large number of tables,\n>>>>>>>>>>>>>>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n>>>>>>>>>>>>>>>>>>> the same time.\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> So I followed Tom's advice and created a patch for the PoC.\n>>>>>>>>>>>>>>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n>>>>>>>>>>>>>>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n>>>>>>>>>>>>>>>>>>> by another worker on the check after the stats have been updated.\n>>>>>>>>>>>>>>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n>>>>>>>>>>>>>>>>>>> will be required instead of using the existing statistics.\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n>>>>>>>>>>>>>>>>>>> The tests were conducted in two cases.\n>>>>>>>>>>>>>>>>>>> (I changed few configurations. see attached scripts)\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> 1. Normal VACUUM case\n>>>>>>>>>>>>>>>>>>> - SET autovacuum = off\n>>>>>>>>>>>>>>>>>>> - CREATE tables with 100 rows\n>>>>>>>>>>>>>>>>>>> - DELETE 90 rows for each tables\n>>>>>>>>>>>>>>>>>>> - SET autovacuum = on and restart PostgreSQL\n>>>>>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> 2. 
Anti wrap round VACUUM case\n>>>>>>>>>>>>>>>>>>> - CREATE brank tables\n>>>>>>>>>>>>>>>>>>> - SELECT all of these tables (for generate stats)\n>>>>>>>>>>>>>>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n>>>>>>>>>>>>>>>>>>> - Consumes a lot of XIDs by using txid_curent()\n>>>>>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> For each test case, the following results were obtained by changing\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n>>>>>>>>>>>>>>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n>>>>>>>>>>>>>>>>>>> but I think it's enough to ask for a trend.\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> ===========================================================================\n>>>>>>>>>>>>>>>>>>> [1.Normal VACUUM case]\n>>>>>>>>>>>>>>>>>>> tables:1000\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> tables:5000\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> tables:10000\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> tables:20000\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> [2.Anti wrap round VACUUM case]\n>>>>>>>>>>>>>>>>>>> tables:1000\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> tables:5000\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with 
patch) 69 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> tables:10000\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> tables:20000\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n>>>>>>>>>>>>>>>>>>> ===========================================================================\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> The cases without patch, the scalability of the worker has decreased\n>>>>>>>>>>>>>>>>>>> as the number of tables has increased.\n>>>>>>>>>>>>>>>>>>> In fact, the more workers there are, the longer it takes to complete\n>>>>>>>>>>>>>>>>>>> VACUUM to all tables.\n>>>>>>>>>>>>>>>>>>> The cases with patch, it shows good scalability with respect to the\n>>>>>>>>>>>>>>>>>>> number of workers.\n>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>> It seems a good performance improvement even without the patch of\n>>>>>>>>>>>>>>>>>> shared memory based stats collector.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> Sounds great!\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n>>>>>>>>>>>>>>>>>>> hash_seq_search and\n>>>>>>>>>>>>>>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n>>>>>>>>>>>>>>>>>>> with or without the patch.\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> Therefore, there is still a need to find ways to optimize the reading\n>>>>>>>>>>>>>>>>>>> of large amounts of stats.\n>>>>>>>>>>>>>>>>>>> However, this patch is effective in its own right, and since there are\n>>>>>>>>>>>>>>>>>>> only a few parts to modify,\n>>>>>>>>>>>>>>>>>>> I think it should be able to be applied to current (preferably\n>>>>>>>>>>>>>>>>>>> pre-v13) PostgreSQL.\n>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>> +1\n>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>> +\n>>>>>>>>>>>>>>>>>> + /* We might be better to refresh stats */\n>>>>>>>>>>>>>>>>>> + use_existing_stats = false;\n>>>>>>>>>>>>>>>>>> }\n>>>>>>>>>>>>>>>>>> + else\n>>>>>>>>>>>>>>>>>> + {\n>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>> - heap_freetuple(classTup);\n>>>>>>>>>>>>>>>>>> + heap_freetuple(classTup);\n>>>>>>>>>>>>>>>>>> + /* The relid has already vacuumed, so we might be better to\n>>>>>>>>>>>>>>>>>> use exiting stats */\n>>>>>>>>>>>>>>>>>> + use_existing_stats = true;\n>>>>>>>>>>>>>>>>>> + }\n>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>> With that patch, the autovacuum process refreshes the stats in the\n>>>>>>>>>>>>>>>>>> next check if it finds 
out that the table still needs to be vacuumed.\n>>>>>>>>>>>>>>>>>> But I guess it's not necessarily true because the next table might be\n>>>>>>>>>>>>>>>>>> vacuumed already. So I think we might want to always use the existing\n>>>>>>>>>>>>>>>>>> for the first check. What do you think?\n>>>>>>>>>>>>>>>>> Thanks for your comment.\n>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>> If we assume the case where some workers vacuum on large tables\n>>>>>>>>>>>>>>>>> and a single worker vacuum on small tables, the processing\n>>>>>>>>>>>>>>>>> performance of the single worker will be slightly lower if the\n>>>>>>>>>>>>>>>>> existing statistics are checked every time.\n>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>> In fact, at first I tried to check the existing stats every time,\n>>>>>>>>>>>>>>>>> but the performance was slightly worse in cases with a small number of workers.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> Do you have this benchmark result?\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>> (Checking the existing stats is lightweight , but at high frequency,\n>>>>>>>>>>>>>>>>> it affects processing performance.)\n>>>>>>>>>>>>>>>>> Therefore, at after refresh statistics, determine whether autovac\n>>>>>>>>>>>>>>>>> should use the existing statistics.\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> Yeah, since the test you used uses a lot of small tables, if there are\n>>>>>>>>>>>>>>>> a few workers, checking the existing stats is unlikely to return true\n>>>>>>>>>>>>>>>> (no need to vacuum). So the cost of existing stats check ends up being\n>>>>>>>>>>>>>>>> overhead. Not sure how slow always checking the existing stats was,\n>>>>>>>>>>>>>>>> but given that the shared memory based stats collector patch could\n>>>>>>>>>>>>>>>> improve the performance of refreshing stats, it might be better not to\n>>>>>>>>>>>>>>>> check the existing stats frequently like the patch does. Anyway, I\n>>>>>>>>>>>>>>>> think it’s better to evaluate the performance improvement with other\n>>>>>>>>>>>>>>>> cases too.\n>>>>>>>>>>>>>>> Yeah, I would like to see how much the performance changes in other cases.\n>>>>>>>>>>>>>>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n>>>>>>>>>>>>>>> a huge stats file, so we will just have to check the stats on\n>>>>>>>>>>>>>>> shared-mem every time.\n>>>>>>>>>>>>>>> Perhaps the logic of table_recheck_autovac could be simpler.\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>> BTW, I found some typos in comments, so attache a fixed version.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> The patch adds some duplicated codes into table_recheck_autovac().\n>>>>>>>>>>>>>> It's better to make the common function performing them and make\n>>>>>>>>>>>>>> table_recheck_autovac() call that common function, to simplify the code.\n>>>>>>>>>>>>> Thanks for your comment.\n>>>>>>>>>>>>> Hmm.. I've cut out the duplicate part.\n>>>>>>>>>>>>> Attach the patch.\n>>>>>>>>>>>>> Could you confirm that it fits your expecting?\n>>>>>>>>>>>>\n>>>>>>>>>>>> Yes, thanks for updataing the patch! Here are another review comments.\n>>>>>>>>>>>>\n>>>>>>>>>>>> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n>>>>>>>>>>>> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n>>>>>>>>>>>>\n>>>>>>>>>>>> When using the existing stats, ISTM that these are not necessary and\n>>>>>>>>>>>> we can reuse \"shared\" and \"dbentry\" obtained before. 
Right?\n>>>>>>>>>>> Yeah, but unless autovac_refresh_stats() is called, these functions\n>>>>>>>>>>> read the information from the\n>>>>>>>>>>> local hash table without re-read the stats file, so the process is very light.\n>>>>>>>>>>> Therefore, I think, it is better to keep the current logic to keep the\n>>>>>>>>>>> code simple.\n>>>>>>>>>>>\n>>>>>>>>>>>>\n>>>>>>>>>>>> + /* We might be better to refresh stats */\n>>>>>>>>>>>> + use_existing_stats = false;\n>>>>>>>>>>>>\n>>>>>>>>>>>> I think that we should add more comments about why it's better to\n>>>>>>>>>>>> refresh the stats in this case.\n>>>>>>>>>>>>\n>>>>>>>>>>>> + /* The relid has already vacuumed, so we might be better to use existing stats */\n>>>>>>>>>>>> + use_existing_stats = true;\n>>>>>>>>>>>>\n>>>>>>>>>>>> I think that we should add more comments about why it's better to\n>>>>>>>>>>>> reuse the stats in this case.\n>>>>>>>>>>> I added comments.\n>>>>>>>>>>>\n>>>>>>>>>>> Attache the patch.\n>>>>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>> Thank you for updating the patch. Here are some small comments on the\n>>>>>>>>>> latest (v4) patch.\n>>>>>>>>>>\n>>>>>>>>>> + * So if the last time we checked a table that was already vacuumed after\n>>>>>>>>>> + * refres stats, check the current statistics before refreshing it.\n>>>>>>>>>> + */\n>>>>>>>>>>\n>>>>>>>>>> s/refres/refresh/\n>>>>>>>> Thanks! fixed.\n>>>>>>>> Attached the patch.\n>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>> -----\n>>>>>>>>>> +/* Counter to determine if statistics should be refreshed */\n>>>>>>>>>> +static bool use_existing_stats = false;\n>>>>>>>>>> +\n>>>>>>>>>>\n>>>>>>>>>> I think 'use_existing_stats' can be declared within table_recheck_autovac().\n>>>>>>>>>>\n>>>>>>>>>> -----\n>>>>>>>>>> While testing the performance, I realized that the statistics are\n>>>>>>>>>> reset every time vacuumed one table, leading to re-reading the stats\n>>>>>>>>>> file even if 'use_existing_stats' is true. Please refer that vacuum()\n>>>>>>>>>> eventually calls AtEOXact_PgStat() which calls to\n>>>>>>>>>> pgstat_clear_snapshot().\n>>>>>>>>>\n>>>>>>>>> Good catch!\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>>> I believe that's why the performance of the\n>>>>>>>>>> method of always checking the existing stats wasn’t good as expected.\n>>>>>>>>>> So if we save the statistics somewhere and use it for rechecking, the\n>>>>>>>>>> results of the performance benchmark will differ between these two\n>>>>>>>>>> methods.\n>>>>>>>> Thanks for you checks.\n>>>>>>>> But, if a worker did vacuum(), that means this worker had determined\n>>>>>>>> need vacuum in the\n>>>>>>>> table_recheck_autovac(). So, use_existing_stats set to false, and next\n>>>>>>>> time, refresh stats.\n>>>>>>>> Therefore I think the current patch is fine, as we want to avoid\n>>>>>>>> unnecessary refreshing of\n>>>>>>>> statistics before the actual vacuum(), right?\n>>>>>>>\n>>>>>>> Yes, you're right.\n>>>>>>>\n>>>>>>> When I benchmarked the performance of the method of always checking\n>>>>>>> existing stats I edited your patch so that it sets use_existing_stats\n>>>>>>> = true even if the second check is false (i.g., vacuum is needed).\n>>>>>>> And the result I got was worse than expected especially in the case of\n>>>>>>> a few autovacuum workers. But it doesn't evaluate the performance of\n>>>>>>> that method rightly as the stats snapshot is cleared every time\n>>>>>>> vacuum. Given you had similar results, I guess you used a similar way\n>>>>>>> when evaluating it, is it right? 
If so, it’s better to fix this issue\n>>>>>>> and see how the performance benchmark results will differ.\n>>>>>>>\n>>>>>>> For example, the results of the test case with 10000 tables and 1\n>>>>>>> autovacuum worker I reported before was:\n>>>>>>>\n>>>>>>> 10000 tables:\n>>>>>>> autovac_workers 1 : 158s,157s, 290s\n>>>>>>>\n>>>>>>> But after fixing that issue in the third method (always checking the\n>>>>>>> existing stats), the results are:\n>>>>>>\n>>>>>> Could you tell me how you fixed that issue? You copied the stats to\n>>>>>> somewhere as you suggested or skipped pgstat_clear_snapshot() as\n>>>>>> I suggested?\n>>>>>\n>>>>> I used the way you suggested in this quick test; skipped\n>>>>> pgstat_clear_snapshot().\n>>>>>\n>>>>>>\n>>>>>> Kasahara-san seems not to like the latter idea because it might\n>>>>>> cause bad side effect. So we should use the former idea?\n>>>>>\n>>>>> Not sure. I'm also concerned about the side effect but I've not checked yet.\n>>>>>\n>>>>> Since probably there is no big difference between the two ways in\n>>>>> terms of performance I'm going to see how the performance benchmark\n>>>>> result will change first.\n>>>>\n>>>> I've tested performance improvement again. From the left the execution\n>>>> time of the current HEAD, Kasahara-san's patch, the method of always\n>>>> checking the existing stats (using approach suggested by Fujii-san),\n>>>> in seconds.\n>>>>\n>>>> 1000 tables:\n>>>> autovac_workers 1 : 13s, 13s, 13s\n>>>> autovac_workers 2 : 6s, 4s, 4s\n>>>> autovac_workers 3 : 3s, 4s, 3s\n>>>> autovac_workers 5 : 3s, 3s, 2s\n>>>> autovac_workers 10: 2s, 3s, 2s\n>>>>\n>>>> 5000 tables:\n>>>> autovac_workers 1 : 71s, 71s, 72s\n>>>> autovac_workers 2 : 37s, 32s, 32s\n>>>> autovac_workers 3 : 29s, 26s, 26s\n>>>> autovac_workers 5 : 20s, 19s, 18s\n>>>> autovac_workers 10: 13s, 8s, 8s\n>>>>\n>>>> 10000 tables:\n>>>> autovac_workers 1 : 158s,157s, 159s\n>>>> autovac_workers 2 : 80s, 53s, 78s\n>>>> autovac_workers 3 : 75s, 67s, 67s\n>>>> autovac_workers 5 : 61s, 42s, 42s\n>>>> autovac_workers 10: 69s, 26s, 25s\n>>>>\n>>>> 20000 tables:\n>>>> autovac_workers 1 : 379s, 380s, 389s\n>>>> autovac_workers 2 : 236s, 232s, 233s\n>>>> autovac_workers 3 : 222s, 181s, 182s\n>>>> autovac_workers 5 : 212s, 132s, 139s\n>>>> autovac_workers 10: 317s, 91s, 89s\n>>>>\n>>>> I don't see a big difference between Kasahara-san's patch and the\n>>>> method of always checking the existing stats.\n>>>\n>>> Thanks for doing the benchmark!\n>>>\n>>> This benchmark result makes me think that we don't need to tweak\n>>> AtEOXact_PgStat() and can use Kasahara-san approach.\n>>> That's good news :)\n>>\n>> Yeah, given that all autovaucum workers have the list of tables to\n>> vacuum in the same order in most cases, the assumption in\n>> Kasahara-san’s patch that if a worker needs to vacuum a table it’s\n>> unlikely that it will be able to skip the next table using the current\n>> snapshot of stats makes sense to me.\n>>\n>> One small comment on v6 patch:\n>>\n>> + /* When we decide to do vacuum or analyze, the existing stats cannot\n>> + * be reused in the next cycle because it's cleared at the end of vacuum\n>> + * or analyze (by AtEOXact_PgStat()).\n>> + */\n>> + use_existing_stats = false;\n>>\n>> I think the comment should start on the second line (i.g., \\n is\n>> needed after /*).\n> Oops, thanks.\n> Fixed.\n\nThanks for updating the patch!\n\nI applied the following cosmetic changes to the patch.\nAttached is the updated version of the patch.\nCoud you review this version?\n\n- Ran 
pgindent to fix some warnings that \"git diff --check\"\n reported on the patch.\n- Made the order of arguments consistent between\n recheck_relation_needs_vacanalyze and relation_needs_vacanalyze.\n- Renamed the variable use_existing_stats to reuse_stats for simplicity.\n- Added more comments.\n\nBarring any objection, I'm thinking to commit this version.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 3 Dec 2020 21:09:32 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "Hi,\n\nOn Thu, Dec 3, 2020 at 9:09 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/12/03 11:46, Kasahara Tatsuhito wrote:\n> > On Wed, Dec 2, 2020 at 7:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>\n> >> On Wed, Dec 2, 2020 at 3:33 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>\n> >>>\n> >>>\n> >>> On 2020/12/02 12:53, Masahiko Sawada wrote:\n> >>>> On Tue, Dec 1, 2020 at 5:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>>\n> >>>>> On Tue, Dec 1, 2020 at 4:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>\n> >>>>>>\n> >>>>>>\n> >>>>>> On 2020/12/01 16:23, Masahiko Sawada wrote:\n> >>>>>>> On Tue, Dec 1, 2020 at 1:48 PM Kasahara Tatsuhito\n> >>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>\n> >>>>>>>> Hi,\n> >>>>>>>>\n> >>>>>>>> On Mon, Nov 30, 2020 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>>\n> >>>>>>>>>\n> >>>>>>>>>\n> >>>>>>>>> On 2020/11/30 10:43, Masahiko Sawada wrote:\n> >>>>>>>>>> On Sun, Nov 29, 2020 at 10:34 PM Kasahara Tatsuhito\n> >>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>\n> >>>>>>>>>>> Hi, Thanks for you comments.\n> >>>>>>>>>>>\n> >>>>>>>>>>> On Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>>>>>\n> >>>>>>>>>>>>\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n> >>>>>>>>>>>>> Hi,\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n> >>>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n> >>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>> Hi,\n> >>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> >>>>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> Hi,\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> >>>>>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>>>>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n> >>>>>>>>>>>>>>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n> >>>>>>>>>>>>>>>>>>>>> be already vacuumed, return immediately. 
If not, *then* force a stats\n> >>>>>>>>>>>>>>>>>>>>> re-read, and check a second time.\n> >>>>>>>>>>>>>>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n> >>>>>>>>>>>>>>>>>>>> will be improved to first check using the previous refreshed statistics?\n> >>>>>>>>>>>>>>>>>>>> I think that certainly works.\n> >>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>> If that's correct, I'll try to create a patch for the PoC\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> >>>>>>>>>>>>>>>>>>> what was probably a very similar problem.\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> This problem seems to be more likely to occur in cases where you have\n> >>>>>>>>>>>>>>>>>>> a large number of tables,\n> >>>>>>>>>>>>>>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n> >>>>>>>>>>>>>>>>>>> the same time.\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> So I followed Tom's advice and created a patch for the PoC.\n> >>>>>>>>>>>>>>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n> >>>>>>>>>>>>>>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> >>>>>>>>>>>>>>>>>>> by another worker on the check after the stats have been updated.\n> >>>>>>>>>>>>>>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n> >>>>>>>>>>>>>>>>>>> will be required instead of using the existing statistics.\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n> >>>>>>>>>>>>>>>>>>> The tests were conducted in two cases.\n> >>>>>>>>>>>>>>>>>>> (I changed few configurations. see attached scripts)\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> 1. Normal VACUUM case\n> >>>>>>>>>>>>>>>>>>> - SET autovacuum = off\n> >>>>>>>>>>>>>>>>>>> - CREATE tables with 100 rows\n> >>>>>>>>>>>>>>>>>>> - DELETE 90 rows for each tables\n> >>>>>>>>>>>>>>>>>>> - SET autovacuum = on and restart PostgreSQL\n> >>>>>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> 2. 
Anti wrap round VACUUM case\n> >>>>>>>>>>>>>>>>>>> - CREATE brank tables\n> >>>>>>>>>>>>>>>>>>> - SELECT all of these tables (for generate stats)\n> >>>>>>>>>>>>>>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> >>>>>>>>>>>>>>>>>>> - Consumes a lot of XIDs by using txid_curent()\n> >>>>>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> For each test case, the following results were obtained by changing\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> >>>>>>>>>>>>>>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> >>>>>>>>>>>>>>>>>>> but I think it's enough to ask for a trend.\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> ===========================================================================\n> >>>>>>>>>>>>>>>>>>> [1.Normal VACUUM case]\n> >>>>>>>>>>>>>>>>>>> tables:1000\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> tables:5000\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> tables:10000\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> tables:20000\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> [2.Anti wrap round VACUUM case]\n> >>>>>>>>>>>>>>>>>>> tables:1000\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> >>>>>>>>>>>>>>>>>>>\n> 
>>>>>>>>>>>>>>>>>>> tables:5000\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> tables:10000\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> tables:20000\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> >>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> >>>>>>>>>>>>>>>>>>> ===========================================================================\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> The cases without patch, the scalability of the worker has decreased\n> >>>>>>>>>>>>>>>>>>> as the number of tables has increased.\n> >>>>>>>>>>>>>>>>>>> In fact, the more workers there are, the longer it takes to complete\n> >>>>>>>>>>>>>>>>>>> VACUUM to all tables.\n> >>>>>>>>>>>>>>>>>>> The cases with patch, it shows good scalability with respect to the\n> >>>>>>>>>>>>>>>>>>> number of workers.\n> >>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>> It seems a good performance improvement even without the patch of\n> >>>>>>>>>>>>>>>>>> shared memory based stats collector.\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> Sounds great!\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n> >>>>>>>>>>>>>>>>>>> hash_seq_search and\n> >>>>>>>>>>>>>>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> >>>>>>>>>>>>>>>>>>> with or without the patch.\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> Therefore, there is still a need to find ways to optimize the reading\n> >>>>>>>>>>>>>>>>>>> of large amounts of stats.\n> >>>>>>>>>>>>>>>>>>> However, this patch is effective in its own right, and since there are\n> >>>>>>>>>>>>>>>>>>> only a few parts to modify,\n> >>>>>>>>>>>>>>>>>>> I think it should be able to be applied to current (preferably\n> >>>>>>>>>>>>>>>>>>> pre-v13) PostgreSQL.\n> >>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>> +1\n> >>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>> +\n> >>>>>>>>>>>>>>>>>> + /* We might be better to refresh stats */\n> >>>>>>>>>>>>>>>>>> + use_existing_stats = false;\n> >>>>>>>>>>>>>>>>>> }\n> >>>>>>>>>>>>>>>>>> + else\n> >>>>>>>>>>>>>>>>>> + {\n> >>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>> - heap_freetuple(classTup);\n> >>>>>>>>>>>>>>>>>> + heap_freetuple(classTup);\n> >>>>>>>>>>>>>>>>>> + /* The relid has already vacuumed, so we might be better to\n> >>>>>>>>>>>>>>>>>> use exiting stats 
*/\n> >>>>>>>>>>>>>>>>>> + use_existing_stats = true;\n> >>>>>>>>>>>>>>>>>> + }\n> >>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>> With that patch, the autovacuum process refreshes the stats in the\n> >>>>>>>>>>>>>>>>>> next check if it finds out that the table still needs to be vacuumed.\n> >>>>>>>>>>>>>>>>>> But I guess it's not necessarily true because the next table might be\n> >>>>>>>>>>>>>>>>>> vacuumed already. So I think we might want to always use the existing\n> >>>>>>>>>>>>>>>>>> for the first check. What do you think?\n> >>>>>>>>>>>>>>>>> Thanks for your comment.\n> >>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>> If we assume the case where some workers vacuum on large tables\n> >>>>>>>>>>>>>>>>> and a single worker vacuum on small tables, the processing\n> >>>>>>>>>>>>>>>>> performance of the single worker will be slightly lower if the\n> >>>>>>>>>>>>>>>>> existing statistics are checked every time.\n> >>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>> In fact, at first I tried to check the existing stats every time,\n> >>>>>>>>>>>>>>>>> but the performance was slightly worse in cases with a small number of workers.\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> Do you have this benchmark result?\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>> (Checking the existing stats is lightweight , but at high frequency,\n> >>>>>>>>>>>>>>>>> it affects processing performance.)\n> >>>>>>>>>>>>>>>>> Therefore, at after refresh statistics, determine whether autovac\n> >>>>>>>>>>>>>>>>> should use the existing statistics.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> Yeah, since the test you used uses a lot of small tables, if there are\n> >>>>>>>>>>>>>>>> a few workers, checking the existing stats is unlikely to return true\n> >>>>>>>>>>>>>>>> (no need to vacuum). So the cost of existing stats check ends up being\n> >>>>>>>>>>>>>>>> overhead. Not sure how slow always checking the existing stats was,\n> >>>>>>>>>>>>>>>> but given that the shared memory based stats collector patch could\n> >>>>>>>>>>>>>>>> improve the performance of refreshing stats, it might be better not to\n> >>>>>>>>>>>>>>>> check the existing stats frequently like the patch does. Anyway, I\n> >>>>>>>>>>>>>>>> think it’s better to evaluate the performance improvement with other\n> >>>>>>>>>>>>>>>> cases too.\n> >>>>>>>>>>>>>>> Yeah, I would like to see how much the performance changes in other cases.\n> >>>>>>>>>>>>>>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n> >>>>>>>>>>>>>>> a huge stats file, so we will just have to check the stats on\n> >>>>>>>>>>>>>>> shared-mem every time.\n> >>>>>>>>>>>>>>> Perhaps the logic of table_recheck_autovac could be simpler.\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>> BTW, I found some typos in comments, so attache a fixed version.\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> The patch adds some duplicated codes into table_recheck_autovac().\n> >>>>>>>>>>>>>> It's better to make the common function performing them and make\n> >>>>>>>>>>>>>> table_recheck_autovac() call that common function, to simplify the code.\n> >>>>>>>>>>>>> Thanks for your comment.\n> >>>>>>>>>>>>> Hmm.. I've cut out the duplicate part.\n> >>>>>>>>>>>>> Attach the patch.\n> >>>>>>>>>>>>> Could you confirm that it fits your expecting?\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> Yes, thanks for updataing the patch! 
Here are another review comments.\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n> >>>>>>>>>>>> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> When using the existing stats, ISTM that these are not necessary and\n> >>>>>>>>>>>> we can reuse \"shared\" and \"dbentry\" obtained before. Right?\n> >>>>>>>>>>> Yeah, but unless autovac_refresh_stats() is called, these functions\n> >>>>>>>>>>> read the information from the\n> >>>>>>>>>>> local hash table without re-read the stats file, so the process is very light.\n> >>>>>>>>>>> Therefore, I think, it is better to keep the current logic to keep the\n> >>>>>>>>>>> code simple.\n> >>>>>>>>>>>\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> + /* We might be better to refresh stats */\n> >>>>>>>>>>>> + use_existing_stats = false;\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> I think that we should add more comments about why it's better to\n> >>>>>>>>>>>> refresh the stats in this case.\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> + /* The relid has already vacuumed, so we might be better to use existing stats */\n> >>>>>>>>>>>> + use_existing_stats = true;\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> I think that we should add more comments about why it's better to\n> >>>>>>>>>>>> reuse the stats in this case.\n> >>>>>>>>>>> I added comments.\n> >>>>>>>>>>>\n> >>>>>>>>>>> Attache the patch.\n> >>>>>>>>>>>\n> >>>>>>>>>>\n> >>>>>>>>>> Thank you for updating the patch. Here are some small comments on the\n> >>>>>>>>>> latest (v4) patch.\n> >>>>>>>>>>\n> >>>>>>>>>> + * So if the last time we checked a table that was already vacuumed after\n> >>>>>>>>>> + * refres stats, check the current statistics before refreshing it.\n> >>>>>>>>>> + */\n> >>>>>>>>>>\n> >>>>>>>>>> s/refres/refresh/\n> >>>>>>>> Thanks! fixed.\n> >>>>>>>> Attached the patch.\n> >>>>>>>>\n> >>>>>>>>>>\n> >>>>>>>>>> -----\n> >>>>>>>>>> +/* Counter to determine if statistics should be refreshed */\n> >>>>>>>>>> +static bool use_existing_stats = false;\n> >>>>>>>>>> +\n> >>>>>>>>>>\n> >>>>>>>>>> I think 'use_existing_stats' can be declared within table_recheck_autovac().\n> >>>>>>>>>>\n> >>>>>>>>>> -----\n> >>>>>>>>>> While testing the performance, I realized that the statistics are\n> >>>>>>>>>> reset every time vacuumed one table, leading to re-reading the stats\n> >>>>>>>>>> file even if 'use_existing_stats' is true. Please refer that vacuum()\n> >>>>>>>>>> eventually calls AtEOXact_PgStat() which calls to\n> >>>>>>>>>> pgstat_clear_snapshot().\n> >>>>>>>>>\n> >>>>>>>>> Good catch!\n> >>>>>>>>>\n> >>>>>>>>>\n> >>>>>>>>>> I believe that's why the performance of the\n> >>>>>>>>>> method of always checking the existing stats wasn’t good as expected.\n> >>>>>>>>>> So if we save the statistics somewhere and use it for rechecking, the\n> >>>>>>>>>> results of the performance benchmark will differ between these two\n> >>>>>>>>>> methods.\n> >>>>>>>> Thanks for you checks.\n> >>>>>>>> But, if a worker did vacuum(), that means this worker had determined\n> >>>>>>>> need vacuum in the\n> >>>>>>>> table_recheck_autovac(). 
So, use_existing_stats set to false, and next\n> >>>>>>>> time, refresh stats.\n> >>>>>>>> Therefore I think the current patch is fine, as we want to avoid\n> >>>>>>>> unnecessary refreshing of\n> >>>>>>>> statistics before the actual vacuum(), right?\n> >>>>>>>\n> >>>>>>> Yes, you're right.\n> >>>>>>>\n> >>>>>>> When I benchmarked the performance of the method of always checking\n> >>>>>>> existing stats I edited your patch so that it sets use_existing_stats\n> >>>>>>> = true even if the second check is false (i.g., vacuum is needed).\n> >>>>>>> And the result I got was worse than expected especially in the case of\n> >>>>>>> a few autovacuum workers. But it doesn't evaluate the performance of\n> >>>>>>> that method rightly as the stats snapshot is cleared every time\n> >>>>>>> vacuum. Given you had similar results, I guess you used a similar way\n> >>>>>>> when evaluating it, is it right? If so, it’s better to fix this issue\n> >>>>>>> and see how the performance benchmark results will differ.\n> >>>>>>>\n> >>>>>>> For example, the results of the test case with 10000 tables and 1\n> >>>>>>> autovacuum worker I reported before was:\n> >>>>>>>\n> >>>>>>> 10000 tables:\n> >>>>>>> autovac_workers 1 : 158s,157s, 290s\n> >>>>>>>\n> >>>>>>> But after fixing that issue in the third method (always checking the\n> >>>>>>> existing stats), the results are:\n> >>>>>>\n> >>>>>> Could you tell me how you fixed that issue? You copied the stats to\n> >>>>>> somewhere as you suggested or skipped pgstat_clear_snapshot() as\n> >>>>>> I suggested?\n> >>>>>\n> >>>>> I used the way you suggested in this quick test; skipped\n> >>>>> pgstat_clear_snapshot().\n> >>>>>\n> >>>>>>\n> >>>>>> Kasahara-san seems not to like the latter idea because it might\n> >>>>>> cause bad side effect. So we should use the former idea?\n> >>>>>\n> >>>>> Not sure. I'm also concerned about the side effect but I've not checked yet.\n> >>>>>\n> >>>>> Since probably there is no big difference between the two ways in\n> >>>>> terms of performance I'm going to see how the performance benchmark\n> >>>>> result will change first.\n> >>>>\n> >>>> I've tested performance improvement again. 
From the left the execution\n> >>>> time of the current HEAD, Kasahara-san's patch, the method of always\n> >>>> checking the existing stats (using approach suggested by Fujii-san),\n> >>>> in seconds.\n> >>>>\n> >>>> 1000 tables:\n> >>>> autovac_workers 1 : 13s, 13s, 13s\n> >>>> autovac_workers 2 : 6s, 4s, 4s\n> >>>> autovac_workers 3 : 3s, 4s, 3s\n> >>>> autovac_workers 5 : 3s, 3s, 2s\n> >>>> autovac_workers 10: 2s, 3s, 2s\n> >>>>\n> >>>> 5000 tables:\n> >>>> autovac_workers 1 : 71s, 71s, 72s\n> >>>> autovac_workers 2 : 37s, 32s, 32s\n> >>>> autovac_workers 3 : 29s, 26s, 26s\n> >>>> autovac_workers 5 : 20s, 19s, 18s\n> >>>> autovac_workers 10: 13s, 8s, 8s\n> >>>>\n> >>>> 10000 tables:\n> >>>> autovac_workers 1 : 158s,157s, 159s\n> >>>> autovac_workers 2 : 80s, 53s, 78s\n> >>>> autovac_workers 3 : 75s, 67s, 67s\n> >>>> autovac_workers 5 : 61s, 42s, 42s\n> >>>> autovac_workers 10: 69s, 26s, 25s\n> >>>>\n> >>>> 20000 tables:\n> >>>> autovac_workers 1 : 379s, 380s, 389s\n> >>>> autovac_workers 2 : 236s, 232s, 233s\n> >>>> autovac_workers 3 : 222s, 181s, 182s\n> >>>> autovac_workers 5 : 212s, 132s, 139s\n> >>>> autovac_workers 10: 317s, 91s, 89s\n> >>>>\n> >>>> I don't see a big difference between Kasahara-san's patch and the\n> >>>> method of always checking the existing stats.\n> >>>\n> >>> Thanks for doing the benchmark!\n> >>>\n> >>> This benchmark result makes me think that we don't need to tweak\n> >>> AtEOXact_PgStat() and can use Kasahara-san approach.\n> >>> That's good news :)\n> >>\n> >> Yeah, given that all autovaucum workers have the list of tables to\n> >> vacuum in the same order in most cases, the assumption in\n> >> Kasahara-san’s patch that if a worker needs to vacuum a table it’s\n> >> unlikely that it will be able to skip the next table using the current\n> >> snapshot of stats makes sense to me.\n> >>\n> >> One small comment on v6 patch:\n> >>\n> >> + /* When we decide to do vacuum or analyze, the existing stats cannot\n> >> + * be reused in the next cycle because it's cleared at the end of vacuum\n> >> + * or analyze (by AtEOXact_PgStat()).\n> >> + */\n> >> + use_existing_stats = false;\n> >>\n> >> I think the comment should start on the second line (i.g., \\n is\n> >> needed after /*).\n> > Oops, thanks.\n> > Fixed.\n>\n> Thanks for updating the patch!\n>\n> I applied the following cosmetic changes to the patch.\n> Attached is the updated version of the patch.\n> Coud you review this version?\nThanks for tweaking the patch.\n\n> - Ran pgindent to fix some warnings that \"git diff --check\"\n> reported on the patch.\n> - Made the order of arguments consistent between\n> recheck_relation_needs_vacanalyze and relation_needs_vacanalyze.\n> - Renamed the variable use_existing_stats to reuse_stats for simplicity.\n> - Added more comments.\nI think it's no problem.\nThe patch passed makecheck, and I benchmarked \"Anti wrap round VACUUM\ncase\" (only 20000 tables) just in case.\n\n From the left the execution time of the current HEAD, v8 patch.\ntables 20000:\n autovac workers 1: 319sec, 315sec\n autovac workers 2: 301sec, 190sec\n autovac workers 3: 270sec, 133sec\n autovac workers 5: 211sec, 86sec\n autovac workers 10: 376sec, 68sec\n\nIt's as expected.\n\n> Barring any objection, I'm thinking to commit this version.\n+1\n\nBest regards,\n\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n", "msg_date": "Fri, 4 Dec 
2020 12:21:44 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "\n\nOn 2020/12/04 12:21, Kasahara Tatsuhito wrote:\n> Hi,\n> \n> On Thu, Dec 3, 2020 at 9:09 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/12/03 11:46, Kasahara Tatsuhito wrote:\n>>> On Wed, Dec 2, 2020 at 7:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>\n>>>> On Wed, Dec 2, 2020 at 3:33 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>\n>>>>>\n>>>>>\n>>>>> On 2020/12/02 12:53, Masahiko Sawada wrote:\n>>>>>> On Tue, Dec 1, 2020 at 5:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>>>>\n>>>>>>> On Tue, Dec 1, 2020 at 4:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> On 2020/12/01 16:23, Masahiko Sawada wrote:\n>>>>>>>>> On Tue, Dec 1, 2020 at 1:48 PM Kasahara Tatsuhito\n>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>\n>>>>>>>>>> Hi,\n>>>>>>>>>>\n>>>>>>>>>> On Mon, Nov 30, 2020 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>> On 2020/11/30 10:43, Masahiko Sawada wrote:\n>>>>>>>>>>>> On Sun, Nov 29, 2020 at 10:34 PM Kasahara Tatsuhito\n>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> Hi, Thanks for you comments.\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> On Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n>>>>>>>>>>>>>>> Hi,\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n>>>>>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n>>>>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> Hi,\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n>>>>>>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> Hi,\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n>>>>>>>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n>>>>>>>>>>>>>>>>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n>>>>>>>>>>>>>>>>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n>>>>>>>>>>>>>>>>>>>>>>> be already vacuumed, return immediately. 
If not, *then* force a stats\n>>>>>>>>>>>>>>>>>>>>>>> re-read, and check a second time.\n>>>>>>>>>>>>>>>>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n>>>>>>>>>>>>>>>>>>>>>> will be improved to first check using the previous refreshed statistics?\n>>>>>>>>>>>>>>>>>>>>>> I think that certainly works.\n>>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>>> If that's correct, I'll try to create a patch for the PoC\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n>>>>>>>>>>>>>>>>>>>>> what was probably a very similar problem.\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> This problem seems to be more likely to occur in cases where you have\n>>>>>>>>>>>>>>>>>>>>> a large number of tables,\n>>>>>>>>>>>>>>>>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n>>>>>>>>>>>>>>>>>>>>> the same time.\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> So I followed Tom's advice and created a patch for the PoC.\n>>>>>>>>>>>>>>>>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n>>>>>>>>>>>>>>>>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n>>>>>>>>>>>>>>>>>>>>> by another worker on the check after the stats have been updated.\n>>>>>>>>>>>>>>>>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n>>>>>>>>>>>>>>>>>>>>> will be required instead of using the existing statistics.\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n>>>>>>>>>>>>>>>>>>>>> The tests were conducted in two cases.\n>>>>>>>>>>>>>>>>>>>>> (I changed few configurations. see attached scripts)\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> 1. Normal VACUUM case\n>>>>>>>>>>>>>>>>>>>>> - SET autovacuum = off\n>>>>>>>>>>>>>>>>>>>>> - CREATE tables with 100 rows\n>>>>>>>>>>>>>>>>>>>>> - DELETE 90 rows for each tables\n>>>>>>>>>>>>>>>>>>>>> - SET autovacuum = on and restart PostgreSQL\n>>>>>>>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> 2. 
Anti wrap round VACUUM case\n>>>>>>>>>>>>>>>>>>>>> - CREATE brank tables\n>>>>>>>>>>>>>>>>>>>>> - SELECT all of these tables (for generate stats)\n>>>>>>>>>>>>>>>>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n>>>>>>>>>>>>>>>>>>>>> - Consumes a lot of XIDs by using txid_curent()\n>>>>>>>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> For each test case, the following results were obtained by changing\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n>>>>>>>>>>>>>>>>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n>>>>>>>>>>>>>>>>>>>>> but I think it's enough to ask for a trend.\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> ===========================================================================\n>>>>>>>>>>>>>>>>>>>>> [1.Normal VACUUM case]\n>>>>>>>>>>>>>>>>>>>>> tables:1000\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> tables:5000\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> tables:10000\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> tables:20000\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> [2.Anti wrap round VACUUM case]\n>>>>>>>>>>>>>>>>>>>>> tables:1000\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 
sec\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> tables:5000\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> tables:10000\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> tables:20000\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n>>>>>>>>>>>>>>>>>>>>> ===========================================================================\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> The cases without patch, the scalability of the worker has decreased\n>>>>>>>>>>>>>>>>>>>>> as the number of tables has increased.\n>>>>>>>>>>>>>>>>>>>>> In fact, the more workers there are, the longer it takes to complete\n>>>>>>>>>>>>>>>>>>>>> VACUUM to all tables.\n>>>>>>>>>>>>>>>>>>>>> The cases with patch, it shows good scalability with respect to the\n>>>>>>>>>>>>>>>>>>>>> number of workers.\n>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>> It seems a good performance improvement even without the patch of\n>>>>>>>>>>>>>>>>>>>> shared memory based stats collector.\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> Sounds great!\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n>>>>>>>>>>>>>>>>>>>>> hash_seq_search and\n>>>>>>>>>>>>>>>>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n>>>>>>>>>>>>>>>>>>>>> with or without the patch.\n>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>>> Therefore, there is still a need to find ways to optimize the reading\n>>>>>>>>>>>>>>>>>>>>> of large amounts of stats.\n>>>>>>>>>>>>>>>>>>>>> However, this patch is effective in its own right, and since there are\n>>>>>>>>>>>>>>>>>>>>> only a few parts to modify,\n>>>>>>>>>>>>>>>>>>>>> I think it should be able to be applied to current (preferably\n>>>>>>>>>>>>>>>>>>>>> pre-v13) PostgreSQL.\n>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>> +1\n>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>> +\n>>>>>>>>>>>>>>>>>>>> + /* We might be better to refresh stats */\n>>>>>>>>>>>>>>>>>>>> + use_existing_stats = false;\n>>>>>>>>>>>>>>>>>>>> }\n>>>>>>>>>>>>>>>>>>>> + else\n>>>>>>>>>>>>>>>>>>>> + {\n>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>> - heap_freetuple(classTup);\n>>>>>>>>>>>>>>>>>>>> + heap_freetuple(classTup);\n>>>>>>>>>>>>>>>>>>>> + /* The relid has already vacuumed, so we might be better 
to\n>>>>>>>>>>>>>>>>>>>> use exiting stats */\n>>>>>>>>>>>>>>>>>>>> + use_existing_stats = true;\n>>>>>>>>>>>>>>>>>>>> + }\n>>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>>> With that patch, the autovacuum process refreshes the stats in the\n>>>>>>>>>>>>>>>>>>>> next check if it finds out that the table still needs to be vacuumed.\n>>>>>>>>>>>>>>>>>>>> But I guess it's not necessarily true because the next table might be\n>>>>>>>>>>>>>>>>>>>> vacuumed already. So I think we might want to always use the existing\n>>>>>>>>>>>>>>>>>>>> for the first check. What do you think?\n>>>>>>>>>>>>>>>>>>> Thanks for your comment.\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> If we assume the case where some workers vacuum on large tables\n>>>>>>>>>>>>>>>>>>> and a single worker vacuum on small tables, the processing\n>>>>>>>>>>>>>>>>>>> performance of the single worker will be slightly lower if the\n>>>>>>>>>>>>>>>>>>> existing statistics are checked every time.\n>>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> In fact, at first I tried to check the existing stats every time,\n>>>>>>>>>>>>>>>>>>> but the performance was slightly worse in cases with a small number of workers.\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> Do you have this benchmark result?\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> (Checking the existing stats is lightweight , but at high frequency,\n>>>>>>>>>>>>>>>>>>> it affects processing performance.)\n>>>>>>>>>>>>>>>>>>> Therefore, at after refresh statistics, determine whether autovac\n>>>>>>>>>>>>>>>>>>> should use the existing statistics.\n>>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>> Yeah, since the test you used uses a lot of small tables, if there are\n>>>>>>>>>>>>>>>>>> a few workers, checking the existing stats is unlikely to return true\n>>>>>>>>>>>>>>>>>> (no need to vacuum). So the cost of existing stats check ends up being\n>>>>>>>>>>>>>>>>>> overhead. Not sure how slow always checking the existing stats was,\n>>>>>>>>>>>>>>>>>> but given that the shared memory based stats collector patch could\n>>>>>>>>>>>>>>>>>> improve the performance of refreshing stats, it might be better not to\n>>>>>>>>>>>>>>>>>> check the existing stats frequently like the patch does. Anyway, I\n>>>>>>>>>>>>>>>>>> think it’s better to evaluate the performance improvement with other\n>>>>>>>>>>>>>>>>>> cases too.\n>>>>>>>>>>>>>>>>> Yeah, I would like to see how much the performance changes in other cases.\n>>>>>>>>>>>>>>>>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n>>>>>>>>>>>>>>>>> a huge stats file, so we will just have to check the stats on\n>>>>>>>>>>>>>>>>> shared-mem every time.\n>>>>>>>>>>>>>>>>> Perhaps the logic of table_recheck_autovac could be simpler.\n>>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>>>> BTW, I found some typos in comments, so attache a fixed version.\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> The patch adds some duplicated codes into table_recheck_autovac().\n>>>>>>>>>>>>>>>> It's better to make the common function performing them and make\n>>>>>>>>>>>>>>>> table_recheck_autovac() call that common function, to simplify the code.\n>>>>>>>>>>>>>>> Thanks for your comment.\n>>>>>>>>>>>>>>> Hmm.. I've cut out the duplicate part.\n>>>>>>>>>>>>>>> Attach the patch.\n>>>>>>>>>>>>>>> Could you confirm that it fits your expecting?\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> Yes, thanks for updataing the patch! 
Here are another review comments.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n>>>>>>>>>>>>>> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> When using the existing stats, ISTM that these are not necessary and\n>>>>>>>>>>>>>> we can reuse \"shared\" and \"dbentry\" obtained before. Right?\n>>>>>>>>>>>>> Yeah, but unless autovac_refresh_stats() is called, these functions\n>>>>>>>>>>>>> read the information from the\n>>>>>>>>>>>>> local hash table without re-read the stats file, so the process is very light.\n>>>>>>>>>>>>> Therefore, I think, it is better to keep the current logic to keep the\n>>>>>>>>>>>>> code simple.\n>>>>>>>>>>>>>\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> + /* We might be better to refresh stats */\n>>>>>>>>>>>>>> + use_existing_stats = false;\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> I think that we should add more comments about why it's better to\n>>>>>>>>>>>>>> refresh the stats in this case.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> + /* The relid has already vacuumed, so we might be better to use existing stats */\n>>>>>>>>>>>>>> + use_existing_stats = true;\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> I think that we should add more comments about why it's better to\n>>>>>>>>>>>>>> reuse the stats in this case.\n>>>>>>>>>>>>> I added comments.\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> Attache the patch.\n>>>>>>>>>>>>>\n>>>>>>>>>>>>\n>>>>>>>>>>>> Thank you for updating the patch. Here are some small comments on the\n>>>>>>>>>>>> latest (v4) patch.\n>>>>>>>>>>>>\n>>>>>>>>>>>> + * So if the last time we checked a table that was already vacuumed after\n>>>>>>>>>>>> + * refres stats, check the current statistics before refreshing it.\n>>>>>>>>>>>> + */\n>>>>>>>>>>>>\n>>>>>>>>>>>> s/refres/refresh/\n>>>>>>>>>> Thanks! fixed.\n>>>>>>>>>> Attached the patch.\n>>>>>>>>>>\n>>>>>>>>>>>>\n>>>>>>>>>>>> -----\n>>>>>>>>>>>> +/* Counter to determine if statistics should be refreshed */\n>>>>>>>>>>>> +static bool use_existing_stats = false;\n>>>>>>>>>>>> +\n>>>>>>>>>>>>\n>>>>>>>>>>>> I think 'use_existing_stats' can be declared within table_recheck_autovac().\n>>>>>>>>>>>>\n>>>>>>>>>>>> -----\n>>>>>>>>>>>> While testing the performance, I realized that the statistics are\n>>>>>>>>>>>> reset every time vacuumed one table, leading to re-reading the stats\n>>>>>>>>>>>> file even if 'use_existing_stats' is true. Please refer that vacuum()\n>>>>>>>>>>>> eventually calls AtEOXact_PgStat() which calls to\n>>>>>>>>>>>> pgstat_clear_snapshot().\n>>>>>>>>>>>\n>>>>>>>>>>> Good catch!\n>>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>>> I believe that's why the performance of the\n>>>>>>>>>>>> method of always checking the existing stats wasn’t good as expected.\n>>>>>>>>>>>> So if we save the statistics somewhere and use it for rechecking, the\n>>>>>>>>>>>> results of the performance benchmark will differ between these two\n>>>>>>>>>>>> methods.\n>>>>>>>>>> Thanks for you checks.\n>>>>>>>>>> But, if a worker did vacuum(), that means this worker had determined\n>>>>>>>>>> need vacuum in the\n>>>>>>>>>> table_recheck_autovac(). 
So, use_existing_stats set to false, and next\n>>>>>>>>>> time, refresh stats.\n>>>>>>>>>> Therefore I think the current patch is fine, as we want to avoid\n>>>>>>>>>> unnecessary refreshing of\n>>>>>>>>>> statistics before the actual vacuum(), right?\n>>>>>>>>>\n>>>>>>>>> Yes, you're right.\n>>>>>>>>>\n>>>>>>>>> When I benchmarked the performance of the method of always checking\n>>>>>>>>> existing stats I edited your patch so that it sets use_existing_stats\n>>>>>>>>> = true even if the second check is false (i.g., vacuum is needed).\n>>>>>>>>> And the result I got was worse than expected especially in the case of\n>>>>>>>>> a few autovacuum workers. But it doesn't evaluate the performance of\n>>>>>>>>> that method rightly as the stats snapshot is cleared every time\n>>>>>>>>> vacuum. Given you had similar results, I guess you used a similar way\n>>>>>>>>> when evaluating it, is it right? If so, it’s better to fix this issue\n>>>>>>>>> and see how the performance benchmark results will differ.\n>>>>>>>>>\n>>>>>>>>> For example, the results of the test case with 10000 tables and 1\n>>>>>>>>> autovacuum worker I reported before was:\n>>>>>>>>>\n>>>>>>>>> 10000 tables:\n>>>>>>>>> autovac_workers 1 : 158s,157s, 290s\n>>>>>>>>>\n>>>>>>>>> But after fixing that issue in the third method (always checking the\n>>>>>>>>> existing stats), the results are:\n>>>>>>>>\n>>>>>>>> Could you tell me how you fixed that issue? You copied the stats to\n>>>>>>>> somewhere as you suggested or skipped pgstat_clear_snapshot() as\n>>>>>>>> I suggested?\n>>>>>>>\n>>>>>>> I used the way you suggested in this quick test; skipped\n>>>>>>> pgstat_clear_snapshot().\n>>>>>>>\n>>>>>>>>\n>>>>>>>> Kasahara-san seems not to like the latter idea because it might\n>>>>>>>> cause bad side effect. So we should use the former idea?\n>>>>>>>\n>>>>>>> Not sure. I'm also concerned about the side effect but I've not checked yet.\n>>>>>>>\n>>>>>>> Since probably there is no big difference between the two ways in\n>>>>>>> terms of performance I'm going to see how the performance benchmark\n>>>>>>> result will change first.\n>>>>>>\n>>>>>> I've tested performance improvement again. 
From the left the execution\n>>>>>> time of the current HEAD, Kasahara-san's patch, the method of always\n>>>>>> checking the existing stats (using approach suggested by Fujii-san),\n>>>>>> in seconds.\n>>>>>>\n>>>>>> 1000 tables:\n>>>>>> autovac_workers 1 : 13s, 13s, 13s\n>>>>>> autovac_workers 2 : 6s, 4s, 4s\n>>>>>> autovac_workers 3 : 3s, 4s, 3s\n>>>>>> autovac_workers 5 : 3s, 3s, 2s\n>>>>>> autovac_workers 10: 2s, 3s, 2s\n>>>>>>\n>>>>>> 5000 tables:\n>>>>>> autovac_workers 1 : 71s, 71s, 72s\n>>>>>> autovac_workers 2 : 37s, 32s, 32s\n>>>>>> autovac_workers 3 : 29s, 26s, 26s\n>>>>>> autovac_workers 5 : 20s, 19s, 18s\n>>>>>> autovac_workers 10: 13s, 8s, 8s\n>>>>>>\n>>>>>> 10000 tables:\n>>>>>> autovac_workers 1 : 158s,157s, 159s\n>>>>>> autovac_workers 2 : 80s, 53s, 78s\n>>>>>> autovac_workers 3 : 75s, 67s, 67s\n>>>>>> autovac_workers 5 : 61s, 42s, 42s\n>>>>>> autovac_workers 10: 69s, 26s, 25s\n>>>>>>\n>>>>>> 20000 tables:\n>>>>>> autovac_workers 1 : 379s, 380s, 389s\n>>>>>> autovac_workers 2 : 236s, 232s, 233s\n>>>>>> autovac_workers 3 : 222s, 181s, 182s\n>>>>>> autovac_workers 5 : 212s, 132s, 139s\n>>>>>> autovac_workers 10: 317s, 91s, 89s\n>>>>>>\n>>>>>> I don't see a big difference between Kasahara-san's patch and the\n>>>>>> method of always checking the existing stats.\n>>>>>\n>>>>> Thanks for doing the benchmark!\n>>>>>\n>>>>> This benchmark result makes me think that we don't need to tweak\n>>>>> AtEOXact_PgStat() and can use Kasahara-san approach.\n>>>>> That's good news :)\n>>>>\n>>>> Yeah, given that all autovaucum workers have the list of tables to\n>>>> vacuum in the same order in most cases, the assumption in\n>>>> Kasahara-san’s patch that if a worker needs to vacuum a table it’s\n>>>> unlikely that it will be able to skip the next table using the current\n>>>> snapshot of stats makes sense to me.\n>>>>\n>>>> One small comment on v6 patch:\n>>>>\n>>>> + /* When we decide to do vacuum or analyze, the existing stats cannot\n>>>> + * be reused in the next cycle because it's cleared at the end of vacuum\n>>>> + * or analyze (by AtEOXact_PgStat()).\n>>>> + */\n>>>> + use_existing_stats = false;\n>>>>\n>>>> I think the comment should start on the second line (i.g., \\n is\n>>>> needed after /*).\n>>> Oops, thanks.\n>>> Fixed.\n>>\n>> Thanks for updating the patch!\n>>\n>> I applied the following cosmetic changes to the patch.\n>> Attached is the updated version of the patch.\n>> Coud you review this version?\n> Thanks for tweaking the patch.\n> \n>> - Ran pgindent to fix some warnings that \"git diff --check\"\n>> reported on the patch.\n>> - Made the order of arguments consistent between\n>> recheck_relation_needs_vacanalyze and relation_needs_vacanalyze.\n>> - Renamed the variable use_existing_stats to reuse_stats for simplicity.\n>> - Added more comments.\n> I think it's no problem.\n> The patch passed makecheck, and I benchmarked \"Anti wrap round VACUUM\n> case\" (only 20000 tables) just in case.\n> \n> From the left the execution time of the current HEAD, v8 patch.\n> tables 20000:\n> autovac workers 1: 319sec, 315sec\n> autovac workers 2: 301sec, 190sec\n> autovac workers 3: 270sec, 133sec\n> autovac workers 5: 211sec, 86sec\n> autovac workers 10: 376sec, 68sec\n> \n> It's as expected.\n\nThanks!\n\n\n>> Barring any objection, I'm thinking to commit this version.\n> +1\n\nPushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 9 Dec 2020 00:01:52 +0900", "msg_from": 
"Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" }, { "msg_contents": "On Wed, Dec 9, 2020 at 12:01 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/12/04 12:21, Kasahara Tatsuhito wrote:\n> > Hi,\n> >\n> > On Thu, Dec 3, 2020 at 9:09 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/12/03 11:46, Kasahara Tatsuhito wrote:\n> >>> On Wed, Dec 2, 2020 at 7:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>\n> >>>> On Wed, Dec 2, 2020 at 3:33 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>\n> >>>>>\n> >>>>>\n> >>>>> On 2020/12/02 12:53, Masahiko Sawada wrote:\n> >>>>>> On Tue, Dec 1, 2020 at 5:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>>>>\n> >>>>>>> On Tue, Dec 1, 2020 at 4:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>\n> >>>>>>>>\n> >>>>>>>>\n> >>>>>>>> On 2020/12/01 16:23, Masahiko Sawada wrote:\n> >>>>>>>>> On Tue, Dec 1, 2020 at 1:48 PM Kasahara Tatsuhito\n> >>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>\n> >>>>>>>>>> Hi,\n> >>>>>>>>>>\n> >>>>>>>>>> On Mon, Nov 30, 2020 at 8:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>>>>\n> >>>>>>>>>>>\n> >>>>>>>>>>>\n> >>>>>>>>>>> On 2020/11/30 10:43, Masahiko Sawada wrote:\n> >>>>>>>>>>>> On Sun, Nov 29, 2020 at 10:34 PM Kasahara Tatsuhito\n> >>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> Hi, Thanks for you comments.\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> On Fri, Nov 27, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> On 2020/11/27 18:38, Kasahara Tatsuhito wrote:\n> >>>>>>>>>>>>>>> Hi,\n> >>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>> On Fri, Nov 27, 2020 at 1:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> On 2020/11/26 10:41, Kasahara Tatsuhito wrote:\n> >>>>>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 4:18 PM Kasahara Tatsuhito\n> >>>>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> Hi,\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> On Wed, Nov 25, 2020 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>> On Fri, Sep 4, 2020 at 7:50 PM Kasahara Tatsuhito\n> >>>>>>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> Hi,\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> On Wed, Sep 2, 2020 at 2:10 AM Kasahara Tatsuhito\n> >>>>>>>>>>>>>>>>>>>>> <kasahara.tatsuhito@gmail.com> wrote:\n> >>>>>>>>>>>>>>>>>>>>>>> I wonder if we could have table_recheck_autovac do two probes of the stats\n> >>>>>>>>>>>>>>>>>>>>>>> data. First probe the existing stats data, and if it shows the table to\n> >>>>>>>>>>>>>>>>>>>>>>> be already vacuumed, return immediately. 
If not, *then* force a stats\n> >>>>>>>>>>>>>>>>>>>>>>> re-read, and check a second time.\n> >>>>>>>>>>>>>>>>>>>>>> Does the above mean that the second and subsequent table_recheck_autovac()\n> >>>>>>>>>>>>>>>>>>>>>> will be improved to first check using the previous refreshed statistics?\n> >>>>>>>>>>>>>>>>>>>>>> I think that certainly works.\n> >>>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>>> If that's correct, I'll try to create a patch for the PoC\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> I still don't know how to reproduce Jim's troubles, but I was able to reproduce\n> >>>>>>>>>>>>>>>>>>>>> what was probably a very similar problem.\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> This problem seems to be more likely to occur in cases where you have\n> >>>>>>>>>>>>>>>>>>>>> a large number of tables,\n> >>>>>>>>>>>>>>>>>>>>> i.e., a large amount of stats, and many small tables need VACUUM at\n> >>>>>>>>>>>>>>>>>>>>> the same time.\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> So I followed Tom's advice and created a patch for the PoC.\n> >>>>>>>>>>>>>>>>>>>>> This patch will enable a flag in the table_recheck_autovac function to use\n> >>>>>>>>>>>>>>>>>>>>> the existing stats next time if VACUUM (or ANALYZE) has already been done\n> >>>>>>>>>>>>>>>>>>>>> by another worker on the check after the stats have been updated.\n> >>>>>>>>>>>>>>>>>>>>> If the tables continue to require VACUUM after the refresh, then a refresh\n> >>>>>>>>>>>>>>>>>>>>> will be required instead of using the existing statistics.\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> I did simple test with HEAD and HEAD + this PoC patch.\n> >>>>>>>>>>>>>>>>>>>>> The tests were conducted in two cases.\n> >>>>>>>>>>>>>>>>>>>>> (I changed few configurations. see attached scripts)\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> 1. Normal VACUUM case\n> >>>>>>>>>>>>>>>>>>>>> - SET autovacuum = off\n> >>>>>>>>>>>>>>>>>>>>> - CREATE tables with 100 rows\n> >>>>>>>>>>>>>>>>>>>>> - DELETE 90 rows for each tables\n> >>>>>>>>>>>>>>>>>>>>> - SET autovacuum = on and restart PostgreSQL\n> >>>>>>>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> 2. 
Anti wrap round VACUUM case\n> >>>>>>>>>>>>>>>>>>>>> - CREATE brank tables\n> >>>>>>>>>>>>>>>>>>>>> - SELECT all of these tables (for generate stats)\n> >>>>>>>>>>>>>>>>>>>>> - SET autovacuum_freeze_max_age to low values and restart PostgreSQL\n> >>>>>>>>>>>>>>>>>>>>> - Consumes a lot of XIDs by using txid_curent()\n> >>>>>>>>>>>>>>>>>>>>> - Measure the time it takes for all tables to be VACUUMed\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> For each test case, the following results were obtained by changing\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers parameters to 1, 2, 3(def) 5 and 10.\n> >>>>>>>>>>>>>>>>>>>>> Also changing num of tables to 1000, 5000, 10000 and 20000.\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> Due to the poor VM environment (2 VCPU/4 GB), the results are a little unstable,\n> >>>>>>>>>>>>>>>>>>>>> but I think it's enough to ask for a trend.\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> ===========================================================================\n> >>>>>>>>>>>>>>>>>>>>> [1.Normal VACUUM case]\n> >>>>>>>>>>>>>>>>>>>>> tables:1000\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 20 sec VS (with patch) 20 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 18 sec VS (with patch) 16 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 19 sec VS (with patch) 17 sec\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> tables:5000\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 77 sec VS (with patch) 78 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 61 sec VS (with patch) 43 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 38 sec VS (with patch) 38 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 45 sec VS (with patch) 37 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 43 sec VS (with patch) 35 sec\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> tables:10000\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 152 sec VS (with patch) 153 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 119 sec VS (with patch) 98 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 87 sec VS (with patch) 78 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 100 sec VS (with patch) 66 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 97 sec VS (with patch) 56 sec\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> tables:20000\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 338 sec VS (with patch) 339 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 231 sec VS (with patch) 229 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 220 sec VS (with patch) 191 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 234 sec VS (with patch) 147 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 320 sec VS (with patch) 113 sec\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> [2.Anti wrap round VACUUM case]\n> >>>>>>>>>>>>>>>>>>>>> tables:1000\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 19 sec VS (with patch) 18 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 14 sec VS (with patch) 15 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 14 sec VS (with patch) 14 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 14 sec VS (with patch) 16 sec\n> 
>>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 16 sec VS (with patch) 14 sec\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> tables:5000\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 69 sec VS (with patch) 69 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 66 sec VS (with patch) 47 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 59 sec VS (with patch) 37 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 39 sec VS (with patch) 28 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 39 sec VS (with patch) 29 sec\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> tables:10000\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 139 sec VS (with patch) 138 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 130 sec VS (with patch) 86 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 120 sec VS (with patch) 68 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 96 sec VS (with patch) 41 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 90 sec VS (with patch) 39 sec\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> tables:20000\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 1: (HEAD) 313 sec VS (with patch) 331 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 2: (HEAD) 209 sec VS (with patch) 201 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 3: (HEAD) 227 sec VS (with patch) 141 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 5: (HEAD) 236 sec VS (with patch) 88 sec\n> >>>>>>>>>>>>>>>>>>>>> autovacuum_max_workers 10: (HEAD) 309 sec VS (with patch) 74 sec\n> >>>>>>>>>>>>>>>>>>>>> ===========================================================================\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> The cases without patch, the scalability of the worker has decreased\n> >>>>>>>>>>>>>>>>>>>>> as the number of tables has increased.\n> >>>>>>>>>>>>>>>>>>>>> In fact, the more workers there are, the longer it takes to complete\n> >>>>>>>>>>>>>>>>>>>>> VACUUM to all tables.\n> >>>>>>>>>>>>>>>>>>>>> The cases with patch, it shows good scalability with respect to the\n> >>>>>>>>>>>>>>>>>>>>> number of workers.\n> >>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>> It seems a good performance improvement even without the patch of\n> >>>>>>>>>>>>>>>>>>>> shared memory based stats collector.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> Sounds great!\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> Note that perf top results showed that hash_search_with_hash_value,\n> >>>>>>>>>>>>>>>>>>>>> hash_seq_search and\n> >>>>>>>>>>>>>>>>>>>>> pgstat_read_statsfiles are dominant during VACUUM in all patterns,\n> >>>>>>>>>>>>>>>>>>>>> with or without the patch.\n> >>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>>> Therefore, there is still a need to find ways to optimize the reading\n> >>>>>>>>>>>>>>>>>>>>> of large amounts of stats.\n> >>>>>>>>>>>>>>>>>>>>> However, this patch is effective in its own right, and since there are\n> >>>>>>>>>>>>>>>>>>>>> only a few parts to modify,\n> >>>>>>>>>>>>>>>>>>>>> I think it should be able to be applied to current (preferably\n> >>>>>>>>>>>>>>>>>>>>> pre-v13) PostgreSQL.\n> >>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>> +1\n> >>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>> +\n> >>>>>>>>>>>>>>>>>>>> + /* We might be better to refresh stats */\n> >>>>>>>>>>>>>>>>>>>> + use_existing_stats = false;\n> >>>>>>>>>>>>>>>>>>>> }\n> >>>>>>>>>>>>>>>>>>>> + else\n> >>>>>>>>>>>>>>>>>>>> + {\n> 
>>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>> - heap_freetuple(classTup);\n> >>>>>>>>>>>>>>>>>>>> + heap_freetuple(classTup);\n> >>>>>>>>>>>>>>>>>>>> + /* The relid has already vacuumed, so we might be better to\n> >>>>>>>>>>>>>>>>>>>> use exiting stats */\n> >>>>>>>>>>>>>>>>>>>> + use_existing_stats = true;\n> >>>>>>>>>>>>>>>>>>>> + }\n> >>>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>>> With that patch, the autovacuum process refreshes the stats in the\n> >>>>>>>>>>>>>>>>>>>> next check if it finds out that the table still needs to be vacuumed.\n> >>>>>>>>>>>>>>>>>>>> But I guess it's not necessarily true because the next table might be\n> >>>>>>>>>>>>>>>>>>>> vacuumed already. So I think we might want to always use the existing\n> >>>>>>>>>>>>>>>>>>>> for the first check. What do you think?\n> >>>>>>>>>>>>>>>>>>> Thanks for your comment.\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> If we assume the case where some workers vacuum on large tables\n> >>>>>>>>>>>>>>>>>>> and a single worker vacuum on small tables, the processing\n> >>>>>>>>>>>>>>>>>>> performance of the single worker will be slightly lower if the\n> >>>>>>>>>>>>>>>>>>> existing statistics are checked every time.\n> >>>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> In fact, at first I tried to check the existing stats every time,\n> >>>>>>>>>>>>>>>>>>> but the performance was slightly worse in cases with a small number of workers.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> Do you have this benchmark result?\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> (Checking the existing stats is lightweight , but at high frequency,\n> >>>>>>>>>>>>>>>>>>> it affects processing performance.)\n> >>>>>>>>>>>>>>>>>>> Therefore, at after refresh statistics, determine whether autovac\n> >>>>>>>>>>>>>>>>>>> should use the existing statistics.\n> >>>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>> Yeah, since the test you used uses a lot of small tables, if there are\n> >>>>>>>>>>>>>>>>>> a few workers, checking the existing stats is unlikely to return true\n> >>>>>>>>>>>>>>>>>> (no need to vacuum). So the cost of existing stats check ends up being\n> >>>>>>>>>>>>>>>>>> overhead. Not sure how slow always checking the existing stats was,\n> >>>>>>>>>>>>>>>>>> but given that the shared memory based stats collector patch could\n> >>>>>>>>>>>>>>>>>> improve the performance of refreshing stats, it might be better not to\n> >>>>>>>>>>>>>>>>>> check the existing stats frequently like the patch does. Anyway, I\n> >>>>>>>>>>>>>>>>>> think it’s better to evaluate the performance improvement with other\n> >>>>>>>>>>>>>>>>>> cases too.\n> >>>>>>>>>>>>>>>>> Yeah, I would like to see how much the performance changes in other cases.\n> >>>>>>>>>>>>>>>>> In addition, if the shared-based-stats patch is applied, we won't need to reload\n> >>>>>>>>>>>>>>>>> a huge stats file, so we will just have to check the stats on\n> >>>>>>>>>>>>>>>>> shared-mem every time.\n> >>>>>>>>>>>>>>>>> Perhaps the logic of table_recheck_autovac could be simpler.\n> >>>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>>>>> BTW, I found some typos in comments, so attache a fixed version.\n> >>>>>>>>>>>>>>>>\n> >>>>>>>>>>>>>>>> The patch adds some duplicated codes into table_recheck_autovac().\n> >>>>>>>>>>>>>>>> It's better to make the common function performing them and make\n> >>>>>>>>>>>>>>>> table_recheck_autovac() call that common function, to simplify the code.\n> >>>>>>>>>>>>>>> Thanks for your comment.\n> >>>>>>>>>>>>>>> Hmm.. 
I've cut out the duplicate part.\n> >>>>>>>>>>>>>>> Attach the patch.\n> >>>>>>>>>>>>>>> Could you confirm that it fits your expecting?\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> Yes, thanks for updataing the patch! Here are another review comments.\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> + shared = pgstat_fetch_stat_dbentry(InvalidOid);\n> >>>>>>>>>>>>>> + dbentry = pgstat_fetch_stat_dbentry(MyDatabaseId);\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> When using the existing stats, ISTM that these are not necessary and\n> >>>>>>>>>>>>>> we can reuse \"shared\" and \"dbentry\" obtained before. Right?\n> >>>>>>>>>>>>> Yeah, but unless autovac_refresh_stats() is called, these functions\n> >>>>>>>>>>>>> read the information from the\n> >>>>>>>>>>>>> local hash table without re-read the stats file, so the process is very light.\n> >>>>>>>>>>>>> Therefore, I think, it is better to keep the current logic to keep the\n> >>>>>>>>>>>>> code simple.\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> + /* We might be better to refresh stats */\n> >>>>>>>>>>>>>> + use_existing_stats = false;\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> I think that we should add more comments about why it's better to\n> >>>>>>>>>>>>>> refresh the stats in this case.\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> + /* The relid has already vacuumed, so we might be better to use existing stats */\n> >>>>>>>>>>>>>> + use_existing_stats = true;\n> >>>>>>>>>>>>>>\n> >>>>>>>>>>>>>> I think that we should add more comments about why it's better to\n> >>>>>>>>>>>>>> reuse the stats in this case.\n> >>>>>>>>>>>>> I added comments.\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>> Attache the patch.\n> >>>>>>>>>>>>>\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> Thank you for updating the patch. Here are some small comments on the\n> >>>>>>>>>>>> latest (v4) patch.\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> + * So if the last time we checked a table that was already vacuumed after\n> >>>>>>>>>>>> + * refres stats, check the current statistics before refreshing it.\n> >>>>>>>>>>>> + */\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> s/refres/refresh/\n> >>>>>>>>>> Thanks! fixed.\n> >>>>>>>>>> Attached the patch.\n> >>>>>>>>>>\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> -----\n> >>>>>>>>>>>> +/* Counter to determine if statistics should be refreshed */\n> >>>>>>>>>>>> +static bool use_existing_stats = false;\n> >>>>>>>>>>>> +\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> I think 'use_existing_stats' can be declared within table_recheck_autovac().\n> >>>>>>>>>>>>\n> >>>>>>>>>>>> -----\n> >>>>>>>>>>>> While testing the performance, I realized that the statistics are\n> >>>>>>>>>>>> reset every time vacuumed one table, leading to re-reading the stats\n> >>>>>>>>>>>> file even if 'use_existing_stats' is true. Please refer that vacuum()\n> >>>>>>>>>>>> eventually calls AtEOXact_PgStat() which calls to\n> >>>>>>>>>>>> pgstat_clear_snapshot().\n> >>>>>>>>>>>\n> >>>>>>>>>>> Good catch!\n> >>>>>>>>>>>\n> >>>>>>>>>>>\n> >>>>>>>>>>>> I believe that's why the performance of the\n> >>>>>>>>>>>> method of always checking the existing stats wasn’t good as expected.\n> >>>>>>>>>>>> So if we save the statistics somewhere and use it for rechecking, the\n> >>>>>>>>>>>> results of the performance benchmark will differ between these two\n> >>>>>>>>>>>> methods.\n> >>>>>>>>>> Thanks for you checks.\n> >>>>>>>>>> But, if a worker did vacuum(), that means this worker had determined\n> >>>>>>>>>> need vacuum in the\n> >>>>>>>>>> table_recheck_autovac(). 
So, use_existing_stats set to false, and next\n> >>>>>>>>>> time, refresh stats.\n> >>>>>>>>>> Therefore I think the current patch is fine, as we want to avoid\n> >>>>>>>>>> unnecessary refreshing of\n> >>>>>>>>>> statistics before the actual vacuum(), right?\n> >>>>>>>>>\n> >>>>>>>>> Yes, you're right.\n> >>>>>>>>>\n> >>>>>>>>> When I benchmarked the performance of the method of always checking\n> >>>>>>>>> existing stats I edited your patch so that it sets use_existing_stats\n> >>>>>>>>> = true even if the second check is false (i.g., vacuum is needed).\n> >>>>>>>>> And the result I got was worse than expected especially in the case of\n> >>>>>>>>> a few autovacuum workers. But it doesn't evaluate the performance of\n> >>>>>>>>> that method rightly as the stats snapshot is cleared every time\n> >>>>>>>>> vacuum. Given you had similar results, I guess you used a similar way\n> >>>>>>>>> when evaluating it, is it right? If so, it’s better to fix this issue\n> >>>>>>>>> and see how the performance benchmark results will differ.\n> >>>>>>>>>\n> >>>>>>>>> For example, the results of the test case with 10000 tables and 1\n> >>>>>>>>> autovacuum worker I reported before was:\n> >>>>>>>>>\n> >>>>>>>>> 10000 tables:\n> >>>>>>>>> autovac_workers 1 : 158s,157s, 290s\n> >>>>>>>>>\n> >>>>>>>>> But after fixing that issue in the third method (always checking the\n> >>>>>>>>> existing stats), the results are:\n> >>>>>>>>\n> >>>>>>>> Could you tell me how you fixed that issue? You copied the stats to\n> >>>>>>>> somewhere as you suggested or skipped pgstat_clear_snapshot() as\n> >>>>>>>> I suggested?\n> >>>>>>>\n> >>>>>>> I used the way you suggested in this quick test; skipped\n> >>>>>>> pgstat_clear_snapshot().\n> >>>>>>>\n> >>>>>>>>\n> >>>>>>>> Kasahara-san seems not to like the latter idea because it might\n> >>>>>>>> cause bad side effect. So we should use the former idea?\n> >>>>>>>\n> >>>>>>> Not sure. I'm also concerned about the side effect but I've not checked yet.\n> >>>>>>>\n> >>>>>>> Since probably there is no big difference between the two ways in\n> >>>>>>> terms of performance I'm going to see how the performance benchmark\n> >>>>>>> result will change first.\n> >>>>>>\n> >>>>>> I've tested performance improvement again. 
From the left the execution\n> >>>>>> time of the current HEAD, Kasahara-san's patch, the method of always\n> >>>>>> checking the existing stats (using approach suggested by Fujii-san),\n> >>>>>> in seconds.\n> >>>>>>\n> >>>>>> 1000 tables:\n> >>>>>> autovac_workers 1 : 13s, 13s, 13s\n> >>>>>> autovac_workers 2 : 6s, 4s, 4s\n> >>>>>> autovac_workers 3 : 3s, 4s, 3s\n> >>>>>> autovac_workers 5 : 3s, 3s, 2s\n> >>>>>> autovac_workers 10: 2s, 3s, 2s\n> >>>>>>\n> >>>>>> 5000 tables:\n> >>>>>> autovac_workers 1 : 71s, 71s, 72s\n> >>>>>> autovac_workers 2 : 37s, 32s, 32s\n> >>>>>> autovac_workers 3 : 29s, 26s, 26s\n> >>>>>> autovac_workers 5 : 20s, 19s, 18s\n> >>>>>> autovac_workers 10: 13s, 8s, 8s\n> >>>>>>\n> >>>>>> 10000 tables:\n> >>>>>> autovac_workers 1 : 158s,157s, 159s\n> >>>>>> autovac_workers 2 : 80s, 53s, 78s\n> >>>>>> autovac_workers 3 : 75s, 67s, 67s\n> >>>>>> autovac_workers 5 : 61s, 42s, 42s\n> >>>>>> autovac_workers 10: 69s, 26s, 25s\n> >>>>>>\n> >>>>>> 20000 tables:\n> >>>>>> autovac_workers 1 : 379s, 380s, 389s\n> >>>>>> autovac_workers 2 : 236s, 232s, 233s\n> >>>>>> autovac_workers 3 : 222s, 181s, 182s\n> >>>>>> autovac_workers 5 : 212s, 132s, 139s\n> >>>>>> autovac_workers 10: 317s, 91s, 89s\n> >>>>>>\n> >>>>>> I don't see a big difference between Kasahara-san's patch and the\n> >>>>>> method of always checking the existing stats.\n> >>>>>\n> >>>>> Thanks for doing the benchmark!\n> >>>>>\n> >>>>> This benchmark result makes me think that we don't need to tweak\n> >>>>> AtEOXact_PgStat() and can use Kasahara-san approach.\n> >>>>> That's good news :)\n> >>>>\n> >>>> Yeah, given that all autovaucum workers have the list of tables to\n> >>>> vacuum in the same order in most cases, the assumption in\n> >>>> Kasahara-san’s patch that if a worker needs to vacuum a table it’s\n> >>>> unlikely that it will be able to skip the next table using the current\n> >>>> snapshot of stats makes sense to me.\n> >>>>\n> >>>> One small comment on v6 patch:\n> >>>>\n> >>>> + /* When we decide to do vacuum or analyze, the existing stats cannot\n> >>>> + * be reused in the next cycle because it's cleared at the end of vacuum\n> >>>> + * or analyze (by AtEOXact_PgStat()).\n> >>>> + */\n> >>>> + use_existing_stats = false;\n> >>>>\n> >>>> I think the comment should start on the second line (i.g., \\n is\n> >>>> needed after /*).\n> >>> Oops, thanks.\n> >>> Fixed.\n> >>\n> >> Thanks for updating the patch!\n> >>\n> >> I applied the following cosmetic changes to the patch.\n> >> Attached is the updated version of the patch.\n> >> Coud you review this version?\n> > Thanks for tweaking the patch.\n> >\n> >> - Ran pgindent to fix some warnings that \"git diff --check\"\n> >> reported on the patch.\n> >> - Made the order of arguments consistent between\n> >> recheck_relation_needs_vacanalyze and relation_needs_vacanalyze.\n> >> - Renamed the variable use_existing_stats to reuse_stats for simplicity.\n> >> - Added more comments.\n> > I think it's no problem.\n> > The patch passed makecheck, and I benchmarked \"Anti wrap round VACUUM\n> > case\" (only 20000 tables) just in case.\n> >\n> > From the left the execution time of the current HEAD, v8 patch.\n> > tables 20000:\n> > autovac workers 1: 319sec, 315sec\n> > autovac workers 2: 301sec, 190sec\n> > autovac workers 3: 270sec, 133sec\n> > autovac workers 5: 211sec, 86sec\n> > autovac workers 10: 376sec, 68sec\n> >\n> > It's as expected.\n>\n> Thanks!\n>\n>\n> >> Barring any objection, I'm thinking to commit this version.\n> > +1\n>\n> Pushed.\nThanks 
!\n\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n\n\n--\nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n", "msg_date": "Wed, 9 Dec 2020 09:47:53 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: autovac issue with large number of tables" } ]
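For reference, a schematic C sketch of the two-probe recheck described in the thread above: probe the stats snapshot that is already loaded, and force a re-read of the stats file only when the table still appears to need vacuum. The names reuse_stats, recheck_relation_needs_vacanalyze() and autovac_refresh_stats() follow the discussion, but the signatures here are simplified stand-ins; this illustrates the control flow only and is not the committed patch.

/*
 * Simplified sketch, not the committed code.
 * recheck_relation_needs_vacanalyze() stands in for the real per-table
 * check against the currently loaded stats snapshot; its actual
 * signature in autovacuum.c differs.
 */
static bool reuse_stats = false;

static bool
table_recheck_needs_vacuum(Oid relid)
{
    bool        dovacuum;

    if (reuse_stats)
    {
        /* Cheap probe against the snapshot we already have. */
        dovacuum = recheck_relation_needs_vacanalyze(relid);
        if (!dovacuum)
            return false;       /* another worker already vacuumed it */
        /* The snapshot may be stale; fall through and re-read the stats. */
    }

    autovac_refresh_stats();    /* force a fresh read of the stats file */
    dovacuum = recheck_relation_needs_vacanalyze(relid);

    /*
     * If we are going to vacuum, the snapshot is cleared at the end of the
     * vacuum (AtEOXact_PgStat), so it cannot be reused for the next table.
     * Otherwise other workers are probably ahead of us, so reuse it.
     */
    reuse_stats = !dovacuum;

    return dovacuum;
}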
[ { "msg_contents": "I have a user case like this:\n\nrs = prepared_stmt.execute(1);\nwhile(rs.next())\n{\n // do something with the result and commit the transaction.\n conn.commit();\n}\n\nThe driver used the extended protocol in this case. It works like this: 1).\nParse ->\nPreparedStmt. 2). Bind -> Bind the prepared stmt with a Portal, no chance\nto\nset the CURSOR_OPT_HOLD option. 3). Execute. 4). Commit - the portal was\ndropped at this stage. 5). when fetching the next batch of results, we get\nthe error\n\"Portal doesn't exist\"\n\nThere are several methods we can work around this, but no one is perfect.\n1.run the prepared stmt in a dedicated connection. (The number of\nconnection will\ndoubled)\n2. use the with hold cursor. It doesn't support any bind parameter, so we\nhave\n to create a cursor for each dedicated id.\n3. don't commit the transaction. -- long transaction with many rows locked.\n\nI have several questions about this case:\n1. How about filling a cursorOptions information in bind protocol? then we\ncan\nset the portal->cursorOptions accordingly? if so, how to be compatible\nwith the\nold driver usually?\n2. Currently I want to add a new GUC parameter, if set it to true, server\nwill\ncreate a holdable portal, or else nothing changed. Then let the user set\nit to true in the above case and reset it to false afterward. Is there any\nissue\nwith this method?\n\n-- \nBest Regards\nAndy Fan\n\nI have a user case like this:rs = prepared_stmt.execute(1);while(rs.next()){    // do something with the result and commit the transaction.    conn.commit();}The driver used the extended protocol in this case. It works like this: 1). Parse ->PreparedStmt.  2). Bind -> Bind the prepared stmt with a Portal, no chance toset the CURSOR_OPT_HOLD option.  3). Execute.   4). Commit - the portal wasdropped at this stage.  5). when fetching the next batch of results, we get the error\"Portal doesn't exist\" There are several methods we can work around this, but no one is perfect.1.run the prepared stmt in a dedicated connection.  (The number of connection willdoubled)2. use the with hold cursor.  It doesn't support any bind parameter, so we have   to create a cursor for each dedicated id.3. don't commit the transaction.  -- long transaction with many rows locked.I have several questions about this case:1. How about filling a cursorOptions information in bind protocol?  then we canset the portal->cursorOptions accordingly?  if so, how to be compatible with theold driver usually? 2. Currently I want to add a new GUC parameter, if set it to true, server willcreate a holdable portal, or else nothing changed.  Then let the user set it to true in the above case and reset it to false afterward.  Is there any issue with this method?-- Best RegardsAndy Fan", "msg_date": "Mon, 27 Jul 2020 11:52:42 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Allows Extend Protocol support CURSOR_OPT_HOLD with prepared stmt." }, { "msg_contents": ">\n>\n> 2. Currently I want to add a new GUC parameter, if set it to true, server\n> will\n> create a holdable portal, or else nothing changed. Then let the user set\n> it to true in the above case and reset it to false afterward. Is there\n> any issue\n> with this method?\n>\n>\nI forget to say in this case, the user has to drop the holdable\nportal explicitly.\n\n\n-- \nBest Regards\nAndy Fan\n\n2. Currently I want to add a new GUC parameter, if set it to true, server willcreate a holdable portal, or else nothing changed.  
Then let the user set it to true in the above case and reset it to false afterward.  Is there any issue with this method? I forget to say in this case, the user has to drop the holdable portal  explicitly. -- Best RegardsAndy Fan", "msg_date": "Mon, 27 Jul 2020 11:57:19 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allows Extend Protocol support CURSOR_OPT_HOLD with prepared\n stmt." }, { "msg_contents": "On Mon, Jul 27, 2020 at 11:57 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>> 2. Currently I want to add a new GUC parameter, if set it to true, server\n>> will\n>> create a holdable portal, or else nothing changed. Then let the user set\n>> it to true in the above case and reset it to false afterward. Is there\n>> any issue\n>> with this method?\n>>\n>>\n> I forget to say in this case, the user has to drop the holdable\n> portal explicitly.\n>\n>\n>\nAfter some days's hack and testing, I found more issues to support the\nfollowing case\n\nrs = prepared_stmt.execute(1);\nwhile(rs.next())\n{\n // do something with the result (mainly DML )\n conn.commit(); or conn.rollback();\n\n // commit / rollback to avoid the long lock holding.\n}\n\nThe holdable portal is still be dropped in transaction aborted/rollbacked\ncase since\nthe HoldPortal doesn't happens before that and \"abort/rollabck\" means\nsomething\nwrong so it is risk to hold it again. What I did to fix this issue is\nHoldPortal just after\nwe define a Holdable portal. However, that's bad for performance.\nOriginally, we just\nneeded to scan the result when needed, now we have to hold all the results\nand then fetch\nand the data one by one.\n\nThe above user case looks reasonable to me IMO, I would say it is kind of\n\"tech debt\"\nin postgres. To support this completely, looks we have to decouple the\nsnapshot/locking\nmanagement with transaction? If so, it looks like a huge change. I wonder\nif anybody\ntried to resolve this issue and where do we get to that point?\n\n-- \nBest Regards\nAndy Fan\n\nOn Mon, Jul 27, 2020 at 11:57 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:2. Currently I want to add a new GUC parameter, if set it to true, server willcreate a holdable portal, or else nothing changed.  Then let the user set it to true in the above case and reset it to false afterward.  Is there any issue with this method? I forget to say in this case, the user has to drop the holdable portal  explicitly. After some days's hack and testing, I found more issues to support the following casers = prepared_stmt.execute(1);while(rs.next()){    // do something with the result  (mainly DML )     conn.commit();  or  conn.rollback();      // commit / rollback to avoid the long lock holding.}The holdable portal is still be dropped in transaction aborted/rollbacked case since the HoldPortal doesn't happens before that and \"abort/rollabck\" means somethingwrong so it is risk to hold it again.  What I did to fix this issue is HoldPortal just afterwe define a Holdable portal.  However, that's bad for performance.  Originally, we justneeded to scan the result when needed, now we have to hold all the results and then fetchand the data one by one. The above user case looks reasonable to me IMO,  I would say it is kind of \"tech debt\" in postgres.  To support this completely, looks we have to decouple the snapshot/lockingmanagement with transaction? If so, it looks like a huge change. I wonder if anybody tried to resolve this issue and where do we get to that point? 
-- Best RegardsAndy Fan", "msg_date": "Wed, 12 Aug 2020 10:33:31 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allows Extend Protocol support CURSOR_OPT_HOLD with prepared\n stmt." }, { "msg_contents": "On Tue, 11 Aug 2020 at 22:33, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Mon, Jul 27, 2020 at 11:57 AM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n>\n>>\n>>> 2. Currently I want to add a new GUC parameter, if set it to true,\n>>> server will\n>>> create a holdable portal, or else nothing changed. Then let the user\n>>> set\n>>> it to true in the above case and reset it to false afterward. Is there\n>>> any issue\n>>> with this method?\n>>>\n>>>\n>> I forget to say in this case, the user has to drop the holdable\n>> portal explicitly.\n>>\n>>\n>>\n> After some days's hack and testing, I found more issues to support the\n> following case\n>\n> rs = prepared_stmt.execute(1);\n> while(rs.next())\n> {\n> // do something with the result (mainly DML )\n> conn.commit(); or conn.rollback();\n>\n> // commit / rollback to avoid the long lock holding.\n> }\n>\n> The holdable portal is still be dropped in transaction aborted/rollbacked\n> case since\n> the HoldPortal doesn't happens before that and \"abort/rollabck\" means\n> something\n> wrong so it is risk to hold it again. What I did to fix this issue is\n> HoldPortal just after\n> we define a Holdable portal. However, that's bad for performance.\n> Originally, we just\n> needed to scan the result when needed, now we have to hold all the results\n> and then fetch\n> and the data one by one.\n>\n> The above user case looks reasonable to me IMO, I would say it is kind of\n> \"tech debt\"\n> in postgres. To support this completely, looks we have to decouple the\n> snapshot/locking\n> management with transaction? If so, it looks like a huge change. I wonder\n> if anybody\n> tried to resolve this issue and where do we get to that point?\n>\n> --\n> Best Regards\n> Andy Fan\n>\n\n\nI think if you set the fetch size the driver will use a named cursor and\nthis should work\n\nDave Cramer\nwww.postgres.rocks\n\nOn Tue, 11 Aug 2020 at 22:33, Andy Fan <zhihui.fan1213@gmail.com> wrote:On Mon, Jul 27, 2020 at 11:57 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:2. Currently I want to add a new GUC parameter, if set it to true, server willcreate a holdable portal, or else nothing changed.  Then let the user set it to true in the above case and reset it to false afterward.  Is there any issue with this method? I forget to say in this case, the user has to drop the holdable portal  explicitly. After some days's hack and testing, I found more issues to support the following casers = prepared_stmt.execute(1);while(rs.next()){    // do something with the result  (mainly DML )     conn.commit();  or  conn.rollback();      // commit / rollback to avoid the long lock holding.}The holdable portal is still be dropped in transaction aborted/rollbacked case since the HoldPortal doesn't happens before that and \"abort/rollabck\" means somethingwrong so it is risk to hold it again.  What I did to fix this issue is HoldPortal just afterwe define a Holdable portal.  However, that's bad for performance.  Originally, we justneeded to scan the result when needed, now we have to hold all the results and then fetchand the data one by one. The above user case looks reasonable to me IMO,  I would say it is kind of \"tech debt\" in postgres.  
To support this completely, looks we have to decouple the snapshot/lockingmanagement with transaction? If so, it looks like a huge change. I wonder if anybody tried to resolve this issue and where do we get to that point? -- Best RegardsAndy FanI think if you set the fetch size the driver will use a named cursor and this should workDave Cramerwww.postgres.rocks", "msg_date": "Wed, 12 Aug 2020 05:54:31 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Allows Extend Protocol support CURSOR_OPT_HOLD with prepared\n stmt." }, { "msg_contents": "On Wed, Aug 12, 2020 at 5:54 PM Dave Cramer <davecramer@postgres.rocks>\nwrote:\n\n>\n>\n>\n> On Tue, 11 Aug 2020 at 22:33, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>>\n>>\n>> On Mon, Jul 27, 2020 at 11:57 AM Andy Fan <zhihui.fan1213@gmail.com>\n>> wrote:\n>>\n>>>\n>>>> 2. Currently I want to add a new GUC parameter, if set it to true,\n>>>> server will\n>>>> create a holdable portal, or else nothing changed. Then let the user\n>>>> set\n>>>> it to true in the above case and reset it to false afterward. Is there\n>>>> any issue\n>>>> with this method?\n>>>>\n>>>>\n>>> I forget to say in this case, the user has to drop the holdable\n>>> portal explicitly.\n>>>\n>>>\n>>>\n>> After some days's hack and testing, I found more issues to support the\n>> following case\n>>\n>> rs = prepared_stmt.execute(1);\n>> while(rs.next())\n>> {\n>> // do something with the result (mainly DML )\n>> conn.commit(); or conn.rollback();\n>>\n>> // commit / rollback to avoid the long lock holding.\n>> }\n>>\n>> The holdable portal is still be dropped in transaction aborted/rollbacked\n>> case since\n>> the HoldPortal doesn't happens before that and \"abort/rollabck\" means\n>> something\n>> wrong so it is risk to hold it again. What I did to fix this issue is\n>> HoldPortal just after\n>> we define a Holdable portal. However, that's bad for performance.\n>> Originally, we just\n>> needed to scan the result when needed, now we have to hold all the\n>> results and then fetch\n>> and the data one by one.\n>>\n>> The above user case looks reasonable to me IMO, I would say it is kind\n>> of \"tech debt\"\n>> in postgres. To support this completely, looks we have to decouple the\n>> snapshot/locking\n>> management with transaction? If so, it looks like a huge change. I wonder\n>> if anybody\n>> tried to resolve this issue and where do we get to that point?\n>>\n>> --\n>> Best Regards\n>> Andy Fan\n>>\n>\n>\n> I think if you set the fetch size the driver will use a named cursor and\n> this should work\n>\n>\nI knew this point before working on that, but I heard from my customer that\nthe size\nwould be pretty big, so I gave up on this idea (too early). However,\nafter working on\na Holdable solution, I see there is very little difference between caching\nthe result\non the server or client. If the drivers can use the tempfile as an extra\nstore, then\nthings will be better than the server. Or else, things will be still\ncomplex. Thanks\nfor your reminder!\n\n-- \nBest Regards\nAndy Fan\n\nOn Wed, Aug 12, 2020 at 5:54 PM Dave Cramer <davecramer@postgres.rocks> wrote:On Tue, 11 Aug 2020 at 22:33, Andy Fan <zhihui.fan1213@gmail.com> wrote:On Mon, Jul 27, 2020 at 11:57 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:2. Currently I want to add a new GUC parameter, if set it to true, server willcreate a holdable portal, or else nothing changed.  Then let the user set it to true in the above case and reset it to false afterward.  
Is there any issue with this method? I forget to say in this case, the user has to drop the holdable portal  explicitly. After some days's hack and testing, I found more issues to support the following casers = prepared_stmt.execute(1);while(rs.next()){    // do something with the result  (mainly DML )     conn.commit();  or  conn.rollback();      // commit / rollback to avoid the long lock holding.}The holdable portal is still be dropped in transaction aborted/rollbacked case since the HoldPortal doesn't happens before that and \"abort/rollabck\" means somethingwrong so it is risk to hold it again.  What I did to fix this issue is HoldPortal just afterwe define a Holdable portal.  However, that's bad for performance.  Originally, we justneeded to scan the result when needed, now we have to hold all the results and then fetchand the data one by one. The above user case looks reasonable to me IMO,  I would say it is kind of \"tech debt\" in postgres.  To support this completely, looks we have to decouple the snapshot/lockingmanagement with transaction? If so, it looks like a huge change. I wonder if anybody tried to resolve this issue and where do we get to that point? -- Best RegardsAndy FanI think if you set the fetch size the driver will use a named cursor and this should work I knew this point before working on that, but I heard from my customer that the sizewould be pretty big, so I gave up on this idea (too early).   However, after working ona Holdable solution, I see there is very little difference between caching the resulton the server or client.   If the drivers can use the tempfile as an extra store, then things will be better than the server.  Or else,  things will be still complex.  Thanksfor your reminder!-- Best RegardsAndy Fan", "msg_date": "Wed, 12 Aug 2020 20:11:12 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allows Extend Protocol support CURSOR_OPT_HOLD with prepared\n stmt." }, { "msg_contents": "On Wed, Aug 12, 2020 at 8:11 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Wed, Aug 12, 2020 at 5:54 PM Dave Cramer <davecramer@postgres.rocks>\n> wrote:\n>\n>>\n>>\n>>\n>> On Tue, 11 Aug 2020 at 22:33, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>>\n>>>\n>>>\n>>> On Mon, Jul 27, 2020 at 11:57 AM Andy Fan <zhihui.fan1213@gmail.com>\n>>> wrote:\n>>>\n>>>>\n>>>>> 2. Currently I want to add a new GUC parameter, if set it to true,\n>>>>> server will\n>>>>> create a holdable portal, or else nothing changed. Then let the user\n>>>>> set\n>>>>> it to true in the above case and reset it to false afterward. Is\n>>>>> there any issue\n>>>>> with this method?\n>>>>>\n>>>>>\n>>>> I forget to say in this case, the user has to drop the holdable\n>>>> portal explicitly.\n>>>>\n>>>>\n>>>>\n>>> After some days's hack and testing, I found more issues to support the\n>>> following case\n>>>\n>>> rs = prepared_stmt.execute(1);\n>>> while(rs.next())\n>>> {\n>>> // do something with the result (mainly DML )\n>>> conn.commit(); or conn.rollback();\n>>>\n>>> // commit / rollback to avoid the long lock holding.\n>>> }\n>>>\n>>> The holdable portal is still be dropped in transaction\n>>> aborted/rollbacked case since\n>>> the HoldPortal doesn't happens before that and \"abort/rollabck\" means\n>>> something\n>>> wrong so it is risk to hold it again. What I did to fix this issue is\n>>> HoldPortal just after\n>>> we define a Holdable portal. 
However, that's bad for performance.\n>>> Originally, we just\n>>> needed to scan the result when needed, now we have to hold all the\n>>> results and then fetch\n>>> and the data one by one.\n>>>\n>>> The above user case looks reasonable to me IMO, I would say it is kind\n>>> of \"tech debt\"\n>>> in postgres. To support this completely, looks we have to decouple the\n>>> snapshot/locking\n>>> management with transaction? If so, it looks like a huge change. I\n>>> wonder if anybody\n>>> tried to resolve this issue and where do we get to that point?\n>>>\n>>> --\n>>> Best Regards\n>>> Andy Fan\n>>>\n>>\n>>\n>> I think if you set the fetch size the driver will use a named cursor and\n>> this should work\n>>\n>>\n> If the drivers can use the tempfile as an extra store, then things will be\n> better than the server.\n>\n\nMaybe not much better, just the same as each other. Both need to\nstore all of them first and fetch them from the temp store again.\n\n-- \nBest Regards\nAndy Fan\n\nOn Wed, Aug 12, 2020 at 8:11 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:On Wed, Aug 12, 2020 at 5:54 PM Dave Cramer <davecramer@postgres.rocks> wrote:On Tue, 11 Aug 2020 at 22:33, Andy Fan <zhihui.fan1213@gmail.com> wrote:On Mon, Jul 27, 2020 at 11:57 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:2. Currently I want to add a new GUC parameter, if set it to true, server willcreate a holdable portal, or else nothing changed.  Then let the user set it to true in the above case and reset it to false afterward.  Is there any issue with this method? I forget to say in this case, the user has to drop the holdable portal  explicitly. After some days's hack and testing, I found more issues to support the following casers = prepared_stmt.execute(1);while(rs.next()){    // do something with the result  (mainly DML )     conn.commit();  or  conn.rollback();      // commit / rollback to avoid the long lock holding.}The holdable portal is still be dropped in transaction aborted/rollbacked case since the HoldPortal doesn't happens before that and \"abort/rollabck\" means somethingwrong so it is risk to hold it again.  What I did to fix this issue is HoldPortal just afterwe define a Holdable portal.  However, that's bad for performance.  Originally, we justneeded to scan the result when needed, now we have to hold all the results and then fetchand the data one by one. The above user case looks reasonable to me IMO,  I would say it is kind of \"tech debt\" in postgres.  To support this completely, looks we have to decouple the snapshot/lockingmanagement with transaction? If so, it looks like a huge change. I wonder if anybody tried to resolve this issue and where do we get to that point? -- Best RegardsAndy FanI think if you set the fetch size the driver will use a named cursor and this should work If the drivers can use the tempfile as an extra store, then things will be better than the server. Maybe not much better, just the same as each other.  Both need tostore all of them first and fetch them from the temp store again. -- Best RegardsAndy Fan", "msg_date": "Wed, 12 Aug 2020 20:14:12 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allows Extend Protocol support CURSOR_OPT_HOLD with prepared\n stmt." 
}, { "msg_contents": "On Wed, 12 Aug 2020 at 08:14, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Wed, Aug 12, 2020 at 8:11 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>>\n>>\n>> On Wed, Aug 12, 2020 at 5:54 PM Dave Cramer <davecramer@postgres.rocks>\n>> wrote:\n>>\n>>>\n>>>\n>>>\n>>> On Tue, 11 Aug 2020 at 22:33, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>>>\n>>>>\n>>>>\n>>>> On Mon, Jul 27, 2020 at 11:57 AM Andy Fan <zhihui.fan1213@gmail.com>\n>>>> wrote:\n>>>>\n>>>>>\n>>>>>> 2. Currently I want to add a new GUC parameter, if set it to true,\n>>>>>> server will\n>>>>>> create a holdable portal, or else nothing changed. Then let the user\n>>>>>> set\n>>>>>> it to true in the above case and reset it to false afterward. Is\n>>>>>> there any issue\n>>>>>> with this method?\n>>>>>>\n>>>>>>\n>>>>> I forget to say in this case, the user has to drop the holdable\n>>>>> portal explicitly.\n>>>>>\n>>>>>\n>>>>>\n>>>> After some days's hack and testing, I found more issues to support the\n>>>> following case\n>>>>\n>>>> rs = prepared_stmt.execute(1);\n>>>> while(rs.next())\n>>>> {\n>>>> // do something with the result (mainly DML )\n>>>> conn.commit(); or conn.rollback();\n>>>>\n>>>> // commit / rollback to avoid the long lock holding.\n>>>> }\n>>>>\n>>>> The holdable portal is still be dropped in transaction\n>>>> aborted/rollbacked case since\n>>>> the HoldPortal doesn't happens before that and \"abort/rollabck\" means\n>>>> something\n>>>> wrong so it is risk to hold it again. What I did to fix this issue is\n>>>> HoldPortal just after\n>>>> we define a Holdable portal. However, that's bad for performance.\n>>>> Originally, we just\n>>>> needed to scan the result when needed, now we have to hold all the\n>>>> results and then fetch\n>>>> and the data one by one.\n>>>>\n>>>> The above user case looks reasonable to me IMO, I would say it is kind\n>>>> of \"tech debt\"\n>>>> in postgres. To support this completely, looks we have to decouple the\n>>>> snapshot/locking\n>>>> management with transaction? If so, it looks like a huge change. I\n>>>> wonder if anybody\n>>>> tried to resolve this issue and where do we get to that point?\n>>>>\n>>>> --\n>>>> Best Regards\n>>>> Andy Fan\n>>>>\n>>>\n>>>\n>>> I think if you set the fetch size the driver will use a named cursor and\n>>> this should work\n>>>\n>>>\n>> If the drivers can use the tempfile as an extra store, then things will\n>> be better than the server.\n>>\n>\n> Maybe not much better, just the same as each other. Both need to\n> store all of them first and fetch them from the temp store again.\n>\n>\nYa I thought about this after I answered it. If you have a resultset that\nyou requested in a transaction and then you commit the transaction I think\nit is reasonable to expect that the resultset is no longer valid.\n\n\nDave Cramer\nwww.postgres.rocks\n\n>\n\nOn Wed, 12 Aug 2020 at 08:14, Andy Fan <zhihui.fan1213@gmail.com> wrote:On Wed, Aug 12, 2020 at 8:11 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:On Wed, Aug 12, 2020 at 5:54 PM Dave Cramer <davecramer@postgres.rocks> wrote:On Tue, 11 Aug 2020 at 22:33, Andy Fan <zhihui.fan1213@gmail.com> wrote:On Mon, Jul 27, 2020 at 11:57 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:2. Currently I want to add a new GUC parameter, if set it to true, server willcreate a holdable portal, or else nothing changed.  Then let the user set it to true in the above case and reset it to false afterward.  Is there any issue with this method? 
I forget to say in this case, the user has to drop the holdable portal  explicitly. After some days's hack and testing, I found more issues to support the following casers = prepared_stmt.execute(1);while(rs.next()){    // do something with the result  (mainly DML )     conn.commit();  or  conn.rollback();      // commit / rollback to avoid the long lock holding.}The holdable portal is still be dropped in transaction aborted/rollbacked case since the HoldPortal doesn't happens before that and \"abort/rollabck\" means somethingwrong so it is risk to hold it again.  What I did to fix this issue is HoldPortal just afterwe define a Holdable portal.  However, that's bad for performance.  Originally, we justneeded to scan the result when needed, now we have to hold all the results and then fetchand the data one by one. The above user case looks reasonable to me IMO,  I would say it is kind of \"tech debt\" in postgres.  To support this completely, looks we have to decouple the snapshot/lockingmanagement with transaction? If so, it looks like a huge change. I wonder if anybody tried to resolve this issue and where do we get to that point? -- Best RegardsAndy FanI think if you set the fetch size the driver will use a named cursor and this should work If the drivers can use the tempfile as an extra store, then things will be better than the server. Maybe not much better, just the same as each other.  Both need tostore all of them first and fetch them from the temp store again. Ya I thought about this after I answered it. If you have a resultset that you requested in a transaction and then you commit the transaction I think it is reasonable to expect that the resultset is no longer valid.Dave Cramerwww.postgres.rocks", "msg_date": "Wed, 12 Aug 2020 08:21:05 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Allows Extend Protocol support CURSOR_OPT_HOLD with prepared\n stmt." }, { "msg_contents": "On Wed, Aug 12, 2020 at 8:21 PM Dave Cramer <davecramer@postgres.rocks>\nwrote:\n\n>\n>\n> On Wed, 12 Aug 2020 at 08:14, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>>\n>>\n>> On Wed, Aug 12, 2020 at 8:11 PM Andy Fan <zhihui.fan1213@gmail.com>\n>> wrote:\n>>\n>>>\n>>>\n>>> On Wed, Aug 12, 2020 at 5:54 PM Dave Cramer <davecramer@postgres.rocks>\n>>> wrote:\n>>>\n>>>>\n>>>>\n>>>>\n>>>> On Tue, 11 Aug 2020 at 22:33, Andy Fan <zhihui.fan1213@gmail.com>\n>>>> wrote:\n>>>>\n>>>>>\n>>>>>\n>>>>> On Mon, Jul 27, 2020 at 11:57 AM Andy Fan <zhihui.fan1213@gmail.com>\n>>>>> wrote:\n>>>>>\n>>>>>>\n>>>>>>> 2. Currently I want to add a new GUC parameter, if set it to true,\n>>>>>>> server will\n>>>>>>> create a holdable portal, or else nothing changed. Then let the\n>>>>>>> user set\n>>>>>>> it to true in the above case and reset it to false afterward. 
Is\n>>>>>>> there any issue\n>>>>>>> with this method?\n>>>>>>>\n>>>>>>>\n>>>>>> I forget to say in this case, the user has to drop the holdable\n>>>>>> portal explicitly.\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>> After some days's hack and testing, I found more issues to support the\n>>>>> following case\n>>>>>\n>>>>> rs = prepared_stmt.execute(1);\n>>>>> while(rs.next())\n>>>>> {\n>>>>> // do something with the result (mainly DML )\n>>>>> conn.commit(); or conn.rollback();\n>>>>>\n>>>>> // commit / rollback to avoid the long lock holding.\n>>>>> }\n>>>>>\n>>>>> The holdable portal is still be dropped in transaction\n>>>>> aborted/rollbacked case since\n>>>>> the HoldPortal doesn't happens before that and \"abort/rollabck\" means\n>>>>> something\n>>>>> wrong so it is risk to hold it again. What I did to fix this issue is\n>>>>> HoldPortal just after\n>>>>> we define a Holdable portal. However, that's bad for performance.\n>>>>> Originally, we just\n>>>>> needed to scan the result when needed, now we have to hold all the\n>>>>> results and then fetch\n>>>>> and the data one by one.\n>>>>>\n>>>>> The above user case looks reasonable to me IMO, I would say it is\n>>>>> kind of \"tech debt\"\n>>>>> in postgres. To support this completely, looks we have to decouple\n>>>>> the snapshot/locking\n>>>>> management with transaction? If so, it looks like a huge change. I\n>>>>> wonder if anybody\n>>>>> tried to resolve this issue and where do we get to that point?\n>>>>>\n>>>>> --\n>>>>> Best Regards\n>>>>> Andy Fan\n>>>>>\n>>>>\n>>>>\n>>>> I think if you set the fetch size the driver will use a named cursor\n>>>> and this should work\n>>>>\n>>>>\n>>> If the drivers can use the tempfile as an extra store, then things will\n>>> be better than the server.\n>>>\n>>\n>> Maybe not much better, just the same as each other. Both need to\n>> store all of them first and fetch them from the temp store again.\n>>\n>>\n> Ya I thought about this after I answered it. If you have a resultset that\n> you requested in a transaction and then you commit the transaction I think\n> it is reasonable to expect that the resultset is no longer valid.\n>\n>\nI checked JDBC, the resultset only uses memory to cache the resultset.\nso we can't set an inf+ fetch size with the hope that the client's\nresultset\ncan cache all of them for us.\n\nBasically I will use my above hack.\n\n-- \nBest Regards\nAndy Fan\n\nOn Wed, Aug 12, 2020 at 8:21 PM Dave Cramer <davecramer@postgres.rocks> wrote:On Wed, 12 Aug 2020 at 08:14, Andy Fan <zhihui.fan1213@gmail.com> wrote:On Wed, Aug 12, 2020 at 8:11 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:On Wed, Aug 12, 2020 at 5:54 PM Dave Cramer <davecramer@postgres.rocks> wrote:On Tue, 11 Aug 2020 at 22:33, Andy Fan <zhihui.fan1213@gmail.com> wrote:On Mon, Jul 27, 2020 at 11:57 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:2. Currently I want to add a new GUC parameter, if set it to true, server willcreate a holdable portal, or else nothing changed.  Then let the user set it to true in the above case and reset it to false afterward.  Is there any issue with this method? I forget to say in this case, the user has to drop the holdable portal  explicitly. 
After some days's hack and testing, I found more issues to support the following casers = prepared_stmt.execute(1);while(rs.next()){    // do something with the result  (mainly DML )     conn.commit();  or  conn.rollback();      // commit / rollback to avoid the long lock holding.}The holdable portal is still be dropped in transaction aborted/rollbacked case since the HoldPortal doesn't happens before that and \"abort/rollabck\" means somethingwrong so it is risk to hold it again.  What I did to fix this issue is HoldPortal just afterwe define a Holdable portal.  However, that's bad for performance.  Originally, we justneeded to scan the result when needed, now we have to hold all the results and then fetchand the data one by one. The above user case looks reasonable to me IMO,  I would say it is kind of \"tech debt\" in postgres.  To support this completely, looks we have to decouple the snapshot/lockingmanagement with transaction? If so, it looks like a huge change. I wonder if anybody tried to resolve this issue and where do we get to that point? -- Best RegardsAndy FanI think if you set the fetch size the driver will use a named cursor and this should work If the drivers can use the tempfile as an extra store, then things will be better than the server. Maybe not much better, just the same as each other.  Both need tostore all of them first and fetch them from the temp store again. Ya I thought about this after I answered it. If you have a resultset that you requested in a transaction and then you commit the transaction I think it is reasonable to expect that the resultset is no longer valid.I checked JDBC, the resultset only uses memory to cache the resultset. so we can't set  an inf+ fetch size with the hope that the client's resultsetcan cache all of them for us.   Basically I will use my above hack. -- Best RegardsAndy Fan", "msg_date": "Wed, 12 Aug 2020 21:06:31 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allows Extend Protocol support CURSOR_OPT_HOLD with prepared\n stmt." } ]
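As a concrete illustration of workaround (2) from the first message of the thread above, here is a minimal libpq sketch of a WITH HOLD cursor, which is materialized at COMMIT and therefore survives it. The table foo, its column i, and the connection string are assumptions made for this example, and, as noted in the thread, DECLARE accepts no bind parameters, so the id value would have to be interpolated into the query text.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=postgres");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* The WITH HOLD cursor is materialized at COMMIT and kept open. */
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE c CURSOR WITH HOLD FOR "
                         "SELECT i FROM foo WHERE i > 1"));
    PQclear(PQexec(conn, "COMMIT"));

    for (;;)
    {
        PGresult   *res = PQexec(conn, "FETCH 100 FROM c");
        int         ntup = PQntuples(res);

        /*
         * ... process the rows; other transactions may commit or roll back
         * here without this cursor being dropped ...
         */

        PQclear(res);
        if (ntup == 0)
            break;
    }

    PQclear(PQexec(conn, "CLOSE c"));
    PQfinish(conn);
    return 0;
}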
[ { "msg_contents": "Hi hackers,\n\nI've attached a patch to display individual query in the \npg_stat_activity query field when multiple SQL statements are currently \ndisplayed.\n\n_Motivation:_\n\nWhen multiple statements are displayed then we don’t know which one is \ncurrently running.\n\nFor example:\n\npsql -c \"select pg_sleep(10);select pg_sleep(20);\" is currently \ndisplayed as:\n\npostgres=# select backend_type,query from pg_stat_activity;\n backend_type | query\n------------------------------+--------------------------------------------------\n client backend | select pg_sleep(10);select pg_sleep(20);\n\nShowing which statement is currently being executed would be more helpful.\n\n_Technical context and proposal:_\n\nThere is 2 points in this patch:\n\n * modifying the current behavior in “exec_simple_query”\n * modifying the current behavior in “ExecInitParallelPlan”\n\n\nSo that we could see for example:\n\n backend_type | query\n------------------------------+--------------------------------------------------\n client backend | select pg_sleep(10);\n\nand then\n\n backend_type | query\n------------------------------+--------------------------------------------------\n client backend | select pg_sleep(20);\n\ninstead of the multiple sql statement described in the “motivation” section.\n\nAnother example: parallel worker being triggered while executing a function:\n\ncreate or replace function test()\nreturns void as $$select count(/) as \"first\" from foo;select pg_sleep(10);select count(/) as \"second\" from foo;select pg_sleep(11);select pg_sleep(10)\n$$\nlanguage sql;\n\nWe currently see:\n\n backend_type | query\n------------------------------+--------------------------------------------------------------------------------------------------------------------------------------\n client backend | select test();\n parallel worker | select count(*) as \"first\" from foo;select pg_sleep(10);select count(*) as \"second\" from foo;select pg_sleep(11);select pg_sleep(10)+\n |\n parallel worker | select count(*) as \"first\" from foo;select pg_sleep(10);select count(*) as \"second\" from foo;select pg_sleep(11);select pg_sleep(10)+\n |\n\nwhile the attached patch would provide:\n\n \n backend_type | query [217/1938]\n------------------------------+--------------------------------------------------\n client backend | select test();\n parallel worker | select count(*) as \"first\" from foo;\n parallel worker | select count(*) as \"first\" from foo;\n\nand then:\n\n backend_type | query\n------------------------------+--------------------------------------------------\n client backend | select test();\n parallel worker | select count(*) as \"second\" from foo;\n parallel worker | select count(*) as \"second\" from foo;\n\nI will add this patch to the next commitfest. 
I look forward to your \nfeedback about the idea and/or implementation.\n\nRegards,\nBertrand", "msg_date": "Mon, 27 Jul 2020 09:36:59 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Display individual query in pg_stat_activity" }, { "msg_contents": "Hi\n\nOn Mon, Jul 27, 2020 at 3:40 PM Drouvot, Bertrand <bdrouvot@amazon.com>\nwrote:\n\n> Hi hackers,\n>\n> I've attached a patch to display individual query in the pg_stat_activity\n> query field when multiple SQL statements are currently displayed.\n>\n> *Motivation:*\n>\n> When multiple statements are displayed then we don’t know which one is\n> currently running.\n>\n\nI'm not sure I'd want that to happen, as it could make it much harder to\ntrack the activity back to a query in the application layer or server logs.\n\nPerhaps a separate field could be added for the current statement, or a\nvalue to indicate what the current statement number in the query is?\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com\n\nHiOn Mon, Jul 27, 2020 at 3:40 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n\n Hi hackers,\n\n I've attached a patch to display individual query in the\n pg_stat_activity query field when multiple SQL statements are\n currently displayed. \n\nMotivation:\n\n When multiple statements are displayed then we don’t know which one\n is currently running.I'm not sure I'd want that to happen, as it could make it much harder to track the activity back to a query in the application layer or server logs. Perhaps a separate field could be added for the current statement, or a value to indicate what the current statement number in the query is?-- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEDB: http://www.enterprisedb.com", "msg_date": "Mon, 27 Jul 2020 15:57:10 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "On 7/27/20 07:57, Dave Page wrote:\n> I'm not sure I'd want that to happen, as it could make it much harder\n> to track the activity back to a query in the application layer or\n> server logs. \n>\n> Perhaps a separate field could be added for the current statement, or\n> a value to indicate what the current statement number in the query is?\n\nMight be helpful to give some specifics about circumstances where\nstrings can appear in pg_stat_activity.query with multiple statements.\n\n1) First of all, IIUC multiple statements are only supported in the\nfirst place by the simple protocol and PLs.  Anyone using parameterized\nstatements (bind variables) should be unaffected by this.\n\n2) My read of the official pg JDBC driver is that even for batch\noperations it currently iterates and sends each statement individually.\nI don't think the JDBC driver has the capability to send multiple\nstatements, so java apps using this driver should be unaffected.\n\n3) psql -c will always send the string as a single \"simple protocol\"\nrequest.  Scripts will be impacted.\n\n4) PLs also seem to have a code path that can put multiple statements in\npg_stat_activity when parallel slaves are launched.  PL code will be\nimpacted.\n\n5) pgAdmin uses the simple protocol and when a user executes a block of\nstatements, pgAdmin seems to send the whole block as a single \"simple\nprotocol\" request.  
Tools like pgAdmin will be impacted.\n\nAt the application layer, it doesn't seem problematic to me if\nPostgreSQL reports each query one at a time.  IMO most people will find\nthis to be a more useful behavior and they will still find their queries\nin their app code or app logs.\n\nHowever at the PostgreSQL logging layer this is a good call-out.  I just\ndid a quick test on 14devel to double-check my assumption and it does\nseem that PostgreSQL logs the entire combined query for psql -c.  I\nthink it would be better for PostgreSQL to report queries individually\nin the log too - for example pgBadger summaries will be even more useful\nif they report information for each individual query rather than a\nsingle big block of multiple queries.\n\nGiven how small this patch is, it seems worthwhile to at least\ninvestigate whether the logging component could be addressed just as easily.\n\n-Jeremy\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n\n\n\n\n\nOn 7/27/20 07:57, Dave Page wrote:\n\n\n\n\n\n\n\nI'm not sure I'd want that to happen, as it could make\n it much harder to track the activity back to a query in\n the application layer or server logs. \n\n\nPerhaps a separate field could be added for the current\n statement, or a value to indicate what the current\n statement number in the query is?\n\n\n\n\n\n Might be helpful to give some specifics about circumstances where\n strings can appear in pg_stat_activity.query with multiple\n statements. \n\n 1) First of all, IIUC multiple statements are only supported in the\n first place by the simple protocol and PLs.  Anyone using\n parameterized statements (bind variables) should be unaffected by\n this.\n\n 2) My read of the official pg JDBC driver is that even for batch\n operations it currently iterates and sends each statement\n individually. I don't think the JDBC driver has the capability to\n send multiple statements, so java apps using this driver should be\n unaffected.\n\n 3) psql -c will always send the string as a single \"simple protocol\"\n request.  Scripts will be impacted.\n\n 4) PLs also seem to have a code path that can put multiple\n statements in pg_stat_activity when parallel slaves are launched. \n PL code will be impacted.\n\n 5) pgAdmin uses the simple protocol and when a user executes a block\n of statements, pgAdmin seems to send the whole block as a single\n \"simple protocol\" request.  Tools like pgAdmin will be impacted.\n\n At the application layer, it doesn't seem problematic to me if\n PostgreSQL reports each query one at a time.  IMO most people will\n find this to be a more useful behavior and they will still find\n their queries in their app code or app logs.\n\n However at the PostgreSQL logging layer this is a good call-out.  I\n just did a quick test on 14devel to double-check my assumption and\n it does seem that PostgreSQL logs the entire combined query for psql\n -c.  
I think it would be better for PostgreSQL to report queries\n individually in the log too - for example pgBadger summaries will be\n even more useful if they report information for each individual\n query rather than a single big block of multiple queries.\n\n Given how small this patch is, it seems worthwhile to at least\n investigate whether the logging component could be addressed just as\n easily.\n\n -Jeremy\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services", "msg_date": "Mon, 27 Jul 2020 08:28:07 -0700", "msg_from": "Jeremy Schneider <schnjere@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "On Mon, Jul 27, 2020 at 4:28 PM Jeremy Schneider <schnjere@amazon.com>\nwrote:\n\n> On 7/27/20 07:57, Dave Page wrote:\n>\n> I'm not sure I'd want that to happen, as it could make it much harder to\n> track the activity back to a query in the application layer or server logs.\n>\n> Perhaps a separate field could be added for the current statement, or a\n> value to indicate what the current statement number in the query is?\n>\n>\n> Might be helpful to give some specifics about circumstances where strings\n> can appear in pg_stat_activity.query with multiple statements.\n>\n> 1) First of all, IIUC multiple statements are only supported in the first\n> place by the simple protocol and PLs. Anyone using parameterized\n> statements (bind variables) should be unaffected by this.\n>\n> 2) My read of the official pg JDBC driver is that even for batch\n> operations it currently iterates and sends each statement individually. I\n> don't think the JDBC driver has the capability to send multiple statements,\n> so java apps using this driver should be unaffected.\n>\n\nThat is just one of a number of different popular drivers of course.\n\n\n>\n> 3) psql -c will always send the string as a single \"simple protocol\"\n> request. Scripts will be impacted.\n>\n> 4) PLs also seem to have a code path that can put multiple statements in\n> pg_stat_activity when parallel slaves are launched. PL code will be\n> impacted.\n>\n> 5) pgAdmin uses the simple protocol and when a user executes a block of\n> statements, pgAdmin seems to send the whole block as a single \"simple\n> protocol\" request. Tools like pgAdmin will be impacted.\n>\n\nIt does. It also prepends some queries with comments, specifically to allow\nusers to filter them out when they're analysing logs (a feature requested\nby users, not just something we thought was a good idea). I'm assuming that\nthis patch would also strip those?\n\n\n>\n> At the application layer, it doesn't seem problematic to me if PostgreSQL\n> reports each query one at a time. IMO most people will find this to be a\n> more useful behavior and they will still find their queries in their app\n> code or app logs.\n>\n\nI think there are arguments to be made for both approaches.\n\n\n>\n> However at the PostgreSQL logging layer this is a good call-out. I just\n> did a quick test on 14devel to double-check my assumption and it does seem\n> that PostgreSQL logs the entire combined query for psql -c. 
I think it\n> would be better for PostgreSQL to report queries individually in the log\n> too - for example pgBadger summaries will be even more useful if they\n> report information for each individual query rather than a single big block\n> of multiple queries.\n>\n> Given how small this patch is, it seems worthwhile to at least investigate\n> whether the logging component could be addressed just as easily.\n>\n> -Jeremy\n>\n> --\n> Jeremy Schneider\n> Database Engineer\n> Amazon Web Services\n>\n>\n>\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com\n\nOn Mon, Jul 27, 2020 at 4:28 PM Jeremy Schneider <schnjere@amazon.com> wrote:\n\nOn 7/27/20 07:57, Dave Page wrote:\n\n\n\n\n\n\nI'm not sure I'd want that to happen, as it could make\n it much harder to track the activity back to a query in\n the application layer or server logs. \n\n\nPerhaps a separate field could be added for the current\n statement, or a value to indicate what the current\n statement number in the query is?\n\n\n\n\n\n Might be helpful to give some specifics about circumstances where\n strings can appear in pg_stat_activity.query with multiple\n statements. \n\n 1) First of all, IIUC multiple statements are only supported in the\n first place by the simple protocol and PLs.  Anyone using\n parameterized statements (bind variables) should be unaffected by\n this.\n\n 2) My read of the official pg JDBC driver is that even for batch\n operations it currently iterates and sends each statement\n individually. I don't think the JDBC driver has the capability to\n send multiple statements, so java apps using this driver should be\n unaffected.That is just one of a number of different popular drivers of course. \n\n 3) psql -c will always send the string as a single \"simple protocol\"\n request.  Scripts will be impacted.\n\n 4) PLs also seem to have a code path that can put multiple\n statements in pg_stat_activity when parallel slaves are launched. \n PL code will be impacted.\n\n 5) pgAdmin uses the simple protocol and when a user executes a block\n of statements, pgAdmin seems to send the whole block as a single\n \"simple protocol\" request.  Tools like pgAdmin will be impacted.It does. It also prepends some queries with comments, specifically to allow users to filter them out when they're analysing logs (a feature requested by users, not just something we thought was a good idea). I'm assuming that this patch would also strip those? \n\n At the application layer, it doesn't seem problematic to me if\n PostgreSQL reports each query one at a time.  IMO most people will\n find this to be a more useful behavior and they will still find\n their queries in their app code or app logs.I think there are arguments to be made for both approaches. \n\n However at the PostgreSQL logging layer this is a good call-out.  I\n just did a quick test on 14devel to double-check my assumption and\n it does seem that PostgreSQL logs the entire combined query for psql\n -c.  
I think it would be better for PostgreSQL to report queries\n individually in the log too - for example pgBadger summaries will be\n even more useful if they report information for each individual\n query rather than a single big block of multiple queries.\n\n Given how small this patch is, it seems worthwhile to at least\n investigate whether the logging component could be addressed just as\n easily.\n\n -Jeremy\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n\n-- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEDB: http://www.enterprisedb.com", "msg_date": "Mon, 27 Jul 2020 17:00:17 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "On 7/27/20 9:57 AM, Dave Page wrote:\n> On Mon, Jul 27, 2020 at 3:40 PM Drouvot, Bertrand <bdrouvot@amazon.com \n> <mailto:bdrouvot@amazon.com>> wrote:\n<snip>\n>\n> When multiple statements are displayed then we don’t know which\n> one is currently running.\n>\n>\n> I'm not sure I'd want that to happen, as it could make it much harder \n> to track the activity back to a query in the application layer or \n> server logs.\n>\n> Perhaps a separate field could be added for the current statement, or \n> a value to indicate what the current statement number in the query is?\nPerhaps turn query into text[]. That would make it easy to concatenate \nback together if desired.\n> -- \n> Dave Page\n> Blog: http://pgsnake.blogspot.com\n> Twitter: @pgsnake\n>\n> EDB: http://www.enterprisedb.com\n>\n\n\n\n\n\n\nOn 7/27/20 9:57 AM, Dave Page wrote:\n\n\n\n\nOn Mon, Jul 27, 2020 at 3:40 PM Drouvot, Bertrand\n <bdrouvot@amazon.com> wrote:\n\n\n\n <snip>\n\n\n\n\nWhen multiple\n statements are displayed then we don’t know which one is\n currently running.\n\n\n\nI'm not sure I'd want that to happen, as it could make\n it much harder to track the activity back to a query in\n the application layer or server logs. \n\n\nPerhaps a separate field could be added for the current\n statement, or a value to indicate what the current\n statement number in the query is?\n\n\n\n\n Perhaps turn query into text[]. That would make it easy to\n concatenate back together if desired.\n\n\n\n\n\n-- \n\n\nDave Page\n Blog: http://pgsnake.blogspot.com\n Twitter: @pgsnake\n\n EDB: http://www.enterprisedb.com", "msg_date": "Wed, 29 Jul 2020 14:24:45 -0500", "msg_from": "Jim Nasby <nasbyj@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 7/27/20 4:57 PM, Dave Page wrote:\n>\n> *CAUTION*: This email originated from outside of the organization. 
Do \n> not click links or open attachments unless you can confirm the sender \n> and know the content is safe.\n>\n>\n> Hi\n>\n> On Mon, Jul 27, 2020 at 3:40 PM Drouvot, Bertrand <bdrouvot@amazon.com \n> <mailto:bdrouvot@amazon.com>> wrote:\n>\n> Hi hackers,\n>\n> I've attached a patch to display individual query in the\n> pg_stat_activity query field when multiple SQL statements are\n> currently displayed.\n>\n> _Motivation:_\n>\n> When multiple statements are displayed then we don’t know which\n> one is currently running.\n>\n>\n> I'm not sure I'd want that to happen, as it could make it much harder \n> to track the activity back to a query in the application layer or \n> server logs.\n>\n> Perhaps a separate field could be added for the current statement, or \n> a value to indicate what the current statement number in the query is?\n\nThanks for he feedback.\n\nI like the idea of adding extra information without changing the current \nbehavior.\n\nA value to indicate what the current statement number is, would need \nparsing the query field by the user to get the individual statement.\n\nI think the separate field makes sense (though it come with an extra \nmemory price) as it will not change the existing behavior and would just \nprovide extra information (without any extra parsing needed for the user).\n\nI attached a mock up v2 patch that adds this new field.\n\nOutcome Examples:\n\n   backend_type | query                                            | \nindividual_query\n----------------+---------------------------------------------------------------------------------------------+----------------------\n  client backend | select backend_type, query, individual_query from \npg_stat_activity where length(query) > 0; |\n  client backend | select pg_sleep(10);select pg_sleep(20); | select \npg_sleep(20);\n\nor\n\n   backend_type | query |           individual_query\n-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------\n  client backend  | select backend_type, query, individual_query from \npg_stat_activity where length(query) > 0; |\n  client backend  | select test(); |\n  parallel worker | select count(*) as \"first\" from foo;select \npg_sleep(10);create index bdtidx on foo(generate_series);select count(*) \nas \"second\" from foo;select pg_sleep(11);select count(*) as \"third\" from \nfoo | select count(*) as \"second\" from foo;\n  parallel worker | select count(*) as \"first\" from foo;select \npg_sleep(10);create index bdtidx on foo(generate_series);select count(*) \nas \"second\" from foo;select pg_sleep(11);select count(*) as \"third\" from \nfoo | select count(*) as \"second\" from foo;\n\nAs you can see the individual_query field is populated only when the \nquery field is a multiple statements one.\n\nRegards,\n\nBertrand\n\n>\n> -- \n> Dave Page\n> Blog: http://pgsnake.blogspot.com\n> Twitter: @pgsnake\n>\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Thu, 6 Aug 2020 12:10:47 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "On Thu, Aug 6, 2020 at 12:17 PM Drouvot, Bertrand <bdrouvot@amazon.com>\nwrote:\n\n> Hi,\n> On 7/27/20 4:57 PM, Dave Page wrote:\n>\n> *CAUTION*: This email originated from outside of the organization. 
Do not\n> click links or open attachments unless you can confirm the sender and know\n> the content is safe.\n>\n> Hi\n>\n> On Mon, Jul 27, 2020 at 3:40 PM Drouvot, Bertrand <bdrouvot@amazon.com>\n> wrote:\n>\n>> Hi hackers,\n>>\n>> I've attached a patch to display individual query in the pg_stat_activity\n>> query field when multiple SQL statements are currently displayed.\n>>\n>> *Motivation:*\n>>\n>> When multiple statements are displayed then we don’t know which one is\n>> currently running.\n>>\n>\n> I'm not sure I'd want that to happen, as it could make it much harder to\n> track the activity back to a query in the application layer or server logs.\n>\n> Perhaps a separate field could be added for the current statement, or a\n> value to indicate what the current statement number in the query is?\n>\n> Thanks for he feedback.\n>\n> I like the idea of adding extra information without changing the current\n> behavior.\n>\n> A value to indicate what the current statement number is, would need\n> parsing the query field by the user to get the individual statement.\n>\n> I think the separate field makes sense (though it come with an extra\n> memory price) as it will not change the existing behavior and would just\n> provide extra information (without any extra parsing needed for the user).\n>\n>\n>\nIdle though without having considered it too much -- you might reduce the\nmemory overhead by just storing a start/end offset into the combined query\nstring instead of a copy of the query. That way the cost would only be paid\nwhen doing the reading of pg_stat_activity (by extracting the piece of the\nstring), which I'd argue is done orders of magnitude fewer times than the\nquery changes at least on busy systems. Care would have to be taken for the\ncase of the current executing query actually being entirely past the end of\nthe query string buffer of course, but I don't think that's too hard to\ndefine a useful behaviour for. (The user interface would stay the same,\nshowing the actual string and thus not requiring the user to do any parsing)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Aug 6, 2020 at 12:17 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n\nHi,\n\nOn 7/27/20 4:57 PM, Dave Page wrote:\n\n\n\n\n\n\n\nCAUTION: This email originated\n from outside of the organization. Do not click links\n or open attachments unless you can confirm the\n sender and know the content is safe.\n\n\n\n\n\n\n\n\nHi\n\n\nOn Mon, Jul 27, 2020 at\n 3:40 PM Drouvot, Bertrand <bdrouvot@amazon.com>\n wrote:\n\n\nHi hackers,\n\n I've attached a patch to display individual query in the\n pg_stat_activity query field when multiple SQL\n statements are currently displayed.\n \n\nMotivation:\n\n When multiple statements are displayed then we don’t\n know which one is currently running.\n\n\n\n\nI'm not sure I'd want that to happen, as it could make\n it much harder to track the activity back to a query in\n the application layer or server logs. 
\n\n\nPerhaps a separate field could be added for the current\n statement, or a value to indicate what the current\n statement number in the query is?\n\n\n\n\nThanks for he feedback.\nI like the idea of adding extra information without changing the\n current behavior.\n\nA value to indicate what the current statement number is, would\n need parsing the query field by the user to get the individual\n statement.\nI think the separate field makes sense (though it come with an\n extra memory price) as it will not change the existing behavior\n and would just provide extra information (without any extra\n parsing needed for the user).\n\nIdle though without having considered it too much -- you might reduce the memory overhead by just storing a start/end offset into the combined query string instead of a copy of the query. That way the cost would only be paid when doing the reading of pg_stat_activity (by extracting the piece of the string), which I'd argue is done orders of magnitude fewer times than the query changes at least on busy systems. Care would have to be taken for the case of the current executing query actually being entirely past the end of the query string buffer of course, but I don't think that's too hard to define a useful behaviour for. (The user interface would stay the same, showing the actual string and thus not requiring the user to do any parsing) --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Thu, 6 Aug 2020 12:24:46 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 8/6/20 12:24 PM, Magnus Hagander wrote:\n>\n> On Thu, Aug 6, 2020 at 12:17 PM Drouvot, Bertrand <bdrouvot@amazon.com \n> <mailto:bdrouvot@amazon.com>> wrote:\n>\n> Hi,\n>\n> On 7/27/20 4:57 PM, Dave Page wrote:\n>>\n>> Hi\n>>\n>> On Mon, Jul 27, 2020 at 3:40 PM Drouvot, Bertrand\n>> <bdrouvot@amazon.com <mailto:bdrouvot@amazon.com>> wrote:\n>>\n>> Hi hackers,\n>>\n>> I've attached a patch to display individual query in the\n>> pg_stat_activity query field when multiple SQL statements are\n>> currently displayed.\n>>\n>> _Motivation:_\n>>\n>> When multiple statements are displayed then we don’t know\n>> which one is currently running.\n>>\n>>\n>> I'm not sure I'd want that to happen, as it could make it much\n>> harder to track the activity back to a query in the application\n>> layer or server logs.\n>>\n>> Perhaps a separate field could be added for the current\n>> statement, or a value to indicate what the current statement\n>> number in the query is?\n>\n> Thanks for he feedback.\n>\n> I like the idea of adding extra information without changing the\n> current behavior.\n>\n> A value to indicate what the current statement number is, would\n> need parsing the query field by the user to get the individual\n> statement.\n>\n> I think the separate field makes sense (though it come with an\n> extra memory price) as it will not change the existing behavior\n> and would just provide extra information (without any extra\n> parsing needed for the user).\n>\n>\n>\n> Idle though without having considered it too much -- you might reduce \n> the memory overhead by just storing a start/end offset into the \n> combined query string instead of a copy of the query.\n\nGood point, thanks for the feedback.\n\nThe new attached patch is making use of stmt_len and stmt_location \n(instead of a copy of the query).\n\n> That way the cost would 
only be paid when doing the reading of \n> pg_stat_activity (by extracting the piece of the string), which I'd \n> argue is done orders of magnitude fewer times than the query changes \n> at least on busy systems.\n\nThe individual query extraction (making use of stmt_len and \nstmt_location) has been moved to pg_stat_get_activity() in the new \nattached patch (as opposed to pgstat_report_activity() in the previous \npatch version).\n\n> Care would have to be taken for the case of the current executing \n> query actually being entirely past the end of the query string buffer \n> of course, but I don't think that's too hard to define a useful \n> behaviour for. (The user interface would stay the same, showing the \n> actual string and thus not requiring the user to do any parsing)\n\nAs a proposal the new attached patch does not display the individual \nquery if length + location is greater than \npgstat_track_activity_query_size (anyway it could not, as the query \nfield that might contain multiple statements is already <= \npgstat_track_activity_query_size in pg_stat_get_activity()).\n\nBertrand\n\n> -- \n>  Magnus Hagander\n>  Me: https://www.hagander.net/ <http://www.hagander.net/>\n>  Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Mon, 17 Aug 2020 07:49:12 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "Hi,\n\n> I've attached a patch to display individual query in the\n> pg_stat_activity query field when multiple SQL statements are\n> currently displayed.\n> \n> Motivation:\n> \n> When multiple statements are displayed then we don’t know which\n> one is currently running.\n> \n> I'm not sure I'd want that to happen, as it could make it much\n> harder to track the activity back to a query in the application\n> layer or server logs.\n> \n> Perhaps a separate field could be added for the current statement,\n> or a value to indicate what the current statement number in the\n> query is?\n\nAs a user, I think this feature is useful to users.\n\nIt would be nice that pg_stat_activity also show currently running query\nin a user defined function(PL/pgSQL) .\n\nI understood that this patch is not for user defined functions.\nPlease let me know if it's better to make another thread.\n\nIn general, PL/pgSQL functions have multiple queries,\nand users want to know the progress of query execution, doesn't it?\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 18 Aug 2020 15:54:02 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "Hi\n\nút 18. 8. 
2020 v 8:54 odesílatel Masahiro Ikeda <ikedamsh@oss.nttdata.com>\nnapsal:\n\n> Hi,\n>\n> > I've attached a patch to display individual query in the\n> > pg_stat_activity query field when multiple SQL statements are\n> > currently displayed.\n> >\n> > Motivation:\n> >\n> > When multiple statements are displayed then we don’t know which\n> > one is currently running.\n> >\n> > I'm not sure I'd want that to happen, as it could make it much\n> > harder to track the activity back to a query in the application\n> > layer or server logs.\n> >\n> > Perhaps a separate field could be added for the current statement,\n> > or a value to indicate what the current statement number in the\n> > query is?\n>\n> As a user, I think this feature is useful to users.\n>\n> It would be nice that pg_stat_activity also show currently running query\n> in a user defined function(PL/pgSQL) .\n>\n> I understood that this patch is not for user defined functions.\n> Please let me know if it's better to make another thread.\n>\n> In general, PL/pgSQL functions have multiple queries,\n> and users want to know the progress of query execution, doesn't it?\n>\n\nI am afraid of the significant performance impact of this feature. In this\ncase you have to copy all nested queries to the stat collector process.\nVery common usage of PL is a glue of very fast queries. Sure, it is used\nlike glue for very slow queries too.\n\nJust I thinking about two features:\n\n1. extra interface for auto_explain, that allows you to get a stack of\nstatements assigned to some pid (probably these informations should be\nstored inside shared memory and collected before any query execution).\nSometimes some slow function is slow due repeated execution of relatively\nfast queries. In this case, the deeper nested level is not too interesting.\nYou need to see a stack of calls and you are searching the first slow level\nin the stack.\n\n2. can be nice to have a status column in pg_stat_activity, and status GUC\nfor sending a custom information from deep levels to the user. Now, users\nuse application_name, but some special variables can be better for this\npurpose. This value of status can be refreshed periodically and can\nsubstitute some tags. So developer can set\n\nBEGIN\n -- before slow long query\n SET status TO 'slow query calculation xxy %d';\n ...\n\nIt is a alternative to RAISE NOTICE, but with different format - with\nformat that is special for reading from pg_stat_activity\n\nFor long (slow) queries usually you need to see the sum of all times of all\nlevels from the call stack to get valuable information.\n\nRegards\n\nPavel\n\np.s. pg_stat_activity is maybe too wide table already, and probably is not\ngood to enhance this table too much\n\n\n\n> --\n> Masahiro Ikeda\n> NTT DATA CORPORATION\n>\n>\n>\n\nHiút 18. 8. 
2020 v 8:54 odesílatel Masahiro Ikeda <ikedamsh@oss.nttdata.com> napsal:Hi,\n\n> I've attached a patch to display individual query in the\n> pg_stat_activity query field when multiple SQL statements are\n> currently displayed.\n> \n> Motivation:\n> \n> When multiple statements are displayed then we don’t know which\n> one is currently running.\n> \n> I'm not sure I'd want that to happen, as it could make it much\n> harder to track the activity back to a query in the application\n> layer or server logs.\n> \n> Perhaps a separate field could be added for the current statement,\n> or a value to indicate what the current statement number in the\n> query is?\n\nAs a user, I think this feature is useful to users.\n\nIt would be nice that pg_stat_activity also show currently running query\nin a user defined function(PL/pgSQL) .\n\nI understood that this patch is not for user defined functions.\nPlease let me know if it's better to make another thread.\n\nIn general, PL/pgSQL functions have multiple queries,\nand users want to know the progress of query execution, doesn't it?I am afraid of the significant performance impact of this feature. In this case you have to copy all nested queries to the stat collector process. Very common usage of PL is a glue of very fast queries. Sure, it is used like glue for very slow queries too.Just I thinking about two features:1. extra interface for auto_explain, that allows you to get a stack of statements assigned to some pid (probably these informations should be stored inside shared memory and collected before any query execution). Sometimes some slow function is slow due repeated execution of relatively fast queries. In this case, the deeper nested level is not too interesting. You need to see a stack of calls and you are searching the first slow level in the stack. 2. can be nice to have a status column in pg_stat_activity, and status GUC for sending a custom information from deep levels to the user. Now, users use application_name, but some special variables can be better for this purpose.  This value of status can be refreshed periodically and can substitute some tags. So developer can setBEGIN  -- before slow long query  SET status TO 'slow query calculation xxy %d'; ...It is a alternative to RAISE NOTICE, but with different format - with format that is special for reading from pg_stat_activityFor long (slow) queries usually you need to see the sum of all times of all levels from the call stack to get valuable information. RegardsPavelp.s. pg_stat_activity is maybe too wide table already, and probably is not good to enhance this table too much\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Tue, 18 Aug 2020 09:35:05 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 8/18/20 9:35 AM, Pavel Stehule wrote:\n>\n> Hi\n>\n> út 18. 8. 
2020 v 8:54 odesílatel Masahiro Ikeda \n> <ikedamsh@oss.nttdata.com <mailto:ikedamsh@oss.nttdata.com>> napsal:\n>\n> Hi,\n>\n> > I've attached a patch to display individual query in the\n> > pg_stat_activity query field when multiple SQL statements are\n> > currently displayed.\n> >\n> > Motivation:\n> >\n> > When multiple statements are displayed then we don’t know which\n> > one is currently running.\n> >\n> > I'm not sure I'd want that to happen, as it could make it much\n> > harder to track the activity back to a query in the application\n> > layer or server logs.\n> >\n> > Perhaps a separate field could be added for the current statement,\n> > or a value to indicate what the current statement number in the\n> > query is?\n>\n> As a user, I think this feature is useful to users.\n>\n> It would be nice that pg_stat_activity also show currently running\n> query\n> in a user defined function(PL/pgSQL) .\n>\n> I understood that this patch is not for user defined functions.\n> Please let me know if it's better to make another thread.\n>\nYeah I think it would be nice to have.\n\nI also think it would be better to create a dedicated thread (specially \nlooking at Pavel's comment below)\n\n>\n> In general, PL/pgSQL functions have multiple queries,\n> and users want to know the progress of query execution, doesn't it?\n>\n>\n> I am afraid of the significant performance impact of this feature. In \n> this case you have to copy all nested queries to the stat collector \n> process. Very common usage of PL is a glue of very fast queries. Sure, \n> it is used like glue for very slow queries too.\n>\n> Just I thinking about two features:\n>\n> 1. extra interface for auto_explain, that allows you to get a stack of \n> statements assigned to some pid (probably these informations should be \n> stored inside shared memory and collected before any query execution). \n> Sometimes some slow function is slow due repeated execution of \n> relatively fast queries. In this case, the deeper nested level is not \n> too interesting. You need to see a stack of calls and you are \n> searching the first slow level in the stack.\n>\n> 2. can be nice to have a status column in pg_stat_activity, and status \n> GUC for sending a custom information from deep levels to the user. \n> Now, users use application_name, but some special variables can be \n> better for this purpose.  This value of status can be refreshed \n> periodically and can substitute some tags. So developer can set\n>\n> BEGIN\n>   -- before slow long query\n>   SET status TO 'slow query calculation xxy %d';\n>  ...\n>\n> It is a alternative to RAISE NOTICE, but with different format - with \n> format that is special for reading from pg_stat_activity\n>\n> For long (slow) queries usually you need to see the sum of all times \n> of all levels from the call stack to get valuable information.\n>\n> Regards\n>\n> Pavel\n>\n> p.s. pg_stat_activity is maybe too wide table already, and probably is \n> not good to enhance this table too much\n>\n>\nThanks\n\nBertrand\n\n\n>\n> -- \n> Masahiro Ikeda\n> NTT DATA CORPORATION\n>\n>\n\n\n\n\n\n\nHi,\n\nOn 8/18/20 9:35 AM, Pavel Stehule\n wrote:\n\n\n\n\n\n\n\nHi\n\n\n\nút 18. 8. 
2020 v 8:54\n odesílatel Masahiro Ikeda <ikedamsh@oss.nttdata.com>\n napsal:\n\n\n Hi,\n\n > I've attached a patch to display individual query in\n the\n > pg_stat_activity query field when multiple SQL\n statements are\n > currently displayed.\n > \n > Motivation:\n > \n > When multiple statements are displayed then we don’t\n know which\n > one is currently running.\n > \n > I'm not sure I'd want that to happen, as it could\n make it much\n > harder to track the activity back to a query in the\n application\n > layer or server logs.\n > \n > Perhaps a separate field could be added for the\n current statement,\n > or a value to indicate what the current statement\n number in the\n > query is?\n\n As a user, I think this feature is useful to users.\n\n It would be nice that pg_stat_activity also show currently\n running query\n in a user defined function(PL/pgSQL) .\n\n I understood that this patch is not for user defined\n functions.\n Please let me know if it's better to make another thread.\n\n\n\n\n\nYeah I think it would be nice to have.\nI also think it would be better to create a dedicated thread\n (specially looking at Pavel's comment below)\n\n\n\n\n\n\n In general, PL/pgSQL functions have multiple queries,\n and users want to know the progress of query execution,\n doesn't it?\n\n\n\nI am afraid of the significant performance impact of\n this feature. In this case you have to copy all nested\n queries to the stat collector process. Very common usage\n of PL is a glue of very fast queries. Sure, it is used\n like glue for very slow queries too.\n\n\n\nJust I thinking about two features:\n\n\n1. extra interface for auto_explain, that allows you to\n get a stack of statements assigned to some pid (probably\n these informations should be stored inside shared memory\n and collected before any query execution). Sometimes some\n slow function is slow due repeated execution of relatively\n fast queries. In this case, the deeper nested level is not\n too interesting. You need to see a stack of calls and you\n are searching the first slow level in the stack.\n \n\n\n\n2. can be nice to have a status column in\n pg_stat_activity, and status GUC for sending a custom\n information from deep levels to the user. Now, users use\n application_name, but some special variables can be better\n for this purpose.  This value of status can be refreshed\n periodically and can substitute some tags. So developer\n can set\n\n\nBEGIN\n  -- before slow long query\n  SET status TO 'slow query calculation xxy %d';\n ...\n\n\nIt is a alternative to RAISE NOTICE, but with different\n format - with format that is special for reading from\n pg_stat_activity\n\n\nFor long (slow) queries usually you need to see the sum\n of all times of all levels from the call stack to get\n valuable information.\n \n\n\n\nRegards\n\n\nPavel\n\n\np.s. pg_stat_activity is maybe too wide table already,\n and probably is not good to enhance this table too much\n\n\n\n\n\n\n\n\n\n\n\nThanks\n\nBertrand\n\n\n\n\n\n\n\n\n\n\n\n -- \n Masahiro Ikeda\n NTT DATA CORPORATION", "msg_date": "Wed, 19 Aug 2020 07:48:24 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "On 2020-08-19 14:48, Drouvot, Bertrand wrote:\n> Hi,\n> On 8/18/20 9:35 AM, Pavel Stehule wrote:\n> \n>> Hi\n>> \n>> út 18. 8. 
2020 v 8:54 odesílatel Masahiro Ikeda\n>> <ikedamsh@oss.nttdata.com> napsal:\n>> \n>>> Hi,\n>>> \n>>>> I've attached a patch to display individual query in the\n>>>> pg_stat_activity query field when multiple SQL statements are\n>>>> currently displayed.\n>>>> \n>>>> Motivation:\n>>>> \n>>>> When multiple statements are displayed then we don’t know\n>>> which\n>>>> one is currently running.\n>>>> \n>>>> I'm not sure I'd want that to happen, as it could make it much\n>>>> harder to track the activity back to a query in the application\n>>>> layer or server logs.\n>>>> \n>>>> Perhaps a separate field could be added for the current\n>>> statement,\n>>>> or a value to indicate what the current statement number in the\n>>>> query is?\n>>> \n>>> As a user, I think this feature is useful to users.\n>>> \n>>> It would be nice that pg_stat_activity also show currently running\n>>> query\n>>> in a user defined function(PL/pgSQL) .\n>>> \n>>> I understood that this patch is not for user defined functions.\n>>> Please let me know if it's better to make another thread.\n> \n> Yeah I think it would be nice to have.\n> \n> I also think it would be better to create a dedicated thread\n> (specially looking at Pavel's comment below)\n\nThank you. I will.\n\n>>> In general, PL/pgSQL functions have multiple queries,\n>>> and users want to know the progress of query execution, doesn't\n>>> it?\n>> \n>> I am afraid of the significant performance impact of this feature.\n>> In this case you have to copy all nested queries to the stat\n>> collector process. Very common usage of PL is a glue of very fast\n>> queries. Sure, it is used like glue for very slow queries too.\n>> Just I thinking about two features:\n\nOK, thanks for much advice and show alternative solutions.\n\n>> 1. extra interface for auto_explain, that allows you to get a stack\n>> of statements assigned to some pid (probably these informations\n>> should be stored inside shared memory and collected before any query\n>> execution). Sometimes some slow function is slow due repeated\n>> execution of relatively fast queries. In this case, the deeper\n>> nested level is not too interesting. You need to see a stack of\n>> calls and you are searching the first slow level in the stack.\n\nThanks. I didn't know auto_explain module.\nI agreed when only requested, it copy the stack of statements.\n\n>> 2. can be nice to have a status column in pg_stat_activity, and\n>> status GUC for sending a custom information from deep levels to the\n>> user. Now, users use application_name, but some special variables\n>> can be better for this purpose. This value of status can be\n>> refreshed periodically and can substitute some tags. So developer\n>> can set\n>> \n>> BEGIN\n>> -- before slow long query\n>> SET status TO 'slow query calculation xxy %d';\n>> ...\n>> \n>> It is a alternative to RAISE NOTICE, but with different format -\n>> with format that is special for reading from pg_stat_activity\n>> \n>> For long (slow) queries usually you need to see the sum of all times\n>> of all levels from the call stack to get valuable information.\n\nIn comparison to 1, user must implements logging statement to\ntheir query but user can control what he/she wants to know.\n\nI worry which solution is best.\n\n>> p.s. pg_stat_activity is maybe too wide table already, and probably\n>> is not good to enhance this table too much\n\nThanks. 
I couldn't think from this point of view.\n\nAfter I make some PoC patches, I will create a dedicated thread.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 28 Aug 2020 17:06:12 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "pá 28. 8. 2020 v 10:06 odesílatel Masahiro Ikeda <ikedamsh@oss.nttdata.com>\nnapsal:\n\n> On 2020-08-19 14:48, Drouvot, Bertrand wrote:\n> > Hi,\n> > On 8/18/20 9:35 AM, Pavel Stehule wrote:\n> >\n> >> Hi\n> >>\n> >> út 18. 8. 2020 v 8:54 odesílatel Masahiro Ikeda\n> >> <ikedamsh@oss.nttdata.com> napsal:\n> >>\n> >>> Hi,\n> >>>\n> >>>> I've attached a patch to display individual query in the\n> >>>> pg_stat_activity query field when multiple SQL statements are\n> >>>> currently displayed.\n> >>>>\n> >>>> Motivation:\n> >>>>\n> >>>> When multiple statements are displayed then we don’t know\n> >>> which\n> >>>> one is currently running.\n> >>>>\n> >>>> I'm not sure I'd want that to happen, as it could make it much\n> >>>> harder to track the activity back to a query in the application\n> >>>> layer or server logs.\n> >>>>\n> >>>> Perhaps a separate field could be added for the current\n> >>> statement,\n> >>>> or a value to indicate what the current statement number in the\n> >>>> query is?\n> >>>\n> >>> As a user, I think this feature is useful to users.\n> >>>\n> >>> It would be nice that pg_stat_activity also show currently running\n> >>> query\n> >>> in a user defined function(PL/pgSQL) .\n> >>>\n> >>> I understood that this patch is not for user defined functions.\n> >>> Please let me know if it's better to make another thread.\n> >\n> > Yeah I think it would be nice to have.\n> >\n> > I also think it would be better to create a dedicated thread\n> > (specially looking at Pavel's comment below)\n>\n> Thank you. I will.\n>\n> >>> In general, PL/pgSQL functions have multiple queries,\n> >>> and users want to know the progress of query execution, doesn't\n> >>> it?\n> >>\n> >> I am afraid of the significant performance impact of this feature.\n> >> In this case you have to copy all nested queries to the stat\n> >> collector process. Very common usage of PL is a glue of very fast\n> >> queries. Sure, it is used like glue for very slow queries too.\n> >> Just I thinking about two features:\n>\n> OK, thanks for much advice and show alternative solutions.\n>\n> >> 1. extra interface for auto_explain, that allows you to get a stack\n> >> of statements assigned to some pid (probably these informations\n> >> should be stored inside shared memory and collected before any query\n> >> execution). Sometimes some slow function is slow due repeated\n> >> execution of relatively fast queries. In this case, the deeper\n> >> nested level is not too interesting. You need to see a stack of\n> >> calls and you are searching the first slow level in the stack.\n>\n> Thanks. I didn't know auto_explain module.\n> I agreed when only requested, it copy the stack of statements.\n>\n> >> 2. can be nice to have a status column in pg_stat_activity, and\n> >> status GUC for sending a custom information from deep levels to the\n> >> user. Now, users use application_name, but some special variables\n> >> can be better for this purpose. This value of status can be\n> >> refreshed periodically and can substitute some tags. 
So developer\n> >> can set\n> >>\n> >> BEGIN\n> >> -- before slow long query\n> >> SET status TO 'slow query calculation xxy %d';\n> >> ...\n> >>\n> >> It is a alternative to RAISE NOTICE, but with different format -\n> >> with format that is special for reading from pg_stat_activity\n> >>\n> >> For long (slow) queries usually you need to see the sum of all times\n> >> of all levels from the call stack to get valuable information.\n>\n> In comparison to 1, user must implements logging statement to\n> their query but user can control what he/she wants to know.\n>\n> I worry which solution is best.\n>\n\nThere is no best solution - @1 doesn't need manual work, but @1 is not too\nuseful when queries are similar (first n chars) and are long. In this case\ncustom messages are much more practical.\n\nI don't think so we can implement only one design - in this case we can\nsupport more tools with similar purpose but different behaviors in corner\ncases.\n\n\n> >> p.s. pg_stat_activity is maybe too wide table already, and probably\n> >> is not good to enhance this table too much\n>\n> Thanks. I couldn't think from this point of view.\n>\n> After I make some PoC patches, I will create a dedicated thread.\n>\n> Regards,\n> --\n> Masahiro Ikeda\n> NTT DATA CORPORATION\n>\n\npá 28. 8. 2020 v 10:06 odesílatel Masahiro Ikeda <ikedamsh@oss.nttdata.com> napsal:On 2020-08-19 14:48, Drouvot, Bertrand wrote:\n> Hi,\n> On 8/18/20 9:35 AM, Pavel Stehule wrote:\n> \n>> Hi\n>> \n>> út 18. 8. 2020 v 8:54 odesílatel Masahiro Ikeda\n>> <ikedamsh@oss.nttdata.com> napsal:\n>> \n>>> Hi,\n>>> \n>>>> I've attached a patch to display individual query in the\n>>>> pg_stat_activity query field when multiple SQL statements are\n>>>> currently displayed.\n>>>> \n>>>> Motivation:\n>>>> \n>>>> When multiple statements are displayed then we don’t know\n>>> which\n>>>> one is currently running.\n>>>> \n>>>> I'm not sure I'd want that to happen, as it could make it much\n>>>> harder to track the activity back to a query in the application\n>>>> layer or server logs.\n>>>> \n>>>> Perhaps a separate field could be added for the current\n>>> statement,\n>>>> or a value to indicate what the current statement number in the\n>>>> query is?\n>>> \n>>> As a user, I think this feature is useful to users.\n>>> \n>>> It would be nice that pg_stat_activity also show currently running\n>>> query\n>>> in a user defined function(PL/pgSQL) .\n>>> \n>>> I understood that this patch is not for user defined functions.\n>>> Please let me know if it's better to make another thread.\n> \n> Yeah I think it would be nice to have.\n> \n> I also think it would be better to create a dedicated thread\n> (specially looking at Pavel's comment below)\n\nThank you. I will.\n\n>>> In general, PL/pgSQL functions have multiple queries,\n>>> and users want to know the progress of query execution, doesn't\n>>> it?\n>> \n>> I am afraid of the significant performance impact of this feature.\n>> In this case you have to copy all nested queries to the stat\n>> collector process. Very common usage of PL is a glue of very fast\n>> queries. Sure, it is used like glue for very slow queries too.\n>> Just I thinking about two features:\n\nOK, thanks for much advice and show alternative solutions.\n\n>> 1. extra interface for auto_explain, that allows you to get a stack\n>> of statements assigned to some pid (probably these informations\n>> should be stored inside shared memory and collected before any query\n>> execution). 
Sometimes some slow function is slow due repeated\n>> execution of relatively fast queries. In this case, the deeper\n>> nested level is not too interesting. You need to see a stack of\n>> calls and you are searching the first slow level in the stack.\n\nThanks. I didn't know auto_explain module.\nI agreed when only requested, it copy the stack of statements.\n\n>> 2. can be nice to have a status column in pg_stat_activity, and\n>> status GUC for sending a custom information from deep levels to the\n>> user. Now, users use application_name, but some special variables\n>> can be better for this purpose.  This value of status can be\n>> refreshed periodically and can substitute some tags. So developer\n>> can set\n>> \n>> BEGIN\n>> -- before slow long query\n>> SET status TO 'slow query calculation xxy %d';\n>> ...\n>> \n>> It is a alternative to RAISE NOTICE, but with different format -\n>> with format that is special for reading from pg_stat_activity\n>> \n>> For long (slow) queries usually you need to see the sum of all times\n>> of all levels from the call stack to get valuable information.\n\nIn comparison to 1, user must implements logging statement to\ntheir query but user can control what he/she wants to know.\n\nI worry which solution is best.There is no best solution - @1 doesn't need manual work, but @1 is not too useful when queries are similar (first n chars) and are long. In this case custom messages are much more practical. I don't think so we can implement only one design - in this case we can support more tools with similar purpose but different behaviors in corner cases.\n\n>> p.s. pg_stat_activity is maybe too wide table already, and probably\n>> is not good to enhance this table too much\n\nThanks. I couldn't think from this point of view.\n\nAfter I make some PoC patches, I will create a dedicated thread.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Fri, 28 Aug 2020 10:42:39 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "On 8/17/20 7:49 AM, Drouvot, Bertrand wrote:\n>\n> Hi,\n>\n> On 8/6/20 12:24 PM, Magnus Hagander wrote:\n>>\n>> On Thu, Aug 6, 2020 at 12:17 PM Drouvot, Bertrand \n>> <bdrouvot@amazon.com <mailto:bdrouvot@amazon.com>> wrote:\n>>\n>> Hi,\n>>\n>> On 7/27/20 4:57 PM, Dave Page wrote:\n>>>\n>>> Hi\n>>>\n>>> On Mon, Jul 27, 2020 at 3:40 PM Drouvot, Bertrand\n>>> <bdrouvot@amazon.com <mailto:bdrouvot@amazon.com>> wrote:\n>>>\n>>> Hi hackers,\n>>>\n>>> I've attached a patch to display individual query in the\n>>> pg_stat_activity query field when multiple SQL statements\n>>> are currently displayed.\n>>>\n>>> _Motivation:_\n>>>\n>>> When multiple statements are displayed then we don’t know\n>>> which one is currently running.\n>>>\n>>>\n>>> I'm not sure I'd want that to happen, as it could make it much\n>>> harder to track the activity back to a query in the application\n>>> layer or server logs.\n>>>\n>>> Perhaps a separate field could be added for the current\n>>> statement, or a value to indicate what the current statement\n>>> number in the query is?\n>>\n>> Thanks for he feedback.\n>>\n>> I like the idea of adding extra information without changing the\n>> current behavior.\n>>\n>> A value to indicate what the current statement number is, would\n>> need parsing the query field by the user to get the individual\n>> statement.\n>>\n>> I think the separate field makes sense (though it come with an\n>> extra 
memory price) as it will not change the existing behavior\n>> and would just provide extra information (without any extra\n>> parsing needed for the user).\n>>\n>>\n>>\n>> Idle though without having considered it too much -- you might reduce \n>> the memory overhead by just storing a start/end offset into the \n>> combined query string instead of a copy of the query.\n>\n> Good point, thanks for the feedback.\n>\n> The new attached patch is making use of stmt_len and stmt_location \n> (instead of a copy of the query).\n>\n>> That way the cost would only be paid when doing the reading of \n>> pg_stat_activity (by extracting the piece of the string), which I'd \n>> argue is done orders of magnitude fewer times than the query changes \n>> at least on busy systems.\n>\n> The individual query extraction (making use of stmt_len and \n> stmt_location) has been moved to pg_stat_get_activity() in the new \n> attached patch (as opposed to pgstat_report_activity() in the previous \n> patch version).\n>\n>> Care would have to be taken for the case of the current executing \n>> query actually being entirely past the end of the query string buffer \n>> of course, but I don't think that's too hard to define a useful \n>> behaviour for. (The user interface would stay the same, showing the \n>> actual string and thus not requiring the user to do any parsing)\n>\n> As a proposal the new attached patch does not display the individual \n> query if length + location is greater than \n> pgstat_track_activity_query_size (anyway it could not, as the query \n> field that might contain multiple statements is already <= \n> pgstat_track_activity_query_size in pg_stat_get_activity()).\n>\n> Bertrand\n>\nAttaching a new version as the previous one was not passing the Patch \nTester anymore.\n\nBertrand\n\n>> -- \n>>  Magnus Hagander\n>>  Me: https://www.hagander.net/ <http://www.hagander.net/>\n>>  Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Thu, 10 Sep 2020 16:06:17 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "On Thu, Sep 10, 2020 at 04:06:17PM +0200, Drouvot, Bertrand wrote:\n> Attaching a new version as the previous one was not passing the Patch Tester\n> anymore.\n\nDitto, the CF bot is complaining again. Could you send a rebase?\n--\nMichael", "msg_date": "Thu, 24 Sep 2020 12:29:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "On 9/24/20 5:29 AM, Michael Paquier wrote:\n> On Thu, Sep 10, 2020 at 04:06:17PM +0200, Drouvot, Bertrand wrote:\n>> Attaching a new version as the previous one was not passing the Patch Tester\n>> anymore.\n> Ditto, the CF bot is complaining again. 
Could you send a rebase?\n\nThanks for letting me know.\n\nAttached a new version.\n\nBertrand", "msg_date": "Sat, 26 Sep 2020 09:37:49 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "Hi,\r\n\r\nI noticed that this patch is failing on the cfbot.\r\nFor this, I changed the status to: 'Waiting on Author'\r\n\r\nCheers,\r\nGeorgios\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Tue, 10 Nov 2020 15:05:28 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Display individual query in pg_stat_activity" }, { "msg_contents": "This patch fails in the cfbot for quite some time now.\r\nI shall close it as Return With Feedback and not move it to the next CF.\r\nPlease feel free to register an updated version afresh for the next CF.\r\n\r\nCheers,\r\n//Georgios", "msg_date": "Tue, 01 Dec 2020 09:10:38 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Display individual query in pg_stat_activity" } ]
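The design the thread above converges on (keep the full multi-statement string in pg_stat_activity and carve out the statement that is currently running only when the view is read) can be illustrated with a small standalone sketch. This is not the patch itself: in the patch the extraction happens inside pg_stat_get_activity(), and only the stmt_location/stmt_len fields and the rule of giving up when location plus length would run past pgstat_track_activity_query_size are taken from the discussion. The function name and the test values below are invented for the example.

/*
 * Standalone sketch of the read-time extraction discussed above.  Given
 * the combined query string shown in pg_stat_activity plus the
 * stmt_location/stmt_len of the statement currently executing, return
 * just that statement, or NULL when it cannot be recovered from the
 * (possibly truncated) activity string.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *
extract_current_statement(const char *activity_query,
                          int stmt_location, int stmt_len,
                          int track_activity_query_size)
{
    char       *result;

    /* stmt_location = -1 means "unknown"; fall back to the whole string */
    if (stmt_location < 0)
        return strdup(activity_query);

    /*
     * If the statement would extend past the truncated activity string,
     * give up rather than return a partial statement.
     */
    if (stmt_len <= 0 ||
        stmt_location + stmt_len > track_activity_query_size ||
        stmt_location + stmt_len > (int) strlen(activity_query))
        return NULL;

    result = malloc(stmt_len + 1);
    if (result == NULL)
        return NULL;
    memcpy(result, activity_query + stmt_location, stmt_len);
    result[stmt_len] = '\0';
    return result;
}

int
main(void)
{
    const char *combined = "SELECT 1; SELECT pg_sleep(10); SELECT 3;";
    /* offsets chosen by hand to select the second statement */
    char       *stmt = extract_current_statement(combined, 10, 20, 1024);

    printf("current statement: %s\n", stmt ? stmt : "(not available)");
    free(stmt);
    return 0;
}

Doing the work at read time rather than in pgstat_report_activity() is what keeps the cost off the fast path: the offsets are cheap to keep around, and the substring is only built when somebody actually selects from pg_stat_activity.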
[ { "msg_contents": "+JsonEncodeDateTime(char *buf, Datum value, Oid typid)\n...\n+ elog(ERROR, \"unknown jsonb value datetime type oid %d\", typid);\n\nI think this should be %u.\n\ncommit cc4feded0a31d2b732d4ea68613115cb720e624e\nAuthor: Andrew Dunstan <andrew@dunslane.net>\nDate: Tue Jan 16 19:07:13 2018 -0500\n\n Centralize json and jsonb handling of datetime types\n\n\n", "msg_date": "Mon, 27 Jul 2020 20:55:23 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "printing oid with %d" }, { "msg_contents": "On Mon, Jul 27, 2020 at 08:55:23PM -0500, Justin Pryzby wrote:\n> +JsonEncodeDateTime(char *buf, Datum value, Oid typid)\n> ...\n> + elog(ERROR, \"unknown jsonb value datetime type oid %d\", typid);\n> \n> I think this should be %u.\n\nGood catch. Yep, Oids are unsigned. We don't backpatch such things\nusually, do we? Particularly, this one should not be triggerable\nnormally because no code paths should call JsonEncodeDateTime() with\nan unsupported type Oid.\n--\nMichael", "msg_date": "Tue, 28 Jul 2020 16:59:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: printing oid with %d" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Good catch. Yep, Oids are unsigned. We don't backpatch such things\n> usually, do we? Particularly, this one should not be triggerable\n> normally because no code paths should call JsonEncodeDateTime() with\n> an unsupported type Oid.\n\nYeah, given that it should be an unreachable case, there's likely\nno need to back-patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Jul 2020 10:35:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: printing oid with %d" }, { "msg_contents": "On Tue, Jul 28, 2020 at 10:35:54AM -0400, Tom Lane wrote:\n> Yeah, given that it should be an unreachable case, there's likely\n> no need to back-patch.\n\nThanks. Fixed on HEAD then.\n--\nMichael", "msg_date": "Wed, 29 Jul 2020 14:58:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: printing oid with %d" } ]
[ { "msg_contents": "Hi Hackers,\n\nWhen partitioned index support was added in veresion 11, the pg_inherits\ndocs missed the memo and still only say it describes table inheritance.\nThe attached patch adds mentions of indexes too, and notes that they can\nnot participate in multiple inheritance.\n\nI don't know what the policy is on backpatching doc fixes, but\npersonally I think it should be.\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen", "msg_date": "Tue, 28 Jul 2020 12:21:29 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Doc patch: mention indexes in pg_inherits docs" }, { "msg_contents": "On Tue, Jul 28, 2020 at 12:21:29PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> When partitioned index support was added in veresion 11, the pg_inherits\n> docs missed the memo and still only say it describes table inheritance.\n> The attached patch adds mentions of indexes too, and notes that they can\n> not participate in multiple inheritance.\n\nWhat you have here looks fine to me. We could be more picky regarding\nthe types or relations that can be added, as it can actually be\npossible to have a partitioned table or index, two relkinds of their\nown, but what you are proposing looks fine enough here.\n\n> I don't know what the policy is on backpatching doc fixes, but\n> personally I think it should be.\n\nThis is actually a bug fix, because we include in the docs some\nincorrect information, so it should be backpatched. If there are no\nobjections, I'll take care of that.\n--\nMichael", "msg_date": "Wed, 29 Jul 2020 15:06:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Doc patch: mention indexes in pg_inherits docs" }, { "msg_contents": "On Wed, Jul 29, 2020 at 03:06:58PM +0900, Michael Paquier wrote:\n> This is actually a bug fix, because we include in the docs some\n> incorrect information, so it should be backpatched. If there are no\n> objections, I'll take care of that.\n\nAnd done.\n--\nMichael", "msg_date": "Thu, 30 Jul 2020 15:53:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Doc patch: mention indexes in pg_inherits docs" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Wed, Jul 29, 2020 at 03:06:58PM +0900, Michael Paquier wrote:\n>> This is actually a bug fix, because we include in the docs some\n>> incorrect information, so it should be backpatched. If there are no\n>> objections, I'll take care of that.\n>\n> And done.\n\nThanks!\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n\n", "msg_date": "Thu, 30 Jul 2020 10:56:22 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Re: Doc patch: mention indexes in pg_inherits docs" } ]
[ { "msg_contents": "An internal server aborted last night while running a maintenance script. I\nreproduced this easily running the crashing command in a loop, and verified\nthis is a live issue on REL_13_STABLE (dc6f2fb43).\nREINDEX INDEX pg_class_tblspc_relfilenode_index\n\nIt looks like this crashed once before, and I didn't notice until now:\nFri Jun 26 22:30:29 CDT 2020: pg_shdescription: pg_toast.pg_toast_2396_index(reindex toast)...\npsql: error: could not connect to server: server closed the connection unexpectedly\n\n#3 0x0000000000afde98 in comparetup_index_btree (a=0x201fa58, b=0x201fa10, state=0x20147b0) at tuplesort.c:4251\n4251 Assert(false);\n(gdb) l\n4246 if (pos1 != pos2)\n4247 return (pos1 < pos2) ? -1 : 1;\n4248 }\n4249\n4250 /* ItemPointer values should never be equal */\n4251 Assert(false);\n4252\n4253 return 0;\n4254 }\n4255\n\n#3 0x0000000000afde98 in comparetup_index_btree (a=0x201fa58, b=0x201fa10, state=0x20147b0) at tuplesort.c:4251\n sortKey = 0x2014d60\n tuple1 = 0x20189d8\n tuple2 = 0x2018898\n keysz = 2\n tupDes = 0x7f48996b3790\n equal_hasnull = false\n nkey = 3\n compare = 0\n datum1 = 67999603\n datum2 = 67999603\n isnull1 = false\n isnull2 = false\n __func__ = \"comparetup_index_btree\"\n\n(gdb) p *tuple1\n$2 = {t_tid = {ip_blkid = {bi_hi = 0, bi_lo = 0}, ip_posid = 43}, t_info = 16}\n(gdb) p *tuple2\n$3 = {t_tid = {ip_blkid = {bi_hi = 0, bi_lo = 0}, ip_posid = 43}, t_info = 16}\n\n(gdb) p *sortKey\n$5 = {ssup_cxt = 0x40, ssup_collation = 0, ssup_reverse = false, ssup_nulls_first = false, ssup_attno = 0, ssup_extra = 0x0, comparator = 0x7f7f7f7f7f7f7f7f, abbreviate = 127, abbrev_converter = 0x7f7f7f7f7f7f7f7f, \n abbrev_abort = 0x7f7f7f7f7f7f7f7f, abbrev_full_comparator = 0x7f7f7f7f7f7f7f7f}\n\n(gdb) p *tupDes\n$6 = {natts = 2, tdtypeid = 0, tdtypmod = -1, tdrefcount = 1, constr = 0x0, attrs = 0x7f48996b37a8}\n\nCore was generated by `postgres: postgres sentinel [local] REINDEX '.\n\n(gdb) bt\n#0 0x00007f489853d1f7 in raise () from /lib64/libc.so.6\n#1 0x00007f489853e8e8 in abort () from /lib64/libc.so.6\n#2 0x0000000000aafff7 in ExceptionalCondition (conditionName=0xccd0dc \"false\", errorType=0xccc327 \"FailedAssertion\", fileName=0xccc2fd \"tuplesort.c\", lineNumber=4251) at assert.c:67\n#3 0x0000000000afde98 in comparetup_index_btree (a=0x201fa58, b=0x201fa10, state=0x20147b0) at tuplesort.c:4251\n#4 0x0000000000af1d5e in qsort_tuple (a=0x201fa10, n=18, cmp_tuple=0xafcf21 <comparetup_index_btree>, state=0x20147b0) at qsort_tuple.c:140\n#5 0x0000000000af2104 in qsort_tuple (a=0x201f710, n=50, cmp_tuple=0xafcf21 <comparetup_index_btree>, state=0x20147b0) at qsort_tuple.c:191\n#6 0x0000000000af2104 in qsort_tuple (a=0x201cc38, n=544, cmp_tuple=0xafcf21 <comparetup_index_btree>, state=0x20147b0) at qsort_tuple.c:191\n#7 0x0000000000af8056 in tuplesort_sort_memtuples (state=0x20147b0) at tuplesort.c:3490\n#8 0x0000000000af51a9 in tuplesort_performsort (state=0x20147b0) at tuplesort.c:1985\n#9 0x0000000000529418 in _bt_leafbuild (btspool=0x1f784e0, btspool2=0x0) at nbtsort.c:553\n#10 0x0000000000528f9c in btbuild (heap=0x1fb5758, index=0x7f48996b3460, indexInfo=0x1f77a48) at nbtsort.c:333\n#11 0x00000000005adcb3 in index_build (heapRelation=0x1fb5758, indexRelation=0x7f48996b3460, indexInfo=0x1f77a48, isreindex=true, parallel=true) at index.c:2903\n#12 0x00000000005aec6b in reindex_index (indexId=3455, skip_constraint_checks=false, persistence=112 'p', options=2) at index.c:3539\n#13 0x0000000000692583 in ReindexIndex (indexRelation=0x1f54840, options=0, 
concurrent=false) at indexcmds.c:2466\n#14 0x0000000000932e36 in standard_ProcessUtility (pstmt=0x1f54960, queryString=0x1f53d90 \"REINDEX INDEX pg_class_tblspc_relfilenode_index\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1f54c40,\n qc=0x7ffde023bf80) at utility.c:929\n#15 0x000000000093241f in ProcessUtility (pstmt=0x1f54960, queryString=0x1f53d90 \"REINDEX INDEX pg_class_tblspc_relfilenode_index\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1f54c40, qc=0x7ffde023bf80)\n at utility.c:524\n#16 0x0000000000931288 in PortalRunUtility (portal=0x1fb5ac0, pstmt=0x1f54960, isTopLevel=true, setHoldSnapshot=false, dest=0x1f54c40, qc=0x7ffde023bf80) at pquery.c:1157\n#17 0x00000000009314a7 in PortalRunMulti (portal=0x1fb5ac0, isTopLevel=true, setHoldSnapshot=false, dest=0x1f54c40, altdest=0x1f54c40, qc=0x7ffde023bf80) at pquery.c:1303\n#18 0x00000000009309bc in PortalRun (portal=0x1fb5ac0, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1f54c40, altdest=0x1f54c40, qc=0x7ffde023bf80) at pquery.c:779\n#19 0x000000000092ab5b in exec_simple_query (query_string=0x1f53d90 \"REINDEX INDEX pg_class_tblspc_relfilenode_index\") at postgres.c:1239\n#20 0x000000000092eb82 in PostgresMain (argc=1, argv=0x1f7db80, dbname=0x1f509d8 \"sentinel\", username=0x1f7daa0 \"pryzbyj\") at postgres.c:4315\n#21 0x000000000087f098 in BackendRun (port=0x1f75a80) at postmaster.c:4523\n#22 0x000000000087e888 in BackendStartup (port=0x1f75a80) at postmaster.c:4215\n#23 0x000000000087ae95 in ServerLoop () at postmaster.c:1727\n#24 0x000000000087a76c in PostmasterMain (argc=5, argv=0x1f4e8e0) at postmaster.c:1400\n#25 0x00000000007823f3 in main (argc=5, argv=0x1f4e8e0) at main.c:210\n\nThis appears to be an issue with d2d8a229b (Incremental Sort), so I will add\nat: https://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items\n\n\n", "msg_date": "Tue, 28 Jul 2020 10:10:02 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "On Tue, Jul 28, 2020 at 11:10 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> An internal server aborted last night while running a maintenance script. I\n> reproduced this easily running the crashing command in a loop, and verified\n> this is a live issue on REL_13_STABLE (dc6f2fb43).\n> REINDEX INDEX pg_class_tblspc_relfilenode_index\n>\n> It looks like this crashed once before, and I didn't notice until now:\n> Fri Jun 26 22:30:29 CDT 2020: pg_shdescription: pg_toast.pg_toast_2396_index(reindex toast)...\n> psql: error: could not connect to server: server closed the connection unexpectedly\n>\n> #3 0x0000000000afde98 in comparetup_index_btree (a=0x201fa58, b=0x201fa10, state=0x20147b0) at tuplesort.c:4251\n> 4251 Assert(false);\n> (gdb) l\n> 4246 if (pos1 != pos2)\n> 4247 return (pos1 < pos2) ? 
-1 : 1;\n> 4248 }\n> 4249\n> 4250 /* ItemPointer values should never be equal */\n> 4251 Assert(false);\n> 4252\n> 4253 return 0;\n> 4254 }\n> 4255\n>\n> #3 0x0000000000afde98 in comparetup_index_btree (a=0x201fa58, b=0x201fa10, state=0x20147b0) at tuplesort.c:4251\n> sortKey = 0x2014d60\n> tuple1 = 0x20189d8\n> tuple2 = 0x2018898\n> keysz = 2\n> tupDes = 0x7f48996b3790\n> equal_hasnull = false\n> nkey = 3\n> compare = 0\n> datum1 = 67999603\n> datum2 = 67999603\n> isnull1 = false\n> isnull2 = false\n> __func__ = \"comparetup_index_btree\"\n>\n> (gdb) p *tuple1\n> $2 = {t_tid = {ip_blkid = {bi_hi = 0, bi_lo = 0}, ip_posid = 43}, t_info = 16}\n> (gdb) p *tuple2\n> $3 = {t_tid = {ip_blkid = {bi_hi = 0, bi_lo = 0}, ip_posid = 43}, t_info = 16}\n>\n> (gdb) p *sortKey\n> $5 = {ssup_cxt = 0x40, ssup_collation = 0, ssup_reverse = false, ssup_nulls_first = false, ssup_attno = 0, ssup_extra = 0x0, comparator = 0x7f7f7f7f7f7f7f7f, abbreviate = 127, abbrev_converter = 0x7f7f7f7f7f7f7f7f,\n> abbrev_abort = 0x7f7f7f7f7f7f7f7f, abbrev_full_comparator = 0x7f7f7f7f7f7f7f7f}\n>\n> (gdb) p *tupDes\n> $6 = {natts = 2, tdtypeid = 0, tdtypmod = -1, tdrefcount = 1, constr = 0x0, attrs = 0x7f48996b37a8}\n>\n> Core was generated by `postgres: postgres sentinel [local] REINDEX '.\n>\n> (gdb) bt\n> #0 0x00007f489853d1f7 in raise () from /lib64/libc.so.6\n> #1 0x00007f489853e8e8 in abort () from /lib64/libc.so.6\n> #2 0x0000000000aafff7 in ExceptionalCondition (conditionName=0xccd0dc \"false\", errorType=0xccc327 \"FailedAssertion\", fileName=0xccc2fd \"tuplesort.c\", lineNumber=4251) at assert.c:67\n> #3 0x0000000000afde98 in comparetup_index_btree (a=0x201fa58, b=0x201fa10, state=0x20147b0) at tuplesort.c:4251\n> #4 0x0000000000af1d5e in qsort_tuple (a=0x201fa10, n=18, cmp_tuple=0xafcf21 <comparetup_index_btree>, state=0x20147b0) at qsort_tuple.c:140\n> #5 0x0000000000af2104 in qsort_tuple (a=0x201f710, n=50, cmp_tuple=0xafcf21 <comparetup_index_btree>, state=0x20147b0) at qsort_tuple.c:191\n> #6 0x0000000000af2104 in qsort_tuple (a=0x201cc38, n=544, cmp_tuple=0xafcf21 <comparetup_index_btree>, state=0x20147b0) at qsort_tuple.c:191\n> #7 0x0000000000af8056 in tuplesort_sort_memtuples (state=0x20147b0) at tuplesort.c:3490\n> #8 0x0000000000af51a9 in tuplesort_performsort (state=0x20147b0) at tuplesort.c:1985\n> #9 0x0000000000529418 in _bt_leafbuild (btspool=0x1f784e0, btspool2=0x0) at nbtsort.c:553\n> #10 0x0000000000528f9c in btbuild (heap=0x1fb5758, index=0x7f48996b3460, indexInfo=0x1f77a48) at nbtsort.c:333\n> #11 0x00000000005adcb3 in index_build (heapRelation=0x1fb5758, indexRelation=0x7f48996b3460, indexInfo=0x1f77a48, isreindex=true, parallel=true) at index.c:2903\n> #12 0x00000000005aec6b in reindex_index (indexId=3455, skip_constraint_checks=false, persistence=112 'p', options=2) at index.c:3539\n> #13 0x0000000000692583 in ReindexIndex (indexRelation=0x1f54840, options=0, concurrent=false) at indexcmds.c:2466\n> #14 0x0000000000932e36 in standard_ProcessUtility (pstmt=0x1f54960, queryString=0x1f53d90 \"REINDEX INDEX pg_class_tblspc_relfilenode_index\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1f54c40,\n> qc=0x7ffde023bf80) at utility.c:929\n> #15 0x000000000093241f in ProcessUtility (pstmt=0x1f54960, queryString=0x1f53d90 \"REINDEX INDEX pg_class_tblspc_relfilenode_index\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1f54c40, qc=0x7ffde023bf80)\n> at utility.c:524\n> #16 0x0000000000931288 in PortalRunUtility (portal=0x1fb5ac0, pstmt=0x1f54960, 
isTopLevel=true, setHoldSnapshot=false, dest=0x1f54c40, qc=0x7ffde023bf80) at pquery.c:1157\n> #17 0x00000000009314a7 in PortalRunMulti (portal=0x1fb5ac0, isTopLevel=true, setHoldSnapshot=false, dest=0x1f54c40, altdest=0x1f54c40, qc=0x7ffde023bf80) at pquery.c:1303\n> #18 0x00000000009309bc in PortalRun (portal=0x1fb5ac0, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1f54c40, altdest=0x1f54c40, qc=0x7ffde023bf80) at pquery.c:779\n> #19 0x000000000092ab5b in exec_simple_query (query_string=0x1f53d90 \"REINDEX INDEX pg_class_tblspc_relfilenode_index\") at postgres.c:1239\n> #20 0x000000000092eb82 in PostgresMain (argc=1, argv=0x1f7db80, dbname=0x1f509d8 \"sentinel\", username=0x1f7daa0 \"pryzbyj\") at postgres.c:4315\n> #21 0x000000000087f098 in BackendRun (port=0x1f75a80) at postmaster.c:4523\n> #22 0x000000000087e888 in BackendStartup (port=0x1f75a80) at postmaster.c:4215\n> #23 0x000000000087ae95 in ServerLoop () at postmaster.c:1727\n> #24 0x000000000087a76c in PostmasterMain (argc=5, argv=0x1f4e8e0) at postmaster.c:1400\n> #25 0x00000000007823f3 in main (argc=5, argv=0x1f4e8e0) at main.c:210\n>\n> This appears to be an issue with d2d8a229b (Incremental Sort), so I will add\n> at: https://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items\n\nIs that assumption largely based on the incremental sort patch\nrefactoring tuplesort.c a bit? I haven't looked at it much at all, but\nI'm wondering if the issue could also be related to the btree\nduplicates changes in 13 given that we're looking at\ncomparetup_index_btree and the datums are equal.\n\nJames\n\n\n", "msg_date": "Tue, 28 Jul 2020 11:40:14 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "On Tue, Jul 28, 2020 at 11:40:14AM -0400, James Coleman wrote:\n> > This appears to be an issue with d2d8a229b (Incremental Sort), so I will add\n> > at: https://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items\n> \n> Is that assumption largely based on the incremental sort patch\n> refactoring tuplesort.c a bit? I haven't looked at it much at all, but\n> I'm wondering if the issue could also be related to the btree\n> duplicates changes in 13 given that we're looking at\n> comparetup_index_btree and the datums are equal.\n\nGood point. I'd looked at something like this to come to my tentative\nconclusion.\n\ngit log --stat origin/REL_12_STABLE..origin/REL_13_STABLE -- src/backend/utils/sort/*tuple*c\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 28 Jul 2020 10:45:41 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer\n values should never be equal" }, { "msg_contents": "On Tue, Jul 28, 2020 at 8:40 AM James Coleman <jtc331@gmail.com> wrote:\n> Is that assumption largely based on the incremental sort patch\n> refactoring tuplesort.c a bit? I haven't looked at it much at all, but\n> I'm wondering if the issue could also be related to the btree\n> duplicates changes in 13 given that we're looking at\n> comparetup_index_btree and the datums are equal.\n\nIt couldn't possibly be the deduplication patch. That didn't change\nanything in tuplesort.c.\n\nThis is very likely to be related to incremental sort because it's a\nuse-after-free issue, which is the kind of thing that could only\nreally happen inside tuplesort.c. 
This is clear because some of the\nvariables have the tell-tale 0x7f7f7f pattern that we written by\nCLOBBER_FREED_MEMORY builds when memory is freed.\n\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 28 Jul 2020 08:47:04 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "On Tue, Jul 28, 2020 at 8:47 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> This is very likely to be related to incremental sort because it's a\n> use-after-free issue, which is the kind of thing that could only\n> really happen inside tuplesort.c. This is clear because some of the\n> variables have the tell-tale 0x7f7f7f pattern that we written by\n> CLOBBER_FREED_MEMORY builds when memory is freed.\n\nActually, I changed my mind. The pointer variable sortKey within\ncomparetup_index_btree() has just been incremented in a way that makes\nit point past the end of allocated memory, without being dereferenced.\nThat part is fine.\n\nI still don't think that it's deduplication, though, because at the\npoint of the crash we haven't even reached _bt_load() yet. That is, we\nhaven't reached nbtsort.c code that is specific to Postgres 13 yet\n(and besides, catalog indexes don't use deduplication in practice).\n\nI wrote the assertion that fails here with the bug that I fixed in\ncommit 4974d7f87e62a58e80c6524e49677cb25cc10e12 in mind specifically.\nThat was a bug that involved a scan that returned duplicate tuples due\nto a problem in heapam_index_build_range_scan() or all of the\ninfrastructure that it depends on (directly and indirectly). I wonder\nif it's something like that -- this is also a system catalog index.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 28 Jul 2020 10:06:03 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "On Tue, Jul 28, 2020 at 10:06 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I wrote the assertion that fails here with the bug that I fixed in\n> commit 4974d7f87e62a58e80c6524e49677cb25cc10e12 in mind specifically.\n> That was a bug that involved a scan that returned duplicate tuples due\n> to a problem in heapam_index_build_range_scan() or all of the\n> infrastructure that it depends on (directly and indirectly). I wonder\n> if it's something like that -- this is also a system catalog index.\n\nIt's starting to look more like that. I can reproduce the bug by\nrunning the REINDEX in a tight loop while \"make installcheck\" runs. It\nlooks as if the two tuples passed to comparetup_index_btree() are\nseparate tuples that each point to the same heap TID.\n\nI have an rr recording of this. It shouldn't take too long to figure\nout what's going on...\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 28 Jul 2020 10:37:13 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "On Tue, Jul 28, 2020 at 10:37 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> It's starting to look more like that. I can reproduce the bug by\n> running the REINDEX in a tight loop while \"make installcheck\" runs. It\n> looks as if the two tuples passed to comparetup_index_btree() are\n> separate tuples that each point to the same heap TID.\n\nEvidently this is an old bug. 
The assertion that fails was added to\nPostgres 12, but that isn't significant. The invariant we see violated\nhere has nothing to do with any of my B-Tree work -- it would have\nbeen reasonable to add the same assertion to Postgres 9.5.\n\nIf I add the same assertion to 9.5 now, I find that the same steps\nreproduce the problem -- \"REINDEX INDEX\npg_class_tblspc_relfilenode_index\" run in a tight loop connected to\nthe regression database, concurrent with a \"make installcheck\".\n\nI still don't know what's going on here, but I find it suspicious that\nsome relevant tuples go through the HEAPTUPLE_INSERT_IN_PROGRESS +\n!TransactionIdIsCurrentTransactionId() path of\nheapam_index_build_range_scan(). After all, if this wasn't a system\ncatalog index then we'd expect to see \"concurrent insert in progress\nwithin table \\\"%s\\\"\" WARNING output.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 28 Jul 2020 12:00:42 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "On Tue, Jul 28, 2020 at 12:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I still don't know what's going on here, but I find it suspicious that\n> some relevant tuples go through the HEAPTUPLE_INSERT_IN_PROGRESS +\n> !TransactionIdIsCurrentTransactionId() path of\n> heapam_index_build_range_scan(). After all, if this wasn't a system\n> catalog index then we'd expect to see \"concurrent insert in progress\n> within table \\\"%s\\\"\" WARNING output.\n\nI also find it suspicious that I can no longer produce the assertion\nfailure once I force all non-unique system catalog indexes (such as\nJustin's repro index, pg_class_tblspc_relfilenode_index) to go through\nthe HEAPTUPLE_INSERT_IN_PROGRESS +\n!TransactionIdIsCurrentTransactionId() handling for unique indexes\nshown here:\n\n /*\n * If we are performing uniqueness checks, indexing\n * such a tuple could lead to a bogus uniqueness\n * failure. In that case we wait for the inserting\n * transaction to finish and check again.\n */\n if (checking_uniqueness)\n {\n /*\n * Must drop the lock on the buffer before we wait\n */\n LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);\n XactLockTableWait(xwait, heapRelation,\n &heapTuple->t_self,\n XLTW_InsertIndexUnique);\n CHECK_FOR_INTERRUPTS();\n goto recheck;\n }\n\nCommenting out \"if (checking_uniqueness)\" here at least *masks* the\nbug. Seemingly by averting problems that the checking_uniqueness code\nwasn't actually designed to solve. I imagine that this factor makes\nthe bug less serious in practice -- most system catalogs are unique\nindexes.\n\nActually...was the code *originally* designed to avoid this issue?\nMight that fact have been lost since HOT was first introduced? Commit\n1ddc2703a93 changed some of the code in question to avoid deadlocks on\nsystem catalogs with new-style VACUUM FULL. 
I wonder if it was a good\nidea to not wait when we weren't checking_uniqueness following that\n2010 commit, though -- we used to wait like this regardless of our\nchecking_uniqueness status.\n\n(I understand that the real problem here may be the way that we can\nrelease locks early for system catalogs, but let's start with\nimmediate causes.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 28 Jul 2020 12:53:09 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> I also find it suspicious that I can no longer produce the assertion\n> failure once I force all non-unique system catalog indexes (such as\n> Justin's repro index, pg_class_tblspc_relfilenode_index) to go through\n> the HEAPTUPLE_INSERT_IN_PROGRESS +\n> !TransactionIdIsCurrentTransactionId() handling for unique indexes\n> shown here:\n\nHmm...\n\n> Actually...was the code *originally* designed to avoid this issue?\n\nNo, I don't think so. It was designed for the case of unique key X\nbeing inserted immediately after a deletion of the same key. The\ndeleted tuple is presumably not yet vacuumed-away, so the new tuple\nshould have a different TID. In no case should we have multiple index\ntuples pointing at the same TID; that would imply that somebody failed\nto vacuuum away an old index entry before freeing up the heap TID.\n\nOr, perhaps, REINDEX is somehow scanning the same TID twice, and\ngenerating indeed-duplicate index entries?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Jul 2020 16:04:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "On Tue, Jul 28, 2020 at 1:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> No, I don't think so. It was designed for the case of unique key X\n> being inserted immediately after a deletion of the same key. The\n> deleted tuple is presumably not yet vacuumed-away, so the new tuple\n> should have a different TID. In no case should we have multiple index\n> tuples pointing at the same TID; that would imply that somebody failed\n> to vacuuum away an old index entry before freeing up the heap TID.\n\nIt looks like one HOT chain. I think cases where the\nvisibility/HeapTupleSatisfiesVacuum() stuff somehow gets confused\ncould result in the same heap TID (which is actually the HOT chain's\nroot TID) getting indexed twice.\n\n> Or, perhaps, REINDEX is somehow scanning the same TID twice, and\n> generating indeed-duplicate index entries?\n\nIt's 100% clear that that's what happens from my rr recording (kind\nof). A conditional breakpoint in _bt_build_callback() clearly shows\nthat it gets called twice for the same TID value (twice in immediate\nsuccession). The first time it gets called in the\n!HeapTupleIsHeapOnlyTuple() path, the second time in the\nHeapTupleIsHeapOnlyTuple() path (i.e. the path that uses the\nroot_offsets array).\n\nI notice that the root tuple of the hot chain is marked HEAP_COMBOCID\n(and xmin == xmax for the HOT chain tuple). 
The xmin for the successor\n(which matches xmin and xmax for root tuple) exactly matches the\nREINDEX/crashing session's OldestXmin.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 28 Jul 2020 13:26:10 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "On Tue, Jul 28, 2020 at 1:26 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Jul 28, 2020 at 1:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > No, I don't think so. It was designed for the case of unique key X\n> > being inserted immediately after a deletion of the same key. The\n> > deleted tuple is presumably not yet vacuumed-away, so the new tuple\n> > should have a different TID. In no case should we have multiple index\n> > tuples pointing at the same TID; that would imply that somebody failed\n> > to vacuuum away an old index entry before freeing up the heap TID.\n>\n> It looks like one HOT chain.\n\nThe fact remains that this function (originally known as\nIndexBuildHeapScan(), now heapam_index_build_range_scan()) did not\ncare about whether or not the index is unique for about 3 years\n(excluding the tupleIsAlive stuff, which was always there, even before\nHOT). The original HOT commit (commit 282d2a03dd3) said nothing about\nunique indexes in the relevant path (the HEAPTUPLE_INSERT_IN_PROGRESS\n+ !TransactionIdIsCurrentTransactionId() \"concurrent system catalog\ninsert\" path). The need to wait here really did seem to be all about\nnot getting duplicate TIDs (i.e. respecting the basic HOT invariant)\nback in 2007.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 28 Jul 2020 14:46:09 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "On Tue, Jul 28, 2020 at 2:46 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> The fact remains that this function (originally known as\n> IndexBuildHeapScan(), now heapam_index_build_range_scan()) did not\n> care about whether or not the index is unique for about 3 years\n> (excluding the tupleIsAlive stuff, which was always there, even before\n> HOT). The original HOT commit (commit 282d2a03dd3) said nothing about\n> unique indexes in the relevant path (the HEAPTUPLE_INSERT_IN_PROGRESS\n> + !TransactionIdIsCurrentTransactionId() \"concurrent system catalog\n> insert\" path). The need to wait here really did seem to be all about\n> not getting duplicate TIDs (i.e. respecting the basic HOT invariant)\n> back in 2007.\n\nI mentioned that the unique index aspect was added by commit 1ddc2703\nin 2010 (the new-style VACUUM FULL deadlock commit that added the \"if\n(checking_uniqueness)\" condition). 
Turns out that that had bugs that\nwere fixed in 2011's commit 520bcd9c9bb (at least I think so based on\na reading of the latter commit's commit message) -- though those were\nin the DELETE_IN_PROGRESS case.\n\nPerhaps 2011's commit 520bcd9c9bb missed similar\nHEAPTUPLE_INSERT_IN_PROGRESS issues that manifest themselves within\nJustin's test case now?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 28 Jul 2020 15:09:58 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "On Tue, Jul 28, 2020 at 3:09 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Perhaps 2011's commit 520bcd9c9bb missed similar\n> HEAPTUPLE_INSERT_IN_PROGRESS issues that manifest themselves within\n> Justin's test case now?\n\nAny further thoughts on this, Tom?\n\nThis is clearly a live bug that we should fix before too long. I could\nwrite the patch myself, but I would like to get your response to my\nanalysis before starting down that road.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 3 Aug 2020 17:37:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Any further thoughts on this, Tom?\n\nSorry for slow response ... my ISP had an equipment failure that took\nout my email service for most of a day.\n\n> This is clearly a live bug that we should fix before too long. I could\n> write the patch myself, but I would like to get your response to my\n> analysis before starting down that road.\n\nYeah. Looking at the code now, I note these relevant comments in\nheapam_index_build_range_scan:\n\n * Also, although our opinions about tuple liveness could change while\n * we scan the page (due to concurrent transaction commits/aborts),\n * the chain root locations won't, so this info doesn't need to be\n * rebuilt after waiting for another transaction.\n *\n * Note the implied assumption that there is no more than one live\n * tuple per HOT-chain --- else we could create more than one index\n * entry pointing to the same root tuple.\n\nThe core of the issue seems to be that in the presence of concurrent\nupdates, we might not have a stable opinion of which entry of the HOT\nchain is live, leading to trying to index multiple ones of them (using\nthe same root TID), which leads to the assertion failure.\n\nAlso relevant is 1ddc2703a93's commit-log comment that\n\n First, teach IndexBuildHeapScan to not wait for INSERT_IN_PROGRESS or\n DELETE_IN_PROGRESS tuples to commit unless the index build is checking\n uniqueness/exclusion constraints. If it isn't, there's no harm in just\n indexing the in-doubt tuple.\n\nI'm not sure if I was considering the HOT-chain case when I wrote that,\nbut \"no harm\" seems clearly wrong in that situation: indexing more than\none in-doubt chain member leads to having multiple index entries pointing\nat the same HOT chain. 
That could be really bad if they have distinct\nindex values (though we do not expect such a case to arise in a system\ncatalog, since broken HOT chains should never occur there).\n\n>> Perhaps 2011's commit 520bcd9c9bb missed similar\n>> HEAPTUPLE_INSERT_IN_PROGRESS issues that manifest themselves within\n>> Justin's test case now?\n\nIn the light of this, it bothers me that the DELETE_IN_PROGRESS case\nhas an exception for HOT chains:\n\n if (checking_uniqueness ||\n HeapTupleIsHotUpdated(heapTuple))\n // wait\n\nwhile the INSERT_IN_PROGRESS case has none. Simple symmetry\nwould suggest that the INSERT_IN_PROGRESS case should be\n\n if (checking_uniqueness ||\n HeapTupleIsHeapOnly(heapTuple))\n // wait\n\nbut I'm not 100% convinced that that's right.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Aug 2020 16:31:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "On Tue, Aug 4, 2020 at 1:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The core of the issue seems to be that in the presence of concurrent\n> updates, we might not have a stable opinion of which entry of the HOT\n> chain is live, leading to trying to index multiple ones of them (using\n> the same root TID), which leads to the assertion failure.\n\nI agree with that assessment. FWIW, I believe that contrib/amcheck\nwill detect this issue on Postgres 12+. If it happened all that often\nthen we probably would have heard about it by now.\n\nBTW, I backpatched the assertion that fails. All branches have it now.\nIt might not help, but it certainly can't hurt.\n\n> I'm not sure if I was considering the HOT-chain case when I wrote that,\n> but \"no harm\" seems clearly wrong in that situation: indexing more than\n> one in-doubt chain member leads to having multiple index entries pointing\n> at the same HOT chain. That could be really bad if they have distinct\n> index values (though we do not expect such a case to arise in a system\n> catalog, since broken HOT chains should never occur there).\n\nI think that it might accidentally be okay for those reasons, though I\nhave a hard time imagining that that's what you meant back then. I\ndoubt that the exact consequences of the problem will affect what the\nfix looks like now, so this may be somewhat of an academic question.\n\n> In the light of this, it bothers me that the DELETE_IN_PROGRESS case\n> has an exception for HOT chains:\n>\n> if (checking_uniqueness ||\n> HeapTupleIsHotUpdated(heapTuple))\n> // wait\n>\n> while the INSERT_IN_PROGRESS case has none. Simple symmetry\n> would suggest that the INSERT_IN_PROGRESS case should be\n>\n> if (checking_uniqueness ||\n> HeapTupleIsHeapOnly(heapTuple))\n> // wait\n\nI had exactly the same intuition.\n\n> but I'm not 100% convinced that that's right.\n\nWhy doubt that explanation?\n\nAs I've said, it's clear that the original HOT commit imagined that\nthis wait business was all about avoiding confusion about which heap\ntuple to index for the HOT chain -- nothing more or less than that.\nThe simplest explanation seems to be that 1ddc2703a93 missed that\nsubtlety. When some (though not all) of the problems came to light a\nfew years later, 520bcd9c9bb didn't go far enough. 
We know that\n1ddc2703a93 got the DELETE_IN_PROGRESS stuff wrong -- why doubt that\nit also got the INSERT_IN_PROGRESS stuff wrong?\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 4 Aug 2020 14:49:54 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Tue, Aug 4, 2020 at 1:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> while the INSERT_IN_PROGRESS case has none. Simple symmetry\n>> would suggest that the INSERT_IN_PROGRESS case should be\n>> \n>> \tif (checking_uniqueness ||\n>> \t HeapTupleIsHeapOnly(heapTuple))\n>> \t // wait\n\n> I had exactly the same intuition.\n\n>> but I'm not 100% convinced that that's right.\n\n> Why doubt that explanation?\n\nFirst, it's not clear that this is an exact inverse, because\nHeapTupleIsHotUpdated does more than check the HOT_UPDATED flag.\nSecond, I think there remains some doubt as to whether the\nDELETE_IN_PROGRESS case is right either. If we were forcing\na wait for *every* in-doubt HOT-chain element, not only non-last\nones (or non-first ones for the INSERT case, if we use the above\nfix) then it'd be quite clear that we're safe. But if we want\nto keep the optimization then I think maybe closer analysis is\nwarranted.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Aug 2020 18:00:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "I have nothing new to add, but wanted to point out this is still an issue.\n\nThis is on the v13 Opened Items list - for lack of anywhere else to put them, I\nalso added two other, unresolved issues.\n\nhttps://wiki.postgresql.org/index.php?title=PostgreSQL_13_Open_Items&type=revision&diff=35624&oldid=35352\n\nOn Tue, Aug 04, 2020 at 06:00:34PM -0400, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > On Tue, Aug 4, 2020 at 1:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> while the INSERT_IN_PROGRESS case has none. Simple symmetry\n> >> would suggest that the INSERT_IN_PROGRESS case should be\n> >> \n> >> \tif (checking_uniqueness ||\n> >> \t HeapTupleIsHeapOnly(heapTuple))\n> >> \t // wait\n> \n> > I had exactly the same intuition.\n> \n> >> but I'm not 100% convinced that that's right.\n> \n> > Why doubt that explanation?\n> \n> First, it's not clear that this is an exact inverse, because\n> HeapTupleIsHotUpdated does more than check the HOT_UPDATED flag.\n> Second, I think there remains some doubt as to whether the\n> DELETE_IN_PROGRESS case is right either. If we were forcing\n> a wait for *every* in-doubt HOT-chain element, not only non-last\n> ones (or non-first ones for the INSERT case, if we use the above\n> fix) then it'd be quite clear that we're safe. 
But if we want\n> to keep the optimization then I think maybe closer analysis is\n> warranted.\n\n\n", "msg_date": "Tue, 26 Jan 2021 14:33:44 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer\n values should never be equal" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I have nothing new to add, but wanted to point out this is still an issue.\n> This is on the v13 Opened Items list - for lack of anywhere else to put them, I\n> also added two other, unresolved issues.\n\nIt's probably time to make a v14 open items page, and move anything\nyou want to treat as a live issue to there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Jan 2021 16:39:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > I have nothing new to add, but wanted to point out this is still an issue.\n> > This is on the v13 Opened Items list - for lack of anywhere else to put them, I\n> > also added two other, unresolved issues.\n\nTwo minor things to add:\n1. This issue is still reproducible on 15Beta2 (c1d033fcb5) - Backtrace here [2]\n2. There was a mention that amcheck could throw up errors, but despite quickly\nstopping the workload, I didn't find anything interesting [1].\n\n\n(gdb) frame 3\n#3 0x000055bf9fe496c8 in comparetup_index_btree (a=0x7f0a1a50ba80,\nb=0x7f0a1a50b9d8, state=0x55bfa19ce960) at tuplesort.c:4454\n4454 Assert(false);\n\n(gdb) info locals\nsortKey = 0x55bfa19cef10\ntuple1 = 0x7f0a1a563ee0\ntuple2 = 0x7f0a1a5642a0\nkeysz = 2\ntupDes = 0x7f0a1a9e66a8\nequal_hasnull = false\nnkey = 3\ncompare = 0\ndatum1 = 2085305\ndatum2 = 2085305\nisnull1 = false\nisnull2 = false\n__func__ = \"comparetup_index_btree\"\n\n(gdb) p *tuple1\n$5 = {t_tid = {ip_blkid = {bi_hi = 0, bi_lo = 205}, ip_posid = 3}, t_info = 16}\n\n(gdb) p *tuple2\n$9 = {t_tid = {ip_blkid = {bi_hi = 0, bi_lo = 205}, ip_posid = 3}, t_info = 16}\n\n(gdb) p *sortKey\n$7 = {ssup_cxt = 0x40, ssup_collation = 0, ssup_reverse = false,\nssup_nulls_first = false, ssup_attno = 0, ssup_extra = 0x0, comparator\n= 0x7f7f7f7f7f7f7f7f, abbreviate = 127,\n abbrev_converter = 0x7f7f7f7f7f7f7f7f, abbrev_abort =\n0x7f7f7f7f7f7f7f7f, abbrev_full_comparator = 0x7f7f7f7f7f7f7f7f}\n\n(gdb) p *tupDes\n$8 = {natts = 2, tdtypeid = 2249, tdtypmod = -1, tdrefcount = 1,\nconstr = 0x0, attrs = 0x7f0a1a9e66c0}\n\nReference:\n1) postgres=# select\nbt_index_parent_check('pg_class_tblspc_relfilenode_index', true,\ntrue);\n bt_index_parent_check\n-----------------------\n\n(1 row)\n\npostgres=# select bt_index_check('pg_class_tblspc_relfilenode_index', true);\n bt_index_check\n----------------\n\n(1 row)\n\n\n2) Backtrace -\n(gdb) bt\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007f0a25692859 in __GI_abort () at abort.c:79\n#2 0x000055bf9fdf1718 in ExceptionalCondition\n(conditionName=0x55bfa0036a0b \"false\", errorType=0x55bfa0035d5e\n\"FailedAssertion\", fileName=0x55bfa0035dbd \"tuplesort.c\",\nlineNumber=4454)\n at assert.c:69\n#3 0x000055bf9fe496c8 in comparetup_index_btree (a=0x7f0a1a50ba80,\nb=0x7f0a1a50b9d8, state=0x55bfa19ce960) at tuplesort.c:4454\n#4 0x000055bf9fe40901 in qsort_tuple (data=0x7f0a1a50b8e8, n=13,\ncompare=0x55bf9fe484ab <comparetup_index_btree>, arg=0x55bfa19ce960)\nat 
../../../../src/include/lib/sort_template.h:341\n#5 0x000055bf9fe40b2f in qsort_tuple (data=0x7f0a1a50b3a8, n=61,\ncompare=0x55bf9fe484ab <comparetup_index_btree>, arg=0x55bfa19ce960)\nat ../../../../src/include/lib/sort_template.h:378\n#6 0x000055bf9fe40b9f in qsort_tuple (data=0x7f0a1a509e78, n=343,\ncompare=0x55bf9fe484ab <comparetup_index_btree>, arg=0x55bfa19ce960)\nat ../../../../src/include/lib/sort_template.h:392\n#7 0x000055bf9fe40b2f in qsort_tuple (data=0x7f0a1a509e78, n=833,\ncompare=0x55bf9fe484ab <comparetup_index_btree>, arg=0x55bfa19ce960)\nat ../../../../src/include/lib/sort_template.h:378\n#8 0x000055bf9fe40b9f in qsort_tuple (data=0x7f0a1a4d9050, n=2118,\ncompare=0x55bf9fe484ab <comparetup_index_btree>, arg=0x55bfa19ce960)\nat ../../../../src/include/lib/sort_template.h:392\n#9 0x000055bf9fe46df8 in tuplesort_sort_memtuples\n(state=0x55bfa19ce960) at tuplesort.c:3698\n#10 0x000055bf9fe44043 in tuplesort_performsort (state=0x55bfa19ce960)\nat tuplesort.c:2217\n#11 0x000055bf9f783a85 in _bt_leafbuild (btspool=0x55bfa1913318,\nbtspool2=0x0) at nbtsort.c:559\n#12 0x000055bf9f7835a6 in btbuild (heap=0x7f0a1a9df940,\nindex=0x7f0a1a9e2898, indexInfo=0x55bfa19bc740) at nbtsort.c:336\n#13 0x000055bf9f81c8cc in index_build (heapRelation=0x7f0a1a9df940,\nindexRelation=0x7f0a1a9e2898, indexInfo=0x55bfa19bc740,\nisreindex=true, parallel=true) at index.c:3018\n#14 0x000055bf9f81dbe6 in reindex_index (indexId=3455,\nskip_constraint_checks=false, persistence=112 'p',\nparams=0x7ffcfa60a524) at index.c:3718\n#15 0x000055bf9f925148 in ReindexIndex (indexRelation=0x55bfa18f09a0,\nparams=0x7ffcfa60a598, isTopLevel=true) at indexcmds.c:2727\n#16 0x000055bf9f924f67 in ExecReindex (pstate=0x55bfa1913070,\nstmt=0x55bfa18f09f8, isTopLevel=true) at indexcmds.c:2651\n#17 0x000055bf9fc3397f in standard_ProcessUtility\n(pstmt=0x55bfa18f0d48, queryString=0x55bfa18eff30 \"REINDEX INDEX\npg_class_tblspc_relfilenode_index;\", readOnlyTree=false,\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\ndest=0x55bfa18f0e38, qc=0x7ffcfa60ad20) at utility.c:960\n#18 0x00007f0a251d6887 in pgss_ProcessUtility (pstmt=0x55bfa18f0d48,\nqueryString=0x55bfa18eff30 \"REINDEX INDEX\npg_class_tblspc_relfilenode_index;\", readOnlyTree=false,\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\ndest=0x55bfa18f0e38, qc=0x7ffcfa60ad20) at pg_stat_statements.c:1143\n#19 0x000055bf9fc32d34 in ProcessUtility (pstmt=0x55bfa18f0d48,\nqueryString=0x55bfa18eff30 \"REINDEX INDEX\npg_class_tblspc_relfilenode_index;\", readOnlyTree=false,\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\ndest=0x55bfa18f0e38, qc=0x7ffcfa60ad20) at utility.c:526\n#20 0x000055bf9fc3180e in PortalRunUtility (portal=0x55bfa197d020,\npstmt=0x55bfa18f0d48, isTopLevel=true, setHoldSnapshot=false,\ndest=0x55bfa18f0e38, qc=0x7ffcfa60ad20) at pquery.c:1158\n#21 0x000055bf9fc31a84 in PortalRunMulti (portal=0x55bfa197d020,\nisTopLevel=true, setHoldSnapshot=false, dest=0x55bfa18f0e38,\naltdest=0x55bfa18f0e38, qc=0x7ffcfa60ad20) at pquery.c:1315\n#22 0x000055bf9fc30ef1 in PortalRun (portal=0x55bfa197d020,\ncount=9223372036854775807, isTopLevel=true, run_once=true,\ndest=0x55bfa18f0e38, altdest=0x55bfa18f0e38, qc=0x7ffcfa60ad20)\n at pquery.c:791\n#23 0x000055bf9fc2a14f in exec_simple_query\n(query_string=0x55bfa18eff30 \"REINDEX INDEX\npg_class_tblspc_relfilenode_index;\") at postgres.c:1250\n#24 0x000055bf9fc2ecdf in PostgresMain (dbname=0x55bfa1923be0\n\"postgres\", username=0x55bfa18eb8f8 \"ubuntu\") at postgres.c:4544\n#25 
0x000055bf9fb52e93 in BackendRun (port=0x55bfa19218a0) at postmaster.c:4504\n#26 0x000055bf9fb52778 in BackendStartup (port=0x55bfa19218a0) at\npostmaster.c:4232\n#27 0x000055bf9fb4ea5e in ServerLoop () at postmaster.c:1806\n#28 0x000055bf9fb4e1f7 in PostmasterMain (argc=3, argv=0x55bfa18e9830)\nat postmaster.c:1478\n#29 0x000055bf9fa3f864 in main (argc=3, argv=0x55bfa18e9830) at main.c:202\n\n-\nRobins Tharakan\nAmazon Web Services\n\n\n", "msg_date": "Wed, 29 Jun 2022 22:13:19 +0930", "msg_from": "Robins Tharakan <tharakan@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "\n\n> On 29 Jun 2022, at 17:43, Robins Tharakan <tharakan@gmail.com> wrote:\n\n\nSorry to bump ancient thread, I have some observations that might or might not be relevant.\nRecently we noticed a corruption on one of clusters. The corruption at hand is not in system catalog, but in user indexes.\nThe cluster was correctly configured: checksums, fsync, FPI etc.\nThe cluster never was restored from a backup. It’s a single-node cluster, so it was not ever promoted, pg_rewind-ed etc. VM had never been rebooted.\n\nBut, the cluster had been experiencing 10 OOMs a day. There were no torn pages, no checsum erros at log at all. Yet, B-tree indexes became corrupted.\n\n\nSorry for this wall of text, I’m posing everything as-is in case if there is some useful information.\n\n$ /etc/cron.yandex/pg_corruption_check.py --index\n2024-03-01 11:54:05,075 ERROR : Corrupted index: 96009 table1_table1message_table1_team_identity_06a95642 XX002 ERROR: posting list contains misplaced TID in index \"table1_table1message_table1_team_identity_06a95642\" DETAIL: Index tid=(267,34) posting list offset=137 page lsn=31B/62159608.\n2024-03-01 11:54:05,100 ERROR : Corrupted index: 96008 table1_table1message_organization_id_66c18ed2 XX002 ERROR: posting list contains misplaced TID in index \"table1_table1message_organization_id_66c18ed2\" DETAIL: Index tid=(267,34) posting list offset=137 page lsn=31B/62158BC8.\n2024-03-01 11:54:05,355 ERROR : Corrupted index: 95804 table2_aler_channel_81aeec_idx XX002 ERROR: posting list contains misplaced TID in index \"table2_aler_channel_81aeec_idx\" DETAIL: Index tid=(336,7) posting list offset=182 page lsn=314/9B794248.\n2024-03-01 11:54:05,716 ERROR : Corrupted index: 95816 table2_table3_channel_id_91a1912f XX002 ERROR: posting list contains misplaced TID in index \"table2_table3_channel_id_91a1912f\" DETAIL: Index tid=(384,2) posting list offset=72 page lsn=317/3F14F390.\n2024-03-01 11:54:06,068 ERROR : Corrupted index: 95815 table2_table3_channel_filter_id_6706c8b6 XX002 ERROR: posting list contains misplaced TID in index \"table2_table3_channel_filter_id_6706c8b6\" DETAIL: Index tid=(380,2) posting list offset=72 page lsn=317/3F0D8E30.\n2024-03-01 11:54:06,302 ERROR : Corrupted index: 95824 table2_table3_root_alert_group_id_f327f122 XX002 ERROR: item order invariant violated for index \"table2_table3_root_alert_group_id_f327f122\" DETAIL: Lower index tid=(368,204) (points to heap tid=(48901,2)) higher index tid=(368,205) (points to heap tid=(48901,2)) page lsn=319/3C234588.\n2024-03-01 11:54:06,538 ERROR : Corrupted index: 95810 table2_table3_acknowledged_by_user_id_dd6723dc XX002 ERROR: posting list contains misplaced TID in index \"table2_table3_acknowledged_by_user_id_dd6723dc\" DETAIL: Index tid=(380,69) posting list offset=35 page lsn=317/C14E2D50.\n2024-03-01 11:54:06,775 ERROR : 
Corrupted index: 95825 table2_table3_silenced_by_user_id_40a833a1 XX002 ERROR: posting list contains misplaced TID in index \"table2_table3_silenced_by_user_id_40a833a1\" DETAIL: Index tid=(371,11) posting list offset=144 page lsn=318/61171918.\n2024-03-01 11:54:07,009 ERROR : Corrupted index: 95829 table2_table3_wiped_by_id_4326ff61 XX002 ERROR: item order invariant violated for index \"table2_table3_wiped_by_id_4326ff61\" DETAIL: Lower index tid=(373,97) (points to heap tid=(48901,2)) higher index tid=(373,98) (points to heap tid=(48901,2)) page lsn=318/61172788.\n2024-03-01 11:54:07,245 ERROR : Corrupted index: 95823 table2_table3_resolved_by_user_id_463cdf3d XX002 ERROR: posting list contains misplaced TID in index \"table2_table3_resolved_by_user_id_463cdf3d\" DETAIL: Index tid=(375,89) posting list offset=144 page lsn=319/3C1DCFC8.\n2024-03-01 11:54:07,479 ERROR : Corrupted index: 95819 table2_table3_maintenance_uuid_9a7b8529_like XX002 ERROR: item order invariant violated for index \"table2_table3_maintenance_uuid_9a7b8529_like\" DETAIL: Lower index tid=(372,4) (points to heap tid=(48901,2)) higher index tid=(372,5) (points to heap tid=(48901,2)) page lsn=317/C1A210A8.\n2024-03-01 11:54:07,717 ERROR : Corrupted index: 95827 table2_table3_table1_message_id_58a31784_like XX002 ERROR: posting list contains misplaced TID in index \"table2_table3_table1_message_id_58a31784_like\" DETAIL: Index tid=(373,89) posting list offset=144 page lsn=319/3C3EE660.\n2024-03-01 11:54:08,162 ERROR : Corrupted index: 96066 webhooks_webhookresponse_webhook_id_db49ebcd XX002 ERROR: item order invariant violated for index \"webhooks_webhookresponse_webhook_id_db49ebcd\" DETAIL: Lower index tid=(522,24) (points to heap tid=(73981,1)) higher index tid=(522,25) (points to heap tid=(73981,1)) page lsn=31B/E522B640.\n2024-03-01 11:54:08,646 ERROR : Corrupted index: 95822 table2_table3_resolved_by_alert_id_bbdf0a83 XX002 ERROR: posting list contains misplaced TID in index \"table2_table3_resolved_by_alert_id_bbdf0a83\" DETAIL: Index tid=(618,2) posting list offset=150 page lsn=317/C1DE74B8.\n2024-03-01 11:54:08,873 ERROR : Corrupted index: 95427 table2_table3_table1_message_id_key XX002 ERROR: item order invariant violated for index \"table2_table3_table1_message_id_key\" DETAIL: Lower index tid=(369,134) (points to heap tid=(48901,2)) higher index tid=(369,135) (points to heap tid=(48901,2)) page lsn=319/3B629E58.\n2024-03-01 11:54:09,108 ERROR : Corrupted index: 95417 table2_table3_maintenance_uuid_key XX002 ERROR: posting list contains misplaced TID in index \"table2_table3_maintenance_uuid_key\" DETAIL: Index tid=(371,42) posting list offset=47 page lsn=318/6116FC50.\n2024-03-01 11:54:10,180 ERROR : Corrupted index: 95826 table2_table3_table1_log_message_id_587aaa8d_like XX002 ERROR: posting list contains misplaced TID in index \"table2_table3_table1_log_message_id_587aaa8d_like\" DETAIL: Index tid=(849,19) posting list offset=79 page lsn=319/3C389B60.\n2024-03-01 11:54:10,689 ERROR : Corrupted index: 95820 table2_table3_mattermost_log_message_id_69bc2ae4_like XX002 ERROR: item order invariant violated for index \"table2_table3_mattermost_log_message_id_69bc2ae4_like\" DETAIL: Lower index tid=(559,4) (points to heap tid=(48901,2)) higher index tid=(559,5) (points to heap tid=(48901,2)) page lsn=317/C1A7BA50.\n2024-03-01 11:54:11,760 ERROR : Corrupted index: 95425 table2_table3_table1_log_message_id_key XX002 ERROR: item order invariant violated for index \"table2_table3_table1_log_message_id_key\" DETAIL: 
Lower index tid=(849,22) (points to heap tid=(48901,2)) higher index tid=(849,23) (points to heap tid=(48901,2)) page lsn=317/3E7EC1F0.\n2024-03-01 11:54:12,282 ERROR : Corrupted index: 95419 table2_table3_mattermost_log_message_id_key XX002 ERROR: posting list contains misplaced TID in index \"table2_table3_mattermost_log_message_id_key\" DETAIL: Index tid=(566,84) posting list offset=65 page lsn=319/3B1901F8.\n2024-03-01 11:54:17,990 ERROR : Corrupted index: 95423 table2_table3_public_primary_key_key XX002 ERROR: cross page item order invariant violated for index \"table2_table3_public_primary_key_key\" DETAIL: Last item on page tid=(727,146) page lsn=31B/E104D660.\n\n\nMost of these messages look similar, except last one: “cross page item order invariant violated for index”. Indeed, index scans were hanging in a cycle.\nI could not locate problem in WAL yet, because a lot of other stuff is going on. But I have no other ideas, but suspect that posting list redo is corrupting index in case of a crash.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 21 Mar 2024 11:16:42 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "On Thu, Mar 21, 2024 at 2:16 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> Most of these messages look similar, except last one: “cross page item order invariant violated for index”. Indeed, index scans were hanging in a cycle.\n> I could not locate problem in WAL yet, because a lot of other stuff is going on. But I have no other ideas, but suspect that posting list redo is corrupting index in case of a crash.\n\nSome of these errors seem unrelated to posting lists. For example, this one:\n\n2024-03-01 11:54:08,162 ERROR : Corrupted index: 96066\nwebhooks_webhookresponse_webhook_id_db49ebcd XX002 ERROR: item order\ninvariant violated for index\n\"webhooks_webhookresponse_webhook_id_db49ebcd\" DETAIL: Lower index\ntid=(522,24) (points to heap tid=(73981,1)) higher index tid=(522,25)\n(points to heap tid=(73981,1)) page lsn=31B/E522B640.\n\nNotice that there are duplicate heap TIDs here, but no posting list.\nThis is almost certainly a symptom of heap related corruption -- often\na problem with recovery. Do the posting lists that are corrupt\n(reported on elsewhere) also have duplicate TIDs?\n\nSuch problems tend to first get noticed when inserts fail with\n\"posting list split failed\" errors -- but that's just a symptom. It\njust so happens that the hardening added to places like\n_bt_swap_posting() and _bt_binsrch_insert() is much more likely to\nvisibly break than anything else, at least in practice.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 21 Mar 2024 09:54:58 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "On Thu, 21 Mar 2024 at 07:17, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> > On 29 Jun 2022, at 17:43, Robins Tharakan <tharakan@gmail.com> wrote:\n>\n> Sorry to bump ancient thread, I have some observations that might or might not be relevant.\n> Recently we noticed a corruption on one of clusters. The corruption at hand is not in system catalog, but in user indexes.\n> The cluster was correctly configured: checksums, fsync, FPI etc.\n> The cluster never was restored from a backup. 
It’s a single-node cluster, so it was not ever promoted, pg_rewind-ed etc. VM had never been rebooted.\n>\n> But, the cluster had been experiencing 10 OOMs a day. There were no torn pages, no checsum erros at log at all. Yet, B-tree indexes became corrupted.\n\nWould you happen to have a PostgreSQL version number (or commit hash)\nto help debugging? Has it always had that PG version, or has the\nversion been upgraded?\n\nConsidering the age of this thread, and thus potential for v14 pre-.4:\nDid this cluster use REINDEX (concurrently) for the relevant indexes?\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Thu, 21 Mar 2024 16:21:20 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" }, { "msg_contents": "\n\n> On 21 Mar 2024, at 18:54, Peter Geoghegan <pg@bowt.ie> wrote:\n> Do the posting lists that are corrupt\n> (reported on elsewhere) also have duplicate TIDs?\n\nI do not have access now, but AFAIR yes.\nThanks for pointers!\n\nBTW there were also some errors in logs like\nERROR: index \"tablename\" contains unexpected zero page at block 1265985 HINT:\nand even\nMultiXactId 34043703 has not been created yet -- apparent wraparound\n\"right sibling's left-link doesn't match: right sibling 4 of target 2 with leafblkno 2 and scanblkno 2 spuriously links to non-target 1 on level 0 of index \"indexname\"\n\nMultixact problem was fixed by vacuum freeze, other indexes were repacked.\n\n\n> On 21 Mar 2024, at 20:21, Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n> \n> Would you happen to have a PostgreSQL version number (or commit hash)\n> to help debugging? Has it always had that PG version, or has the\n> version been upgraded?\n\nVanilla 14.11 (14.10 when created).\n\n> Considering the age of this thread, and thus potential for v14 pre-.4:\n> Did this cluster use REINDEX (concurrently) for the relevant indexes?\n\n\nAs now I see, chances are my case is not related to the original thread.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 21 Mar 2024 21:08:57 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: 13dev failed assert: comparetup_index_btree(): ItemPointer values\n should never be equal" } ]
[ { "msg_contents": "Could maybe backpatch to v10.\n\ndiff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\nindex 272f799c24..06ef658afb 100644\n--- a/src/bin/psql/tab-complete.c\n+++ b/src/bin/psql/tab-complete.c\n@@ -578,14 +578,23 @@ static const SchemaQuery Query_for_list_of_vacuumables = {\n \t.catname = \"pg_catalog.pg_class c\",\n \t.selcondition =\n \t\"c.relkind IN (\" CppAsString2(RELKIND_RELATION) \", \"\n+\tCppAsString2(RELKIND_PARTITIONED_TABLE) \", \"\n \tCppAsString2(RELKIND_MATVIEW) \")\",\n \t.viscondition = \"pg_catalog.pg_table_is_visible(c.oid)\",\n \t.namespace = \"c.relnamespace\",\n \t.result = \"pg_catalog.quote_ident(c.relname)\",\n };\n \n-/* Relations supporting CLUSTER are currently same as those supporting VACUUM */\n-#define Query_for_list_of_clusterables Query_for_list_of_vacuumables\n+/* Relations supporting CLUSTER */\n+static const SchemaQuery Query_for_list_of_clusterables = {\n+\t.catname = \"pg_catalog.pg_class c\",\n+\t.selcondition =\n+\t\"c.relkind IN (\" CppAsString2(RELKIND_RELATION) \", \"\n+\tCppAsString2(RELKIND_MATVIEW) \")\",\n+\t.viscondition = \"pg_catalog.pg_table_is_visible(c.oid)\",\n+\t.namespace = \"c.relnamespace\",\n+\t.result = \"pg_catalog.quote_ident(c.relname)\",\n+};\n \n static const SchemaQuery Query_for_list_of_constraints_with_schema = {\n \t.catname = \"pg_catalog.pg_constraint c\",", "msg_date": "Tue, 28 Jul 2020 12:04:08 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "[PATCH] Tab completion for VACUUM of partitioned tables" }, { "msg_contents": "On Wed, 29 Jul 2020 at 02:04, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Could maybe backpatch to v10.\n>\n> diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\n> index 272f799c24..06ef658afb 100644\n> --- a/src/bin/psql/tab-complete.c\n> +++ b/src/bin/psql/tab-complete.c\n> @@ -578,14 +578,23 @@ static const SchemaQuery Query_for_list_of_vacuumables = {\n> .catname = \"pg_catalog.pg_class c\",\n> .selcondition =\n> \"c.relkind IN (\" CppAsString2(RELKIND_RELATION) \", \"\n> + CppAsString2(RELKIND_PARTITIONED_TABLE) \", \"\n> CppAsString2(RELKIND_MATVIEW) \")\",\n> .viscondition = \"pg_catalog.pg_table_is_visible(c.oid)\",\n> .namespace = \"c.relnamespace\",\n> .result = \"pg_catalog.quote_ident(c.relname)\",\n> };\n>\n> -/* Relations supporting CLUSTER are currently same as those supporting VACUUM */\n> -#define Query_for_list_of_clusterables Query_for_list_of_vacuumables\n> +/* Relations supporting CLUSTER */\n> +static const SchemaQuery Query_for_list_of_clusterables = {\n> + .catname = \"pg_catalog.pg_class c\",\n> + .selcondition =\n> + \"c.relkind IN (\" CppAsString2(RELKIND_RELATION) \", \"\n> + CppAsString2(RELKIND_MATVIEW) \")\",\n> + .viscondition = \"pg_catalog.pg_table_is_visible(c.oid)\",\n> + .namespace = \"c.relnamespace\",\n> + .result = \"pg_catalog.quote_ident(c.relname)\",\n> +};\n>\n> static const SchemaQuery Query_for_list_of_constraints_with_schema = {\n> .catname = \"pg_catalog.pg_constraint c\",\n\nGood catch. 
The patch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 29 Jul 2020 13:27:07 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tab completion for VACUUM of partitioned tables" }, { "msg_contents": "On Wed, Jul 29, 2020 at 01:27:07PM +0900, Masahiko Sawada wrote:\n> Good catch. The patch looks good to me.\n\nWhile this patch is logically correct. I think that we should try to\nnot increase more the number of queries used to scan pg_class\ndepending on a list of relkinds. For example, did you notice that\nyour new Query_for_list_of_vacuumables becomes the same query as\nQuery_for_list_of_indexables? You can make your patch more simple.\n--\nMichael", "msg_date": "Wed, 29 Jul 2020 15:21:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tab completion for VACUUM of partitioned tables" }, { "msg_contents": "On Wed, 29 Jul 2020 at 15:21, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 29, 2020 at 01:27:07PM +0900, Masahiko Sawada wrote:\n> > Good catch. The patch looks good to me.\n>\n> While this patch is logically correct. I think that we should try to\n> not increase more the number of queries used to scan pg_class\n> depending on a list of relkinds. For example, did you notice that\n> your new Query_for_list_of_vacuumables becomes the same query as\n> Query_for_list_of_indexables?\n\nOh, I didn't realize that.\n\nLooking at target relation kinds for operations in-depth, I think that\nthe relation list for index creation and the relation list for vacuum\nis different.\n\nQuery_for_list_of_indexables should search for:\n\nRELKIND_RELATION\nRELKIND_PARTITIONED_TABLE\nRELKIND_MATVIEW\n\nwhereas Query_for_list_of_vacuumables should search for:\n\nRELKIND_RELATION\nRELKIND_PARTITIONED_TABLE\nRELKIND_MATVIEW\nRELKIND_TOASTVALUE\n\nAlso, Query_for_list_of_clusterables is further different from the\nabove two lists. It should search for:\n\nRELKIND_RELATION\nRELKIND_MATVIEW\nRELKIND_TOASTVALUE\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 29 Jul 2020 18:41:16 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tab completion for VACUUM of partitioned tables" }, { "msg_contents": "On Wed, Jul 29, 2020 at 06:41:16PM +0900, Masahiko Sawada wrote:\n> whereas Query_for_list_of_vacuumables should search for:\n> \n> RELKIND_RELATION\n> RELKIND_PARTITIONED_TABLE\n> RELKIND_MATVIEW\n> RELKIND_TOASTVALUE\n\nFWIW, I don't think that we should make toast relations suggested to\nthe user at all for any command. This comes down to the same point\nthat we don't have pg_toast in search_path, and going down to this\nlevel of operations is an expert-level mode, not something we should\nrecommend to the average user in psql IMO.\n--\nMichael", "msg_date": "Wed, 29 Jul 2020 20:05:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tab completion for VACUUM of partitioned tables" }, { "msg_contents": "On Wed, Jul 29, 2020 at 03:21:19PM +0900, Michael Paquier wrote:\n> On Wed, Jul 29, 2020 at 01:27:07PM +0900, Masahiko Sawada wrote:\n> > Good catch. 
The patch looks good to me.\n> \n> While this patch is logically correct. I think that we should try to\n> not increase more the number of queries used to scan pg_class\n> depending on a list of relkinds. For example, did you notice that\n> your new Query_for_list_of_vacuumables becomes the same query as\n> Query_for_list_of_indexables? You can make your patch more simple.\n\nI didn't notice. There's an argument for keeping them separate, but as long as\nthere's a #define in between, this is fine, too.\n\nOn Wed, Jul 29, 2020 at 08:05:57PM +0900, Michael Paquier wrote:\n> On Wed, Jul 29, 2020 at 06:41:16PM +0900, Masahiko Sawada wrote:\n> > whereas Query_for_list_of_vacuumables should search for:\n> > \n> > RELKIND_RELATION\n> > RELKIND_PARTITIONED_TABLE\n> > RELKIND_MATVIEW\n> > RELKIND_TOASTVALUE\n> \n> FWIW, I don't think that we should make toast relations suggested to\n> the user at all for any command. This comes down to the same point\n> that we don't have pg_toast in search_path, and going down to this\n> level of operations is an expert-level mode, not something we should\n> recommend to the average user in psql IMO.\n\nRight. Tom's response to that suggestion a couple years ago I thought was\npretty funny (I picture Dr. Claw at his desk using psql tab completion being\npresented with a list of pg_toast.pg_toast_NNNNNN OIDs: \"which TOAST table\nshould I vacuum next..\")\n\nhttps://www.postgresql.org/message-id/14255.1536781029@sss.pgh.pa.us\n|I don't actually think that's a good idea. It's more likely to clutter\n|peoples' completion lists than offer anything they want. Even if someone\n|actually does want to vacuum a toast table, they are not likely to be\n|entering its name via tab completion; they're going to have identified\n|which table they want via some query, and then they'll be doing something\n|like copy-and-paste out of a query result.\n\n-- \nJustin", "msg_date": "Wed, 29 Jul 2020 13:33:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tab completion for VACUUM of partitioned tables" }, { "msg_contents": "\n\nOn 2020/07/30 3:33, Justin Pryzby wrote:\n> On Wed, Jul 29, 2020 at 03:21:19PM +0900, Michael Paquier wrote:\n>> On Wed, Jul 29, 2020 at 01:27:07PM +0900, Masahiko Sawada wrote:\n>>> Good catch. The patch looks good to me.\n>>\n>> While this patch is logically correct. I think that we should try to\n>> not increase more the number of queries used to scan pg_class\n>> depending on a list of relkinds. For example, did you notice that\n>> your new Query_for_list_of_vacuumables becomes the same query as\n>> Query_for_list_of_indexables? You can make your patch more simple.\n> \n> I didn't notice. There's an argument for keeping them separate, but as long as\n> there's a #define in between, this is fine, too.\n> \n> On Wed, Jul 29, 2020 at 08:05:57PM +0900, Michael Paquier wrote:\n>> On Wed, Jul 29, 2020 at 06:41:16PM +0900, Masahiko Sawada wrote:\n>>> whereas Query_for_list_of_vacuumables should search for:\n>>>\n>>> RELKIND_RELATION\n>>> RELKIND_PARTITIONED_TABLE\n>>> RELKIND_MATVIEW\n>>> RELKIND_TOASTVALUE\n>>\n>> FWIW, I don't think that we should make toast relations suggested to\n>> the user at all for any command. This comes down to the same point\n>> that we don't have pg_toast in search_path, and going down to this\n>> level of operations is an expert-level mode, not something we should\n>> recommend to the average user in psql IMO.\n> \n> Right. 
Tom's response to that suggestion a couple years ago I thought was\n> pretty funny (I picture Dr. Claw at his desk using psql tab completion being\n> presented with a list of pg_toast.pg_toast_NNNNNN OIDs: \"which TOAST table\n> should I vacuum next..\")\n> \n> https://www.postgresql.org/message-id/14255.1536781029@sss.pgh.pa.us\n> |I don't actually think that's a good idea. It's more likely to clutter\n> |peoples' completion lists than offer anything they want. Even if someone\n> |actually does want to vacuum a toast table, they are not likely to be\n> |entering its name via tab completion; they're going to have identified\n> |which table they want via some query, and then they'll be doing something\n> |like copy-and-paste out of a query result.\n\nIsn't it better to add the comment explaining why toast tables are\nexcluded from the tab completion for vacuum while they are vacuumable?\nThe patch looks good to me except that.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 30 Jul 2020 08:44:26 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tab completion for VACUUM of partitioned tables" }, { "msg_contents": "On Thu, Jul 30, 2020 at 08:44:26AM +0900, Fujii Masao wrote:\n> Isn't it better to add the comment explaining why toast tables are\n> excluded from the tab completion for vacuum while they are vacuumable?\n\nSounds sensible, still it does not apply only to vacuum. I would go\nas far as just adding a comment at the beginning of the block for\nschema queries:\n\"Never include toast tables in any of those queries to avoid\nunnecessary bloat in the completions.\"\n\n> The patch looks good to me except that.\n\nIndeed. FWIW, I would also adjust the comment on top of\nQuery_for_list_of_indexables to not say \"index creation\", but just\n\"supporting indexing\" instead.\n\nFujii-san, perhaps you would prefer taking care of this patch? I am\nfine to do it if you wish.\n--\nMichael", "msg_date": "Thu, 30 Jul 2020 10:46:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tab completion for VACUUM of partitioned tables" }, { "msg_contents": "\n\nOn 2020/07/30 10:46, Michael Paquier wrote:\n> On Thu, Jul 30, 2020 at 08:44:26AM +0900, Fujii Masao wrote:\n>> Isn't it better to add the comment explaining why toast tables are\n>> excluded from the tab completion for vacuum while they are vacuumable?\n> \n> Sounds sensible, still it does not apply only to vacuum. I would go\n> as far as just adding a comment at the beginning of the block for\n> schema queries:\n\nYes, that seems better.\nBTW, one thing I think a bit strange is that indexes for toast tables\nare included in tab-completion for REINDEX, for example. That is,\n\"REINDEX INDEX<tab>\" displays \"pg_toast.\", and \"REINDEX INDEX pg_toast.<tab>\"\ndisplays indexes for toast tables. Maybe it's better to exclude them,\ntoo. But there seems no simple way to do that.\nSo I'm ok with this current situation.\n\n\n> \"Never include toast tables in any of those queries to avoid\n> unnecessary bloat in the completions.\"\n> \n>> The patch looks good to me except that.\n> \n> Indeed. FWIW, I would also adjust the comment on top of\n> Query_for_list_of_indexables to not say \"index creation\", but just\n> \"supporting indexing\" instead.\n> \n> Fujii-san, perhaps you would prefer taking care of this patch? 
I am\n> fine to do it if you wish.\n\nOf course I'm fine if you work on this patch. So please feel free to do that!\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 30 Jul 2020 12:24:33 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tab completion for VACUUM of partitioned tables" }, { "msg_contents": "On Thu, 30 Jul 2020 at 12:24, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/07/30 10:46, Michael Paquier wrote:\n> > On Thu, Jul 30, 2020 at 08:44:26AM +0900, Fujii Masao wrote:\n> >> Isn't it better to add the comment explaining why toast tables are\n> >> excluded from the tab completion for vacuum while they are vacuumable?\n> >\n> > Sounds sensible, still it does not apply only to vacuum. I would go\n> > as far as just adding a comment at the beginning of the block for\n> > schema queries:\n>\n> Yes, that seems better.\n\nAgreed.\n\n> BTW, one thing I think a bit strange is that indexes for toast tables\n> are included in tab-completion for REINDEX, for example. That is,\n> \"REINDEX INDEX<tab>\" displays \"pg_toast.\", and \"REINDEX INDEX pg_toast.<tab>\"\n> displays indexes for toast tables. Maybe it's better to exclude them,\n> too. But there seems no simple way to do that.\n> So I'm ok with this current situation.\n\nYeah, that's the reason why I mentioned about toast tables. \"VACUUM\n<tab>\" displays \"pg_toast.\" but, \"VACUUM pg_to<tab>\" doesn't\ncomplement anything.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 30 Jul 2020 14:16:04 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tab completion for VACUUM of partitioned tables" }, { "msg_contents": "On Thu, Jul 30, 2020 at 02:16:04PM +0900, Masahiko Sawada wrote:\n> Yeah, that's the reason why I mentioned about toast tables. \"VACUUM\n> <tab>\" displays \"pg_toast.\" but, \"VACUUM pg_to<tab>\" doesn't\n> complement anything.\n\nSo am I OK with this situation. The same is true as well for CLUSTER\nand TRUNCATE, and \"pg_to\" would get completion with the toast tables\nonly if we begin to add RELKIND_TOASTVALUE to the queries. Note that\nthe schema completions come from _complete_from_query() where we would\nneed to be smarter regarding the filtering of pg_namespace rows and I\nhave not looked how to do that, but I feel that it may not be that\ncomplicated.\n\nFor now I have applied the proposed patch.\n--\nMichael", "msg_date": "Thu, 30 Jul 2020 18:08:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tab completion for VACUUM of partitioned tables" } ]
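A note for readers of the tab-completion thread above: the exchange ends with VACUUM/ANALYZE completion gaining partitioned tables while CLUSTER keeps a narrower relkind list, and with Michael hinting the patch can be made simpler because the vacuum list now matches the indexing list. Below is a minimal sketch of that simplification for src/bin/psql/tab-complete.c; it reuses the SchemaQuery fields quoted in the patch earlier in the thread, but the alias-versus-copy choice and the comment wording are illustrative, not the committed text.

/*
 * Sketch only: with partitioned tables added, relations supporting VACUUM
 * and ANALYZE become the same set as those supporting index creation, so
 * one query can serve both instead of keeping a third copy.
 */
#define Query_for_list_of_vacuumables Query_for_list_of_indexables

/* CLUSTER does not take partitioned tables here, so it keeps its own list. */
static const SchemaQuery Query_for_list_of_clusterables = {
	.catname = "pg_catalog.pg_class c",
	.selcondition =
	"c.relkind IN (" CppAsString2(RELKIND_RELATION) ", "
	CppAsString2(RELKIND_MATVIEW) ")",
	.viscondition = "pg_catalog.pg_table_is_visible(c.oid)",
	.namespace = "c.relnamespace",
	.result = "pg_catalog.quote_ident(c.relname)",
};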
[ { "msg_contents": "Hi,\n\nI've attached a patch that implements \\si, \\sm, \\st and \\sr functions \nthat show the CREATE command for indexes, matviews, triggers and tables. \nThe functions are implemented similarly to the existing sf/sv functions \nwith some modifications.\n\nFor triggers, I've decided to change input format to \"table_name TRIGGER \ntrigger_name\", as multiple tables are allowed to have a trigger of the \nsame name. Because we need to verify not only the name of the trigger, \nbut also the name of the table, I've implemented a separate function \nlookup_trigger_oid that takes an additional argument.\n\nTriggers and indexes use pg_catalog.pg_get_triggerdef() and \npg_indexes.indexdef, while tables and matviews have separate queries for \nreconstruction. Get_create_object_cmd also runs three additional queries \nfor tables, to get information on constraints, parents and columns.\n\nThere is also the question, if this functionality should be realised on \nthe server instead of the client, but some people may think that changes \nto the language are \"not the postgres way\". However, server realisation \nmay have some advantages, such as independence from the client and \nserver version.\n\nBest regards,\nAlexandra Pervushina.", "msg_date": "Tue, 28 Jul 2020 20:46:04 +0300", "msg_from": "a.pervushina@postgrespro.ru", "msg_from_op": true, "msg_subject": "psql: add \\si, \\sm, \\st and \\sr functions to show CREATE commands for\n indexes, matviews, triggers and tables" }, { "msg_contents": "On 2020-07-28 20:46, a.pervushina@postgrespro.ru wrote:\n> I've attached a patch that implements \\si, \\sm, \\st and \\sr functions\n> that show the CREATE command for indexes, matviews, triggers and\n> tables. The functions are implemented similarly to the existing sf/sv\n> functions with some modifications.\n> \nTo me these functions seem useful.\nAs for adding them to server side, I don't see a big need for it. It \nfeels more logical to follow the already eatablished pattern for the \n\\s[...] commands.\n\nAbout the patch:\n\n1) There is some code duplication for the exec_command_[sm|si|st|sr] \nfunctions. Plus, it seems weird to separate sm (show matview) from sv \n(show view). Perhaps it would be more convenient to combine some of the \ncode? Maybe by editing the already-existing exec_command_sf_sv() \nfunction.\n\n2) Seeing how \\s and \\e functions were added together, I'm wondering - \nshould there be \\e functions too for any objects affected by this patch?\n\n-- \nAnna Akenteva\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 11 Aug 2020 13:37:10 +0300", "msg_from": "Anna Akenteva <a.akenteva@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: psql: add \\si, \\sm, \\st and \\sr functions to show CREATE commands\n for indexes, matviews, triggers and tables" }, { "msg_contents": "Anna Akenteva wrote 2020-08-11 13:37:\n> About the patch:\n> \n> 1) There is some code duplication for the exec_command_[sm|si|st|sr]\n> functions. Plus, it seems weird to separate sm (show matview) from sv\n> (show view). Perhaps it would be more convenient to combine some of\n> the code? Maybe by editing the already-existing exec_command_sf_sv()\n> function.\n\nI've combined most of the functions into one, as the code was mostly \nduplicated. Had to change the argument from is_func to object type, \nbecause the number of values has increased. 
I've attached a patch with \nthose changes.", "msg_date": "Tue, 18 Aug 2020 02:54:00 +0300", "msg_from": "a.pervushina@postgrespro.ru", "msg_from_op": true, "msg_subject": "Re: psql: add \\si, \\sm, \\st and \\sr functions to show CREATE commands\n for indexes, matviews, triggers and tables" }, { "msg_contents": "a.pervushina@postgrespro.ru writes:\n> [ si_st_sm_sr_v2.patch ]\n\nI hadn't particularly noticed this thread before, but I happened to\nlook through this patch, and I've got to say that this proposed feature\nseems like an absolute disaster from a maintenance standpoint. There\nwill be no value in an \\st command that is only 90% accurate; the produced\nDDL has to be 100% correct. This means that, if we accept this feature,\npsql will have to know everything pg_dump knows about how to construct the\nDDL describing tables, indexes, views, etc. That is a lot of code, and\nit's messy, and it changes nontrivially on a very regular basis. I can't\naccept that we want another copy in psql --- especially one that looks\nnothing like what pg_dump has.\n\nThere've been repeated discussions about somehow extracting pg_dump's\nknowledge into a library that would also be available to other client\nprograms (see e.g. the concurrent thread at [1]). That's quite a tall\norder, which is why it's not happened yet. But I think we really need\nto have something like that before we can accept this feature for psql.\n\nBTW, as an example of why this is far more difficult than it might\nseem at first glance, this patch doesn't even begin to meet the\nexpectation stated at the top of describe.c:\n\n * Support for the various \\d (\"describe\") commands. Note that the current\n * expectation is that all functions in this file will succeed when working\n * with servers of versions 7.4 and up. It's okay to omit irrelevant\n * information for an old server, but not to fail outright.\n\nIt might be okay for this to cut off at 8.0 or so, as I think pg_dump\ndoes, but not to just fail on older servers.\n\nAnother angle, which I'm not even sure how we want to think about it, is\nsecurity. It will not do for \"\\et\" to allow some attacker to replace\nfunction calls appearing in the table's CHECK constraints, for instance.\nSo this means you've got to be very aware of CVE-2018-1058-style attacks.\nOur answer to that for pg_dump has partially depended on restricting the\nsearch_path used at both dump and restore time ... but I don't think \\et\ngets to override the search path that the psql user is using. 
I'm not\nsure what that means in practice but it certainly requires some thought\nbefore we add the feature, not after.\n\nAnyway, I can see the attraction of having psql commands like these,\nbut \"write a bunch of new code that we'll have to maintain\" does not\nseem like a desirable way to get them.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/9df8a3d3-13d2-116d-26ab-6a273c1ed38c%402ndquadrant.com\n\n\n", "msg_date": "Tue, 18 Aug 2020 10:25:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: add \\si, \\sm,\n \\st and \\sr functions to show CREATE commands for indexes, matviews,\n triggers and tables" }, { "msg_contents": "On 18.08.2020 17:25, Tom Lane wrote:\n> a.pervushina@postgrespro.ru writes:\n>> [ si_st_sm_sr_v2.patch ]\n> I hadn't particularly noticed this thread before, but I happened to\n> look through this patch, and I've got to say that this proposed feature\n> seems like an absolute disaster from a maintenance standpoint. There\n> will be no value in an \\st command that is only 90% accurate; the produced\n> DDL has to be 100% correct. This means that, if we accept this feature,\n> psql will have to know everything pg_dump knows about how to construct the\n> DDL describing tables, indexes, views, etc. That is a lot of code, and\n> it's messy, and it changes nontrivially on a very regular basis. I can't\n> accept that we want another copy in psql --- especially one that looks\n> nothing like what pg_dump has.\n>\n> There've been repeated discussions about somehow extracting pg_dump's\n> knowledge into a library that would also be available to other client\n> programs (see e.g. the concurrent thread at [1]). That's quite a tall\n> order, which is why it's not happened yet. But I think we really need\n> to have something like that before we can accept this feature for psql.\n>\n> BTW, as an example of why this is far more difficult than it might\n> seem at first glance, this patch doesn't even begin to meet the\n> expectation stated at the top of describe.c:\n>\n> * Support for the various \\d (\"describe\") commands. Note that the current\n> * expectation is that all functions in this file will succeed when working\n> * with servers of versions 7.4 and up. It's okay to omit irrelevant\n> * information for an old server, but not to fail outright.\n>\n> It might be okay for this to cut off at 8.0 or so, as I think pg_dump\n> does, but not to just fail on older servers.\n>\n> Another angle, which I'm not even sure how we want to think about it, is\n> security. It will not do for \"\\et\" to allow some attacker to replace\n> function calls appearing in the table's CHECK constraints, for instance.\n> So this means you've got to be very aware of CVE-2018-1058-style attacks.\n> Our answer to that for pg_dump has partially depended on restricting the\n> search_path used at both dump and restore time ... but I don't think \\et\n> gets to override the search path that the psql user is using. 
I'm not\n> sure what that means in practice but it certainly requires some thought\n> before we add the feature, not after.\n>\n> Anyway, I can see the attraction of having psql commands like these,\n> but \"write a bunch of new code that we'll have to maintain\" does not\n> seem like a desirable way to get them.\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/flat/9df8a3d3-13d2-116d-26ab-6a273c1ed38c%402ndquadrant.com\n>\n>\n\nSince there has been no activity on this thread since before the CF and\nno response from the author I have marked this \"returned with feedback\".\n\nAlexandra, feel free to resubmit it to the next commitfest, when you \nhave time to address the issues raised in the review.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Tue, 24 Nov 2020 13:04:58 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: psql: add \\si, \\sm, \\st and \\sr functions to show CREATE commands\n for indexes, matviews, triggers and tables" } ]
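For context on the implementation detail Alexandra mentions in the thread above (changing the shared \sf/\sv code path's is_func argument into an object type): a rough sketch of that shape, assuming psql's existing EditableObjectType enum in src/bin/psql/command.c. The added enum members and the prototype change are illustrative names only, not necessarily the identifiers used in the attached patch, and per the follow-ups the feature was returned with feedback rather than committed.

/* Illustrative sketch: widening the \sf/\sv object-kind switch. */
typedef enum EditableObjectType
{
	EditableFunction,
	EditableView,
	EditableIndex,				/* hypothetical addition for \si */
	EditableMatView,			/* hypothetical addition for \sm */
	EditableTrigger,			/* hypothetical addition for \st */
	EditableTable				/* hypothetical addition for \sr */
} EditableObjectType;

/* One shared handler branching on the kind instead of a boolean flag. */
static backslashResult exec_command_sf_sv(PsqlScanState scan_state,
										  bool active_branch,
										  const char *cmd,
										  EditableObjectType type);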
[ { "msg_contents": "Hi,hackers\r\n\r\nWhen I was using PostgresQL, I noticed that the output of the Japanese messages was inconsistent with the English messages.\r\nThe Japanese message needs to be modified,so I made the patch.\r\n\r\n\r\nSee the attachment for the patch.\r\n\r\n\r\nBest regards", "msg_date": "Wed, 29 Jul 2020 08:42:27 +0000", "msg_from": "\"Lu, Chenyang\" <lucy.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "[PATCH]Fix ja.po error" }, { "msg_contents": "On Wed, Jul 29, 2020 at 08:42:27AM +0000, Lu, Chenyang wrote:\n> When I was using PostgreSQL, I noticed that the output of the\n> Japanese messages was inconsistent with the English messages. \n> The Japanese message needs to be modified,so I made the patch.\n\nIndeed, good catch. This needs to be applied to the translation\nrepository first though:\nhttps://git.postgresql.org/gitweb/?p=pgtranslation/messages.git;a=summary\n\nI am adding Alvaro and Peter in CC as they take care of that usually\n(I don't think I have an access to this repo).\n--\nMichael", "msg_date": "Wed, 29 Jul 2020 18:24:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH]Fix ja.po error" }, { "msg_contents": "Ping: sorry, did Alvaro and Peter forget this email?( Maybe didn't see this email~ ), I found that the patch of ja.po has not been applied to the Translation Repository.\r\n\r\n-----Original Message-----\r\nFrom: Michael Paquier <michael@paquier.xyz> \r\nSent: Wednesday, July 29, 2020 5:25 PM\r\nTo: Lu, Chenyang/陆 晨阳 <lucy.fnst@cn.fujitsu.com>\r\nCc: pgsql-hackers@postgresql.org; alvherre@2ndquadrant.com; peter.eisentraut@2ndquadrant.com\r\nSubject: Re: [PATCH]Fix ja.po error\r\n\r\nOn Wed, Jul 29, 2020 at 08:42:27AM +0000, Lu, Chenyang wrote:\r\n> When I was using PostgreSQL, I noticed that the output of the Japanese \r\n> messages was inconsistent with the English messages.\r\n> The Japanese message needs to be modified,so I made the patch.\r\n\r\nIndeed, good catch. This needs to be applied to the translation repository first though:\r\nhttps://git.postgresql.org/gitweb/?p=pgtranslation/messages.git;a=summary\r\n\r\nI am adding Alvaro and Peter in CC as they take care of that usually (I don't think I have an access to this repo).\r\n--\r\nMichael\r\n\n\n", "msg_date": "Wed, 19 Aug 2020 03:32:51 +0000", "msg_from": "\"Lu, Chenyang\" <lucy.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [PATCH]Fix ja.po error" }, { "msg_contents": "On 2020-Aug-19, Lu, Chenyang wrote:\n\n> Ping: sorry, did Alvaro and Peter forget this email?( Maybe didn't see this email~ ), I found that the patch of ja.po has not been applied to the Translation Repository.\n\nApologies. I have pushed this to all branches of the translation repo\nnow.\n\nThe bogus 'msgstr' was not identical in Postgres 11 and back -- I think\nthe only difference was one extra whitespace. I suppose that's not\nimportant, so I used the translation as provided with no change.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 4 Sep 2020 11:04:46 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH]Fix ja.po error" } ]
[ { "msg_contents": "Hoi hackers,\n\nWe've been using the pg_stat_statements extension to get an idea of the\nqueries used in the database, but the table is being filled with entries\nlike:\n\nSAVEPOINT sa_savepoint_NNN;\nRELEASE SAVEPOINT sa_savepoint_NNN;\nDECLARE \"c_7f9451c4dcc0_5\" CURSOR WITHOUT HOLD ...\nFETCH FORWARD 250 FROM \"c_7f9451b03908_5\"\n\nSince the unique id is different for each query, the aggregation does\nnothing and there are quite a lot of these drowning out the normal queries\n(yes, I'm aware this is an issue of itself). The only way to deal with this\nis \"pg_stat_statements.track_utility=off\". However, it occurs to me that if\nyou just tracked the tags rather than the full query text you could at\nleast track the number of such queries and how much time they take. So the\nabove queries would be tracked under SAVEPOINT, RELEASE, DECLARE CURSOR and\n(I guess) FETCH respectively. But it would also catch DDL.\n\nDoes this sound like something for which a patch would be accepted?\n\nHave a nice day,\n-- \nMartijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/\n\nHoi hackers,We've been using the pg_stat_statements extension to get an idea of the queries used in the database, but the table is being filled with entries like:SAVEPOINT sa_savepoint_NNN;RELEASE SAVEPOINT sa_savepoint_NNN;DECLARE \"c_7f9451c4dcc0_5\" CURSOR WITHOUT HOLD ...FETCH FORWARD 250 FROM \"c_7f9451b03908_5\"             Since the unique id is different for each query, the aggregation does nothing and there are quite a lot of these drowning out the normal queries (yes, I'm aware this is an issue of itself). The only way to deal with this is \"pg_stat_statements.track_utility=off\". However, it occurs to me that if you just tracked the tags rather than the full query text you could at least track the number of such queries and how much time they take. So the above queries would be tracked under SAVEPOINT, RELEASE, DECLARE CURSOR and (I guess) FETCH respectively. But it would also catch DDL.Does this sound like something for which a patch would be accepted?Have a nice day,-- Martijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/", "msg_date": "Wed, 29 Jul 2020 11:24:57 +0200", "msg_from": "Martijn van Oosterhout <kleptog@gmail.com>", "msg_from_op": true, "msg_subject": "IDEA: pg_stat_statements tracking utility statements by tag?" }, { "msg_contents": "\n\nOn 2020/07/29 18:24, Martijn van Oosterhout wrote:\n> Hoi hackers,\n> \n> We've been using the pg_stat_statements extension to get an idea of the queries used in the database, but the table is being filled with entries like:\n> \n> SAVEPOINT sa_savepoint_NNN;\n> RELEASE SAVEPOINT sa_savepoint_NNN;\n> DECLARE \"c_7f9451c4dcc0_5\" CURSOR WITHOUT HOLD ...\n> FETCH FORWARD 250 FROM \"c_7f9451b03908_5\"\n> \n> Since the unique id is different for each query, the aggregation does nothing and there are quite a lot of these drowning out the normal queries (yes, I'm aware this is an issue of itself). The only way to deal with this is \"pg_stat_statements.track_utility=off\". However, it occurs to me that if you just tracked the tags rather than the full query text you could at least track the number of such queries and how much time they take. So the above queries would be tracked under SAVEPOINT, RELEASE, DECLARE CURSOR and (I guess) FETCH respectively. 
But it would also catch DDL.\n> \n> Does this sound like something for which a patch would be accepted?\n\nOr, we should extend the existing query normalization to handle also DDL?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 29 Jul 2020 21:42:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: IDEA: pg_stat_statements tracking utility statements by tag?" }, { "msg_contents": "On Wed, Jul 29, 2020 at 2:42 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/07/29 18:24, Martijn van Oosterhout wrote:\n> > Hoi hackers,\n> >\n> > We've been using the pg_stat_statements extension to get an idea of the queries used in the database, but the table is being filled with entries like:\n> >\n> > SAVEPOINT sa_savepoint_NNN;\n> > RELEASE SAVEPOINT sa_savepoint_NNN;\n> > DECLARE \"c_7f9451c4dcc0_5\" CURSOR WITHOUT HOLD ...\n> > FETCH FORWARD 250 FROM \"c_7f9451b03908_5\"\n> >\n> > Since the unique id is different for each query, the aggregation does nothing and there are quite a lot of these drowning out the normal queries (yes, I'm aware this is an issue of itself). The only way to deal with this is \"pg_stat_statements.track_utility=off\". However, it occurs to me that if you just tracked the tags rather than the full query text you could at least track the number of such queries and how much time they take. So the above queries would be tracked under SAVEPOINT, RELEASE, DECLARE CURSOR and (I guess) FETCH respectively. But it would also catch DDL.\n> >\n> > Does this sound like something for which a patch would be accepted?\n>\n> Or, we should extend the existing query normalization to handle also DDL?\n\n+1, introducing DDL normalization seems like a better way to go in the\nlong run. Defining what should and shouldn't be normalized can be\ntricky though.\n\n\n", "msg_date": "Wed, 29 Jul 2020 15:40:16 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: IDEA: pg_stat_statements tracking utility statements by tag?" }, { "msg_contents": "On Wed, 29 Jul 2020 at 15:40, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Wed, Jul 29, 2020 at 2:42 PM Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n> >\n> >\n> > Or, we should extend the existing query normalization to handle also DDL?\n>\n> +1, introducing DDL normalization seems like a better way to go in the\n> long run. Defining what should and shouldn't be normalized can be\n> tricky though.\n>\n\nIn principle, the only thing that really needs to be normalised is\nSAVEPOINT/CURSOR names which are essentially random strings which have no\neffect on the result. Most other stuff is material to the query.\n\nThat said, I think \"aggregate by tag\" has value all by itself. Being able\nto collapse all CREATE TABLES into a single line can be useful in some\nsituations.\n\nIdeally the results of FETCH \"cursor\" should be combined with the DECLARE,\nbut I really don't know how to go about that.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/\n\nOn Wed, 29 Jul 2020 at 15:40, Julien Rouhaud <rjuju123@gmail.com> wrote:On Wed, Jul 29, 2020 at 2:42 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n> Or, we should extend the existing query normalization to handle also DDL?\n\n+1, introducing DDL normalization seems like a better way to go in the\nlong run.  
Defining what should and shouldn't be normalized can be\ntricky though.\nIn principle, the only thing that really needs to be normalised is SAVEPOINT/CURSOR names which are essentially random strings which have no effect on the result. Most other stuff is material to the query.That said, I think \"aggregate by tag\" has value all by itself. Being able to collapse all CREATE TABLES into a single line can be useful in some situations.Ideally the results of FETCH \"cursor\" should be combined with the DECLARE, but I really don't know how to go about that.Have a nice day,-- Martijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/", "msg_date": "Wed, 29 Jul 2020 17:29:42 +0200", "msg_from": "Martijn van Oosterhout <kleptog@gmail.com>", "msg_from_op": true, "msg_subject": "Re: IDEA: pg_stat_statements tracking utility statements by tag?" }, { "msg_contents": "On Wed, Jul 29, 2020 at 5:29 PM Martijn van Oosterhout\n<kleptog@gmail.com> wrote:\n>\n>\n> On Wed, 29 Jul 2020 at 15:40, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> On Wed, Jul 29, 2020 at 2:42 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> >\n>> >\n>> > Or, we should extend the existing query normalization to handle also DDL?\n>>\n>> +1, introducing DDL normalization seems like a better way to go in the\n>> long run. Defining what should and shouldn't be normalized can be\n>> tricky though.\n>\n>\n> In principle, the only thing that really needs to be normalised is SAVEPOINT/CURSOR names which are essentially random strings which have no effect on the result. Most other stuff is material to the query.\n>\n> That said, I think \"aggregate by tag\" has value all by itself. Being able to collapse all CREATE TABLES into a single line can be useful in some situations.\n\nThere's at least PREPARE TRANSACTION / COMMIT PREPARED / ROLLBACK\nPREPARED that should be normalized too. I also don't think that we\nreally want to have different entries for begin / Begin / BEGIN /\nbEgin and similar for many other commands, as the hash is computed\nbased on the query text.\n\n\n", "msg_date": "Wed, 29 Jul 2020 18:35:41 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: IDEA: pg_stat_statements tracking utility statements by tag?" }, { "msg_contents": "On Wed, Jul 29, 2020 at 06:35:41PM +0200, Julien Rouhaud wrote:\n> There's at least PREPARE TRANSACTION / COMMIT PREPARED / ROLLBACK\n> PREPARED that should be normalized too. I also don't think that we\n> really want to have different entries for begin / Begin / BEGIN /\n> bEgin and similar for many other commands, as the hash is computed\n> based on the query text.\n\nHmm. Do we really want to those commands fully normalized all the\ntime? There may be applications that care about the stats of some\ncommands that are for example prefixed the same way and would prefer\ngroup those things together. By fully normalizing those commands all\nthe time, we would lose this option.\n\nAn example. The ODBC driver uses its own grammar for internal\nsavepoint names, aka _EXEC_SVP_%p. If you mix that with a second\napplication that has its own naming policy for savepoints it would not\nbe possible anymore to make the difference in the stats between what\none or the other do.\n--\nMichael", "msg_date": "Thu, 30 Jul 2020 10:54:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: IDEA: pg_stat_statements tracking utility statements by tag?" 
}, { "msg_contents": "On Thu, Jul 30, 2020 at 3:54 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 29, 2020 at 06:35:41PM +0200, Julien Rouhaud wrote:\n> > There's at least PREPARE TRANSACTION / COMMIT PREPARED / ROLLBACK\n> > PREPARED that should be normalized too. I also don't think that we\n> > really want to have different entries for begin / Begin / BEGIN /\n> > bEgin and similar for many other commands, as the hash is computed\n> > based on the query text.\n>\n> Hmm. Do we really want to those commands fully normalized all the\n> time? There may be applications that care about the stats of some\n> commands that are for example prefixed the same way and would prefer\n> group those things together. By fully normalizing those commands all\n> the time, we would lose this option.\n>\n> An example. The ODBC driver uses its own grammar for internal\n> savepoint names, aka _EXEC_SVP_%p. If you mix that with a second\n> application that has its own naming policy for savepoints it would not\n> be possible anymore to make the difference in the stats between what\n> one or the other do.\n\nBut if you have an OLTP application that uses ODBC, won't you already\nhave 80+% of pgss entries being savepoint orders, which is really not\nhelpful at all? We'd technically lose the ability to group such\ncommands together, but in most cases the current behavior is quite\nharmful.\n\n\n", "msg_date": "Thu, 30 Jul 2020 06:57:31 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: IDEA: pg_stat_statements tracking utility statements by tag?" } ]
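To make the "track by tag" idea in the thread above concrete: pg_stat_statements derives the entry for a utility statement from its full text, which is why each differently-named SAVEPOINT or cursor gets its own row. A minimal sketch of hashing the command tag instead follows; it assumes the CreateCommandTag()/GetCommandTagName() and hash_any_extended() helpers, the function name is made up for illustration, and it deliberately ignores the normalization questions (ODBC-style savepoint prefixes, case-only differences, grouping policy) debated above.

#include "postgres.h"
#include "common/hashfn.h"
#include "tcop/cmdtag.h"
#include "tcop/utility.h"

/*
 * Hypothetical helper: build a query id for a utility statement from its
 * command tag ("SAVEPOINT", "FETCH", "DECLARE CURSOR", ...) so that
 * SAVEPOINT sa_savepoint_1 and SAVEPOINT sa_savepoint_2 fall into one entry.
 */
static uint64
utility_tag_query_id(Node *parsetree)
{
	const char *tag = GetCommandTagName(CreateCommandTag(parsetree));

	return DatumGetUInt64(hash_any_extended((const unsigned char *) tag,
											strlen(tag), 0));
}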
[ { "msg_contents": "Is there a reason that HyperLogLog doesn't use pg_leftmost_one_pos32()?\n\nI tried the following patch and some brief performance tests seem to\nshow an improvement.\n\nThis came up because my recent commit 9878b643 uses HLL for estimating\nthe cardinality of spill files, which solves a few annoyances with\noverpartitioning[1]. I think it's overall an improvement, but\naddHyperLogLog() itself seemed to show up as a cost, so it can hurt\nspilled-but-still-in-memory cases. I'd also like to backpatch this to\n13 (as I already did for 9878b643), if that's acceptable.\n\nRegards,\n\tJeff Davis\n\n[1] \nhttps://www.postgresql.org/message-id/CAH2-Wznidojad-zbObnFOzDA5RTCS0JLsqcZkDNu+ou1NGYQYQ@mail.gmail.com", "msg_date": "Wed, 29 Jul 2020 10:07:51 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "HyperLogLog.c and pg_leftmost_one_pos32()" }, { "msg_contents": "On Wed, Jul 29, 2020 at 10:08 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> Is there a reason that HyperLogLog doesn't use pg_leftmost_one_pos32()?\n\nYes: HyperLogLog predates pg_leftmost_one_pos32().\n\n> I tried the following patch and some brief performance tests seem to\n> show an improvement.\n\nMakes sense.\n\nHow did you test this? What kind of difference are we talking about? I\nported this code from the upstream C++ as part of the original\nabbreviated keys commit. Note that the cardinality of abbreviated keys\nare displayed when you set \"trace_sort = on\".\n\n> This came up because my recent commit 9878b643 uses HLL for estimating\n> the cardinality of spill files, which solves a few annoyances with\n> overpartitioning[1].\n\nI think that you should change back the rhs() variable names to match\nHyperLogLog upstream (as well as the existing rhs() comments).\n\n> I think it's overall an improvement, but\n> addHyperLogLog() itself seemed to show up as a cost, so it can hurt\n> spilled-but-still-in-memory cases. I'd also like to backpatch this to\n> 13 (as I already did for 9878b643), if that's acceptable.\n\nI still wonder if it was ever necessary to add HLL to abbreviated\nkeys. It only served to avoid some pretty narrow worse cases, at the\nexpense of typical cases. Given that the existing users of HLL are\npretty narrow, and given the importance of preserving the favorable\nperformance characteristics of hash aggregate, I'm inclined to agree\nthat it's worth backpatching to 13 now. Assuming it is a really\nmeasurable cost in practice.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 29 Jul 2020 17:32:55 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: HyperLogLog.c and pg_leftmost_one_pos32()" }, { "msg_contents": "On Wed, 2020-07-29 at 17:32 -0700, Peter Geoghegan wrote:\n> How did you test this? What kind of difference are we talking about?\n\nEssentially:\n initHyperLogLog(&hll, 5)\n for i in 0 .. one billion\n addHyperLogLog(&hll, hash(i))\n estimateHyperLogLog\n\nThe numbers are the same regardless of bwidth.\n\nBefore my patch, it takes about 15.6s. 
After my patch, it takes about\n6.6s, so it's more than a 2X speedup (including the hash calculation).\n\nAs a part of HashAgg, when I test the worst case, it goes from a 4-5%\npenalty to ~1% (within noise).\n\n> I think that you should change back the rhs() variable names to match\n> HyperLogLog upstream (as well as the existing rhs() comments).\n\nDone.\n\n> > I think it's overall an improvement, but\n> > addHyperLogLog() itself seemed to show up as a cost, so it can hurt\n> > spilled-but-still-in-memory cases. I'd also like to backpatch this\n> > to\n> > 13 (as I already did for 9878b643), if that's acceptable.\n> \n> I still wonder if it was ever necessary to add HLL to abbreviated\n> keys. It only served to avoid some pretty narrow worse cases, at the\n> expense of typical cases. Given that the existing users of HLL are\n> pretty narrow, and given the importance of preserving the favorable\n> performance characteristics of hash aggregate, I'm inclined to agree\n> that it's worth backpatching to 13 now. Assuming it is a really\n> measurable cost in practice.\n\nYes, the difference (at least in a tight loop, on my machine) is not\nsubtle. I went ahead and committed and backpatched.\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n", "msg_date": "Thu, 30 Jul 2020 09:21:23 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: HyperLogLog.c and pg_leftmost_one_pos32()" }, { "msg_contents": "On Thu, Jul 30, 2020 at 09:21:23AM -0700, Jeff Davis wrote:\n>On Wed, 2020-07-29 at 17:32 -0700, Peter Geoghegan wrote:\n>> How did you test this? What kind of difference are we talking about?\n>\n>Essentially:\n> initHyperLogLog(&hll, 5)\n> for i in 0 .. one billion\n> addHyperLogLog(&hll, hash(i))\n> estimateHyperLogLog\n>\n>The numbers are the same regardless of bwidth.\n>\n>Before my patch, it takes about 15.6s. After my patch, it takes about\n>6.6s, so it's more than a 2X speedup (including the hash calculation).\n>\n\nWow. That's a huge improvements.\n\nHow does the whole test (data + query) look like? Is it particularly\nrare / special case, or something reasonable to expect in practice?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 30 Jul 2020 19:16:19 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: HyperLogLog.c and pg_leftmost_one_pos32()" }, { "msg_contents": "On Thu, 2020-07-30 at 19:16 +0200, Tomas Vondra wrote:\n> > Essentially:\n> > initHyperLogLog(&hll, 5)\n> > for i in 0 .. one billion\n> > addHyperLogLog(&hll, hash(i))\n> > estimateHyperLogLog\n> > \n> > The numbers are the same regardless of bwidth.\n> > \n> > Before my patch, it takes about 15.6s. After my patch, it takes\n> > about\n> > 6.6s, so it's more than a 2X speedup (including the hash\n> > calculation).\n> > \n> \n> Wow. That's a huge improvements.\n\nTo be clear: the 2X+ speedup was on the tight loop test.\n\n> How does the whole test (data + query) look like? 
Is it particularly\n> rare / special case, or something reasonable to expect in practice?\n\nThe whole-query test was:\n\nconfig:\n shared_buffers=8GB\n jit = off\n max_parallel_workers_per_gather=0\n\nsetup:\n create table t_1m_20(i int);\n vacuum (freeze, analyze) t_1m_20;\n insert into t_1m_20 select (random()*1000000)::int4\n from generate_series(1,20000000);\n\nquery:\n set work_mem='2048kB';\n SELECT pg_prewarm('t_1m_20', 'buffer');\n\n -- median of the three runs\n select distinct i from t_1m_20 offset 10000000;\n select distinct i from t_1m_20 offset 10000000;\n select distinct i\nfrom t_1m_20 offset 10000000;\n\nresults:\n f2130e77 (before using HLL): 6787ms \n f1af75c5 (before my recent commit): 7170ms\n fd734f38 (master now): 6990ms\n\nMy previous results before I committed the patch (and therefore not on\nthe same exact SHA1s) were 6812, 7158, and 6898. So my most recent\nbatch of results is slightly worse, but the most recent commit\n(fd734f38) still does show an improvement of a couple percent.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 30 Jul 2020 11:25:06 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: HyperLogLog.c and pg_leftmost_one_pos32()" } ]
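For reference on what switching HyperLogLog to pg_leftmost_one_pos32() looks like: rho() reports the position of the leftmost 1-bit of the hashed value (counted from the left), and the pre-patch code found it by shifting one bit at a time. A sketch of the replacement is below, assuming the pg_leftmost_one_pos32() helper from port/pg_bitutils.h; variable names and the exact capping style are an illustration of the idea, not a verbatim copy of the committed hyperloglog.c change.

#include "port/pg_bitutils.h"

/*
 * Position of the leftmost set bit of x, 1-based from the left, capped at
 * b + 1 for the b bits HLL actually inspects.  pg_leftmost_one_pos32()
 * returns the 0-based bit index from the least-significant end, hence the
 * "32 -" below; x == 0 must be handled separately since the helper is
 * undefined for zero input.
 */
static inline uint8
rho(uint32 x, uint8 b)
{
	uint8		j;

	if (x == 0)
		return b + 1;

	j = 32 - pg_leftmost_one_pos32(x);
	if (j > b)
		j = b + 1;

	return j;
}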
[ { "msg_contents": "Hi,\n\nJust found a minor error in source code comment.\nsrc/include/executor/instrument.h\n\nAttached is the fix.\n\n-\tlong\t\tlocal_blks_dirtied; /* # of shared blocks dirtied */\n+\tlong\t\tlocal_blks_dirtied; /* # of local blocks dirtied */\n\n\nRegards,\nKirk Jamison", "msg_date": "Thu, 30 Jul 2020 08:03:09 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "Fix minor source code comment mistake" }, { "msg_contents": "On Thu, Jul 30, 2020 at 08:03:09AM +0000, k.jamison@fujitsu.com wrote:\n> Just found a minor error in source code comment.\n> src/include/executor/instrument.h\n> \n> Attached is the fix.\n> \n> -\tlong\t\tlocal_blks_dirtied; /* # of shared blocks dirtied */\n> +\tlong\t\tlocal_blks_dirtied; /* # of local blocks dirtied */\n\nIndeed. Let's fix this.\n--\nMichael", "msg_date": "Thu, 30 Jul 2020 17:57:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix minor source code comment mistake" }, { "msg_contents": "On Thu, Jul 30, 2020 at 05:57:40PM +0900, Michael Paquier wrote:\n> Indeed. Let's fix this.\n\nAnd done.\n--\nMichael", "msg_date": "Fri, 31 Jul 2020 14:34:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix minor source code comment mistake" } ]
[ { "msg_contents": "Hi\n\nI’d like to add icu/openssl support to my postgresql build on windows\n\ndocumentation says that I have to modify config.pl file, however it's not clear what exactly I have to do\n\nconfig-default.pl for example has the following line\n\n icu => undef, # --with-icu=<path>\n\nso, if I want to add icu support what exactly should I do?\n\nshould I replace undef with path?\n\n\n icu => <path_to_icu_install_area>,\n\nis it correct?\n\nif it’s correct does build support UNC paths?\n\nthanks in advance\n\nDimitry Markman\nDmitry Markman\n\n\nHiI’d like to add icu/openssl support to my postgresql build on windowsdocumentation says that I have to modify config.pl file, however it's not clear what exactly I have to doconfig-default.pl for example has the following line icu       => undef,    # --with-icu=<path>so, if I want to add icu support what exactly should I do?should I replace undef with path? icu       => <path_to_icu_install_area>,is it correct?if it’s correct does build support UNC paths?thanks in advanceDimitry MarkmanDmitry Markman", "msg_date": "Thu, 30 Jul 2020 06:55:28 -0400", "msg_from": "Dmitry Markman <dmarkman@mac.com>", "msg_from_op": true, "msg_subject": "windows config.pl question" }, { "msg_contents": "On Thu, Jul 30, 2020 at 06:55:28AM -0400, Dmitry Markman wrote:\n> icu => <path_to_icu_install_area>,\n> \n> is it correct?\n\nExactly.\n\n> if it’s correct does build support UNC paths?\n\nYes, these work. One take to be aware of is that the quoting of the\npaths can be annoying. FWIW, I just use single quotes with normal\nslashes as that's a no-brainer, say:\nopenssl => 'C:/OpenSSL-hoge/',\n\nIf you are able to break the scripts with a given path, this would\nlikely be a bug we should address. For example, we had lately\ncomplains about the build scripts breaking once you inserted spaces in\nthe python or OpenSSL paths (see beb2516 and ad7595b).\n--\nMichael", "msg_date": "Fri, 31 Jul 2020 10:59:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: windows config.pl question" }, { "msg_contents": "Hi Michael, thanks a lot\n\nI figured it out, UNC works indeed\n\nhowever I found at least 2 problems (at least in our 3p harness)\n\n1. in our configuration openssl executable went to lib (I don’t know why), so I had to change line in Configure script\nand point to openssl.exe file. Sure I probably can change our makefile to create bin directory and put openssl.exe there\nbut it’s not my file, maybe later\n\n2. if PostgreSQL is situated on network drive that mapped to say disk z:, then build script failed:\n\nZ:\\3p\\derived\\win64\\PostgreSQL\\source\\src\\tools\\msvc>build\nDetected hardware platform: Win32\nGenerating win32ver.rc for src/backend\nGenerating win32ver.rc for src/timezone\nGenerating win32ver.rc for src/backend/snowball\nGenerating win32ver.rc for src/pl/plpgsql/src\nGenerating win32ver.rc for src/backend/replication/libpqwalreceiver\nGenerating win32ver.rc for src/backend/replication/pgoutput\nGenerating win32ver.rc for src/interfaces/ecpg/pgtypeslib\n\n. . . . . . . . . . . . \n\nBuilding the projects in this solution one at a time. To enable parallel build, please add the \"/m\" switch.\nBuild started 7/30/2020 5:52:12 PM.\nProject \"Z:\\3p\\derived\\win64\\PostgreSQL\\source\\pgsql.sln\" on node 1 (default targets).\nZ:\\3p\\derived\\win64\\PostgreSQL\\source\\pgsql.sln.metaproj : error MSB4126: The specified solution configuration \"Release\n|x64\" is invalid. 
Please specify a valid solution configuration using the Configuration and Platform properties (e.g. M\nSBuild.exe Solution.sln /p:Configuration=Debug /p:Platform=\"Any CPU\") or leave those properties blank to use the defaul\nt solution configuration. [Z:\\3p\\derived\\win64\\PostgreSQL\\source\\pgsql.sln]\nDone Building Project \"Z:\\3p\\derived\\win64\\PostgreSQL\\source\\pgsql.sln\" (default targets) -- FAILED.\n\n\nBuild FAILED.\n\n\"Z:\\3p\\derived\\win64\\PostgreSQL\\source\\pgsql.sln\" (default target) (1) ->\n(ValidateSolutionConfiguration target) ->\n Z:\\3p\\derived\\win64\\PostgreSQL\\source\\pgsql.sln.metaproj : error MSB4126: The specified solution configuration \"Relea\nse|x64\" is invalid. Please specify a valid solution configuration using the Configuration and Platform properties (e.g.\n MSBuild.exe Solution.sln /p:Configuration=Debug /p:Platform=\"Any CPU\") or leave those properties blank to use the defa\nult solution configuration. [Z:\\3p\\derived\\win64\\PostgreSQL\\source\\pgsql.sln]\n\n 0 Warning(s)\n 1 Error(s)\n\n\nTime Elapsed 00:00:00.37\n\nZ:\\3p\\derived\\win64\\PostgreSQL\\source\\src\\tools\\msvc>\n\n\nthe same works just fine if it’s on c: drive\n\nall PostgreSQL distribution is in the Z:\\3p\\derived\\win64\\PostgreSQL\\source folder\n\n\n\nnetwork UNC path is mapped to Z:\n\nthanks again for your help\n\n dm\n\n\n\n> On Jul 30, 2020, at 9:59 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Jul 30, 2020 at 06:55:28AM -0400, Dmitry Markman wrote:\n>> icu => <path_to_icu_install_area>,\n>> \n>> is it correct?\n> \n> Exactly.\n> \n>> if it’s correct does build support UNC paths?\n> \n> Yes, these work. One take to be aware of is that the quoting of the\n> paths can be annoying. FWIW, I just use single quotes with normal\n> slashes as that's a no-brainer, say:\n> openssl => 'C:/OpenSSL-hoge/',\n> \n> If you are able to break the scripts with a given path, this would\n> likely be a bug we should address. For example, we had lately\n> complains about the build scripts breaking once you inserted spaces in\n> the python or OpenSSL paths (see beb2516 and ad7595b).\n> --\n> Michael\n\n\n\n", "msg_date": "Thu, 30 Jul 2020 22:25:46 -0400", "msg_from": "Dmitry Markman <dmarkman@mac.com>", "msg_from_op": true, "msg_subject": "Re: windows config.pl question" }, { "msg_contents": "sorry I meant file src/tools/msvc/Solution.pm\n\n\nroutine sub GetOpenSSLVersion\n\nhas the follwing line:\n\nqq(\"$self->{options}->{openssl}\\\\bin\\\\openssl.exe\" version 2>&1);\n\nin our distribution openssl.exe isn’t in the $self->{options}->{openssl}\\bin\\ location\n\ndm\n\n\n\n\n> On Jul 30, 2020, at 10:25 PM, Dmitry Markman <dmarkman@mac.com> wrote:\n> \n> Hi Michael, thanks a lot\n> \n> I figured it out, UNC works indeed\n> \n> however I found at least 2 problems (at least in our 3p harness)\n> \n> 1. in our configuration openssl executable went to lib (I don’t know why), so I had to change line in Configure script\n> and point to openssl.exe file. Sure I probably can change our makefile to create bin directory and put openssl.exe there\n> but it’s not my file, maybe later\n> \n> 2. 
if PostgreSQL is situated on network drive that mapped to say disk z:, then build script failed:\n> \n> Z:\\3p\\derived\\win64\\PostgreSQL\\source\\src\\tools\\msvc>build\n> Detected hardware platform: Win32\n> Generating win32ver.rc for src/backend\n> Generating win32ver.rc for src/timezone\n> Generating win32ver.rc for src/backend/snowball\n> Generating win32ver.rc for src/pl/plpgsql/src\n> Generating win32ver.rc for src/backend/replication/libpqwalreceiver\n> Generating win32ver.rc for src/backend/replication/pgoutput\n> Generating win32ver.rc for src/interfaces/ecpg/pgtypeslib\n> \n> . . . . . . . . . . . . \n> \n> Building the projects in this solution one at a time. To enable parallel build, please add the \"/m\" switch.\n> Build started 7/30/2020 5:52:12 PM.\n> Project \"Z:\\3p\\derived\\win64\\PostgreSQL\\source\\pgsql.sln\" on node 1 (default targets).\n> Z:\\3p\\derived\\win64\\PostgreSQL\\source\\pgsql.sln.metaproj : error MSB4126: The specified solution configuration \"Release\n> |x64\" is invalid. Please specify a valid solution configuration using the Configuration and Platform properties (e.g. M\n> SBuild.exe Solution.sln /p:Configuration=Debug /p:Platform=\"Any CPU\") or leave those properties blank to use the defaul\n> t solution configuration. [Z:\\3p\\derived\\win64\\PostgreSQL\\source\\pgsql.sln]\n> Done Building Project \"Z:\\3p\\derived\\win64\\PostgreSQL\\source\\pgsql.sln\" (default targets) -- FAILED.\n> \n> \n> Build FAILED.\n> \n> \"Z:\\3p\\derived\\win64\\PostgreSQL\\source\\pgsql.sln\" (default target) (1) ->\n> (ValidateSolutionConfiguration target) ->\n> Z:\\3p\\derived\\win64\\PostgreSQL\\source\\pgsql.sln.metaproj : error MSB4126: The specified solution configuration \"Relea\n> se|x64\" is invalid. Please specify a valid solution configuration using the Configuration and Platform properties (e.g.\n> MSBuild.exe Solution.sln /p:Configuration=Debug /p:Platform=\"Any CPU\") or leave those properties blank to use the defa\n> ult solution configuration. [Z:\\3p\\derived\\win64\\PostgreSQL\\source\\pgsql.sln]\n> \n> 0 Warning(s)\n> 1 Error(s)\n> \n> \n> Time Elapsed 00:00:00.37\n> \n> Z:\\3p\\derived\\win64\\PostgreSQL\\source\\src\\tools\\msvc>\n> \n> \n> the same works just fine if it’s on c: drive\n> \n> all PostgreSQL distribution is in the Z:\\3p\\derived\\win64\\PostgreSQL\\source folder\n> \n> \n> \n> network UNC path is mapped to Z:\n> \n> thanks again for your help\n> \n> dm\n> \n> \n> \n>> On Jul 30, 2020, at 9:59 PM, Michael Paquier <michael@paquier.xyz> wrote:\n>> \n>> On Thu, Jul 30, 2020 at 06:55:28AM -0400, Dmitry Markman wrote:\n>>> icu => <path_to_icu_install_area>,\n>>> \n>>> is it correct?\n>> \n>> Exactly.\n>> \n>>> if it’s correct does build support UNC paths?\n>> \n>> Yes, these work. One take to be aware of is that the quoting of the\n>> paths can be annoying. FWIW, I just use single quotes with normal\n>> slashes as that's a no-brainer, say:\n>> openssl => 'C:/OpenSSL-hoge/',\n>> \n>> If you are able to break the scripts with a given path, this would\n>> likely be a bug we should address. 
For example, we had lately\n>> complains about the build scripts breaking once you inserted spaces in\n>> the python or OpenSSL paths (see beb2516 and ad7595b).\n>> --\n>> Michael\n> \n\n\n\n", "msg_date": "Thu, 30 Jul 2020 23:16:01 -0400", "msg_from": "Dmitry Markman <dmarkman@mac.com>", "msg_from_op": true, "msg_subject": "Re: windows config.pl question" }, { "msg_contents": "On Thu, Jul 30, 2020 at 11:16:01PM -0400, Dmitry Markman wrote:\n> sorry I meant file src/tools/msvc/Solution.pm\n>\n> routine sub GetOpenSSLVersion\n> \n> has the follwing line:\n> \n> qq(\"$self->{options}->{openssl}\\\\bin\\\\openssl.exe\" version 2>&1);\n> \n> in our distribution openssl.exe isn’t in the $self->{options}->{openssl}\\bin\\ location\n\nNo idea what you are using as OpenSSL installation , so I cannot say\nfor sure. FWIW, the scripts in the code tree are made compatible with\nwhat we suggest in the documentation here:\nhttps://www.postgresql.org/docs/current/install-windows-full.html#id-1.6.4.8.8\n--\nMichael", "msg_date": "Fri, 31 Jul 2020 18:43:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: windows config.pl question" }, { "msg_contents": "\nHi Michael, I found the problem\n\ncommand \n\ncl /? \n\ngives different answer if you start that command from c: or from z: (where z: is mapped drive)\n\n\nif current directory is on c:\n\nthen cl /? returns\n\nC:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Professional>cl /?\nMicrosoft (R) C/C++ Optimizing Compiler Version 19.16.27042 for x64\nCopyright (C) Microsoft Corporation. All rights reserved.\n\n C/C++ COMPILER OPTIONS\n\n\n -OPTIMIZATION-\n\n/O1 maximum optimizations (favor space) /O2 maximum optimizations (favor speed)\n/Ob<n> inline expansion (default n=0) /Od disable optimizations (default)\n/Og enable global optimization /Oi[-] enable intrinsic functions\n/Os favor code space /Ot favor code speed\n/Ox optimizations (favor speed)\n/favor:<blend|AMD64|INTEL64|ATOM> select processor to optimize for, one of:\n blend - a combination of optimizations for several different x64 processors\n AMD64 - 64-bit AMD processors\n INTEL64 - Intel(R)64 architecture processors\n ATOM - Intel(R) Atom(TM) processors\n\n -CODE GENERATION-\n\n/Gu[-] ensure distinct functions have distinct addresses\n/Gw[-] separate global variables for linker\n/GF enable read-only string pooling /Gm[-] enable minimal rebuild\n/Gy[-] separate functions for linker /GS[-] enable security checks\n/GR[-] enable C++ RTTI /GX[-] enable C++ EH (same as /EHsc)\n\n. . . . . . . . . . . . . . .\n\n\nbut if I issue that command if the current folder is on z:\n\nZ:\\>cl /?\nMicrosoft (R) C/C++ Optimizing Compiler Version 19.16.27042 for x64\nCopyright (C) Microsoft Corporation. All rights reserved.\n\nusage: cl [ option... ] filename... [ /link linkoption... ]\n\nfrom other hand\n\ncl -help\n\nreturns consisten answer from c: or from z:\n\nso platform wasn’t identified properly if build started from z:\n\n\nafter I changed cl /? 
to cl -help\n\nbuild and install went successfully\n\nall but one test (tablespace) finished successfully.\n\nthat one failure also related to network drive, because\n\nafter build finished I copied whole directory that contain PostgreSQL distro to\n\nc: and run tests everything went smoothly\n\nthanks\n\n dm\n\n\n\n\n\n\n> On Jul 31, 2020, at 5:43 AM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Jul 30, 2020 at 11:16:01PM -0400, Dmitry Markman wrote:\n>> sorry I meant file src/tools/msvc/Solution.pm\n>> \n>> routine sub GetOpenSSLVersion\n>> \n>> has the follwing line:\n>> \n>> qq(\"$self->{options}->{openssl}\\\\bin\\\\openssl.exe\" version 2>&1);\n>> \n>> in our distribution openssl.exe isn’t in the $self->{options}->{openssl}\\bin\\ location\n> \n> No idea what you are using as OpenSSL installation , so I cannot say\n> for sure. FWIW, the scripts in the code tree are made compatible with\n> what we suggest in the documentation here:\n> https://www.postgresql.org/docs/current/install-windows-full.html#id-1.6.4.8.8\n> --\n> Michael\n\n\n\n", "msg_date": "Fri, 31 Jul 2020 22:41:46 -0400", "msg_from": "Dmitry Markman <dmarkman@mac.com>", "msg_from_op": true, "msg_subject": "Re: windows config.pl question" }, { "msg_contents": "On Fri, Jul 31, 2020 at 10:41:46PM -0400, Dmitry Markman wrote:\n> but if I issue that command if the current folder is on z:\n> \n> Z:\\>cl /?\n> Microsoft (R) C/C++ Optimizing Compiler Version 19.16.27042 for x64\n> Copyright (C) Microsoft Corporation. All rights reserved.\n> \n> usage: cl [ option... ] filename... [ /link linkoption... ]\n> \n> from other hand\n\nInteresting. We rely on the presence of \"favor:\" in the output to\ndetermine which platform to use, aka x64 or Win32.\n\n> cl -help\n> \n> returns consistent answer from c: or from z:\n> \n> so platform wasn’t identified properly if build started from z:\n\nWhat's the output of cl -help on \"z:\" in this case? Is the exact same\noutput as \"cl /?\" or \"cl -help\" on c: generated? I have to admit that\nI don't really know why things would behave this way, but Windows is a\nplatform full of undiscovered mysteries, and I have never seen the\noutput of cl being an issue even for some of my company work, which\nuses stuff much more fancy than the normal way of compiling on\nWindows, requiring me to patch a bit the scripts of src/tools/msvc/ in\na different way.\n--\nMichael", "msg_date": "Sat, 1 Aug 2020 11:58:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: windows config.pl question" }, { "msg_contents": "cl -help works the same\n\non z: or on c:\n\nand it’s equivalent to the output of cl /? from c:\n\nZ:\\>cat cl_out.txt | grep favor\n/O1 maximum optimizations (favor space) /O2 maximum optimizations (favor speed)\n/Os favor code space /Ot favor code speed\n/Ox optimizations (favor speed)\n/favor:<blend|AMD64|INTEL64|ATOM> select processor to optimize for, one of:\n\n\nwhere cl_out.txt is result cl -help > cl_out.txt\n\n\n\n\n> On Jul 31, 2020, at 10:58 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Jul 31, 2020 at 10:41:46PM -0400, Dmitry Markman wrote:\n>> but if I issue that command if the current folder is on z:\n>> \n>> Z:\\>cl /?\n>> Microsoft (R) C/C++ Optimizing Compiler Version 19.16.27042 for x64\n>> Copyright (C) Microsoft Corporation. All rights reserved.\n>> \n>> usage: cl [ option... ] filename... [ /link linkoption... ]\n>> \n>> from other hand\n> \n> Interesting. 
We rely on the presence of \"favor:\" in the output to\n> determine which platform to use, aka x64 or Win32.\n> \n>> cl -help\n>> \n>> returns consistent answer from c: or from z:\n>> \n>> so platform wasn’t identified properly if build started from z:\n> \n> What's the output of cl -help on \"z:\" in this case? Is the exact same\n> output as \"cl /?\" or \"cl -help\" on c: generated? I have to admit that\n> I don't really know why things would behave this way, but Windows is a\n> platform full of undiscovered mysteries, and I have never seen the\n> output of cl being an issue even for some of my company work, which\n> uses stuff much more fancy than the normal way of compiling on\n> Windows, requiring me to patch a bit the scripts of src/tools/msvc/ in\n> a different way.\n> --\n> Michael\n\n\n\n", "msg_date": "Fri, 31 Jul 2020 23:05:25 -0400", "msg_from": "Dmitry Markman <dmarkman@mac.com>", "msg_from_op": true, "msg_subject": "Re: windows config.pl question" } ]
[ { "msg_contents": "Commit 896ddf9b added prefetching to logtape.c to avoid excessive\nfragmentation in the context of hash aggs that spill and have many\nbatches/tapes. Apparently the preallocation doesn't actually perform\nany filesystem operations, so the new mechanism should be zero\noverhead when \"preallocated\" blocks aren't actually used after all\n(right?). However, I notice that this breaks the statistics shown by\nthings like trace_sort, and even EXPLAIN ANALYZE.\nLogicalTapeSetBlocks() didn't get the memo about preallocation.\n\nThe easiest way to spot the issue is to compare trace_sort output on\nv13 with output for the same case in v12 -- the \"%u disk blocks used\"\nstatistics are consistently higher on v13, especially for cases with\nmany tapes. I spotted the bug when I noticed that v13 external sorts\nreportedly use more or less disk space when fewer or more tapes are\ninvolved (again, this came from trace_sort). That doesn't make sense\n-- the total amount of space used for external sort temp files should\npractically be fixed, aside from insignificant rounding effects.\nReducing the amount of memory by orders of magnitude in a Postgres 12\ntuplesort will hardly affect the \"%u disk blocks used\" trace_sort\noutput at all. That's what we need to get back to.\n\nThis bug probably won't be difficult to fix. Actually, we have had\nsimilar problems in the past. The fix could be as simple as teaching\nLogicalTapeSetBlocks() about this new variety of \"sparse allocation\".\nAlthough maybe the preallocation stuff should somehow be rolled into\nthe much older nHoleBlocks stuff. Unsure.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 30 Jul 2020 15:51:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "logtape.c stats don't account for unused \"prefetched\" block numbers" }, { "msg_contents": "On 2020-Jul-30, Peter Geoghegan wrote:\n\n> Commit 896ddf9b added prefetching to logtape.c to avoid excessive\n> fragmentation in the context of hash aggs that spill and have many\n> batches/tapes. Apparently the preallocation doesn't actually perform\n> any filesystem operations, so the new mechanism should be zero\n> overhead when \"preallocated\" blocks aren't actually used after all\n> (right?). However, I notice that this breaks the statistics shown by\n> things like trace_sort, and even EXPLAIN ANALYZE.\n> LogicalTapeSetBlocks() didn't get the memo about preallocation.\n\nThis open item hasn't received any replies. I think Peter knows how to\nfix it already, but no patch has been posted ... It'd be good to get a\nmove on it.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 1 Sep 2020 19:36:02 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Tue, Sep 1, 2020 at 4:36 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> This open item hasn't received any replies. I think Peter knows how to\n> fix it already, but no patch has been posted ... It'd be good to get a\n> move on it.\n\nI picked this up again today.\n\nIt's not obvious what we should do. It's true that the instrumentation\ndoesn't accurately reflect the on-disk temp file overhead. 
That is, it\ndoesn't agree with the high watermark temp file size I see in the\npgsql_tmp directory, which is a clear regression compared to earlier\nreleases (where tuplesort was the only user of logtape.c). But it's\nalso true that we need to use somewhat more temp file space for a\ntuplesort in Postgres 13, because we use the preallocation stuff for\ntuplesort -- though probably without getting any benefit for it.\n\nI haven't figured out how to correct the accounting just yet. In fact,\nI'm not sure that this isn't some kind of leak of blocks from the\nfreelist, which shouldn't happen at all. The code is complicated\nenough that I wasn't able to work that out in the couple of hours I\nspent on it today. I can pick it up again tomorrow.\n\nBTW, this MaxAllocSize freeBlocksLen check is wrong -- doesn't match\nthe later repalloc allocation:\n\n if (lts->nFreeBlocks >= lts->freeBlocksLen)\n {\n /*\n * If the freelist becomes very large, just return and leak this free\n * block.\n */\n if (lts->freeBlocksLen * 2 > MaxAllocSize)\n return;\n\n lts->freeBlocksLen *= 2;\n lts->freeBlocks = (long *) repalloc(lts->freeBlocks,\n lts->freeBlocksLen * sizeof(long));\n }\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 1 Sep 2020 17:24:27 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Tue, Sep 1, 2020 at 5:24 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Sep 1, 2020 at 4:36 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > This open item hasn't received any replies. I think Peter knows how to\n> > fix it already, but no patch has been posted ... It'd be good to get a\n> > move on it.\n>\n> I picked this up again today.\n\nOne easy way to get logtape.c to behave in the same way as Postgres 12\nfor a multi-pass external sort (i.e. to use fewer blocks and to report\nthe number of blocks used accurately) is to #define\nTAPE_WRITE_PREALLOC_MIN and TAPE_WRITE_PREALLOC_MAX to 1. So it looks\nlike the problem is in the preallocation stuff added by commit\n896ddf9b3cd, and not the new heap-based free list logic added by\ncommit c02fdc92230. That's good news, because it means that the\nproblem may be fairly well isolated -- commit 896ddf9b3cd was a pretty\nsmall and isolated thing.\n\nThe comments in ltsWriteBlock() added by the 2017 bugfix commit\n7ac4a389a7d clearly say that the zero block writing stuff is only\nsupposed to happen at the edge of a tape boundary, which ought to be\nrare -- see the comment block in ltsWriteBlock(). And yet the new\npreallocation stuff explicitly relies on that it writing zero blocks\nmuch more frequently. I'm concerned that that can result in increased\nand unnecessary I/O, especially for sorts, but also for hash aggs that\nspill. 
I'm also concerned that having preallocated-but-allocated\nblocks confuses the accounting used by\ntrace_sort/LogicalTapeSetBlocks().\n\nSeparately, it's possible to make the\ntrace_sort/LogicalTapeSetBlocks() instrumentation agree with the\nfilesystem by replacing the use of nBlocksAllocated within\nLogicalTapeSetBlocks() with nBlocksWritten -- that seems to make the\ninstrumentation correct without changing the current behavior at all.\nBut I'm not ready to endorse that approach, since it's not quite clear\nwhat nBlocksAllocated and nBlocksWritten mean right now -- those two\nfields were both added by the aforementioned 2017 bugfix commit, which\nintroduced the \"allocated vs written\" distinction in the first place.\n\nWe should totally disable the preallocation stuff for external sorts\nin any case. External sorts are naturally characterized by relatively\nlarge, distinct batching of reads and writes -- preallocation cannot\nhelp.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 5 Sep 2020 12:03:43 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Sat, 2020-09-05 at 12:03 -0700, Peter Geoghegan wrote:\n> We should totally disable the preallocation stuff for external sorts\n> in any case. External sorts are naturally characterized by relatively\n> large, distinct batching of reads and writes -- preallocation cannot\n> help.\n\nPatch attached to disable preallocation for Sort.\n\nI'm still looking into the other concerns.\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 08 Sep 2020 10:27:06 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Sat, 2020-09-05 at 12:03 -0700, Peter Geoghegan wrote:\n> The comments in ltsWriteBlock() added by the 2017 bugfix commit\n> 7ac4a389a7d clearly say that the zero block writing stuff is only\n> supposed to happen at the edge of a tape boundary, which ought to be\n> rare -- see the comment block in ltsWriteBlock(). And yet the new\n> preallocation stuff explicitly relies on that it writing zero blocks\n> much more frequently. I'm concerned that that can result in increased\n> and unnecessary I/O, especially for sorts, but also for hash aggs\n> that\n> spill. I'm also concerned that having preallocated-but-allocated\n> blocks confuses the accounting used by\n> trace_sort/LogicalTapeSetBlocks().\n\nPreallocation showed significant gains for HashAgg, and BufFile doesn't\nsupport sparse writes. 
So, for HashAgg, it seems like we should just\nupdate the comment and consider it the price of using BufFile.\n\n(Aside: is there a reason why BufFile doesn't support sparse writes, or\nis it just a matter of implementation?)\n\nFor Sort, we can just disable preallocation.\n\n> Separately, it's possible to make the\n> trace_sort/LogicalTapeSetBlocks() instrumentation agree with the\n> filesystem by replacing the use of nBlocksAllocated within\n> LogicalTapeSetBlocks() with nBlocksWritten -- that seems to make the\n> instrumentation correct without changing the current behavior at all.\n> But I'm not ready to endorse that approach, since it's not quite\n> clear\n> what nBlocksAllocated and nBlocksWritten mean right now -- those two\n> fields were both added by the aforementioned 2017 bugfix commit,\n> which\n> introduced the \"allocated vs written\" distinction in the first place\n\nRight now, it seems nBlocksAllocated means \"number of blocks returned\nby ltsGetFreeBlock(), plus nHoleBlocks\".\n\nnBlocksWritten seems to mean \"the logical size of the BufFile\". The\nBufFile can have holes in it after concatenation, but from the\nperspective of logtape.c, nBlocksWritten seems like a better fit for\ninstrumentation purposes. So I'd be inclined to return (nBlocksWritten\n- nHoleBlocks).\n\nThe only thing I can think of that would be better is if BufFile\ntracked for itself the logical vs. physical size, which might be a good\nimprovement to make (and would mean that logtape.c wouldn't be\nresponsible for tracking the holes itself).\n\nThoughts?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 08 Sep 2020 23:28:49 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Tue, Sep 8, 2020 at 11:28 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Preallocation showed significant gains for HashAgg, and BufFile doesn't\n> support sparse writes. So, for HashAgg, it seems like we should just\n> update the comment and consider it the price of using BufFile.\n\n> For Sort, we can just disable preallocation.\n\n+1.\n\nI think that you can push sort-no-prealloc.patch without delay. That\nlooks good to me.\n\n> Right now, it seems nBlocksAllocated means \"number of blocks returned\n> by ltsGetFreeBlock(), plus nHoleBlocks\".\n\nJust to be clear: I'm assuming that we must honor the original intent\nof earlier code in my remarks here (in particular, code added by\n7ac4a389a7d). This may not be exactly where we end up, but it's a good\nstarting point.\n\nnHoleBlocks was added by the parallel CREATE INDEX commit in 2018,\nwhich added BufFile/logtape.c concatenation. Whereas nBlocksAllocated\nwas added by the earlier bugfix commit 7ac4a389a7d in 2017 (the bugfix\nI've referenced several times upthread). Clearly nBlocksAllocated\ncannot be defined in terms of some other thing that wasn't there when\nit was first added (I refer to nHoleBlocks). In general nHoleBlocks\ncan only be non-zero when the logical tapeset has been unified in a\nleader process (for a parallel CREATE INDEX).\n\nnBlocksAllocated is not the same thing as nBlocksWritten, though the\ndifference is more subtle than you suggest. 
nBlocksAllocated actually\nmeans (or should mean) \"the number of blocks allocated to the file\",\nwhich is usually the same thing that a stat() call or tools like \"ls\"\nare expected report for the underlying temp file once the merge phase\nof an external sort is reached (assuming that you only need one temp\nfile for a BufFile, and didn't use parallelism/concatenation, which is\nthe common case). That's why nBlocksAllocated is what\nLogicalTapeSetBlocks() returns (pretty much). At least, the original\npost-7ac4a389a7d version of LogicalTapeSetBlocks() was just \"return\nnBlocksAllocated;\". nHoleBlocks was added for parallel CI, but that\nwas only supposed to compensate for the holes left behind by\nconcatenation/parallel sort, without changing the logtape.c space\nmanagement design in any fundamental way.\n\nObviously you must be wondering what the difference is, if it's not\njust the nHoleBlocks thing. nBlocksAllocated is not necessarily equal\nto nBlocksWritten (even when we ignore concatenation/nHoleBlocks), but\nit's almost always equal in practice (again, barring nHoleBlocks !=\n0). It's possible that a tuplesort will not have flushed the last\nblock at a point when LogicalTapeSetBlocks() is called -- it will have\nallocated the block, but not yet written it to the BufFile. IOW, as\nfar as tuplesort.c is concerned the data is written to tape, but it\nhappens to not have been passed through to the OS via write(), or even\npassed through to BufFileWrite() -- it happens to still be in one of\nthe small per-tape write buffers. When this occurs, a small amount of\ndirty data in said per-tape buffer is considered written by\ntuplesort.c, but from the point of view of logtape.c it is allocated\nbut not yet \"written\" (by which I mean not yet passed to buffile.c,\nwhich actually does its own buffering, which it can neglect to flush\nimmediately in turn).\n\nIt's possible that tuplesort.c will need to call\nLogicalTapeSetBlocks() at an earlier point after all tuples are\nwritten but before they're \"flushed\" in logtape.c/buffile.c. We need\nto avoid confusion when that happens. We want to insulate tuplesort.c\nfrom implementation details that are private to logtape.c and/or\nbuffile.c. Bear in mind that nBlocksAllocated was originally only ever\nsupposed to have a value equal to nBlocksWritten, or the value\nnBlocksWritten + 1. It is reasonable to want to hide the buffering\nfrom LogicalTapeSetBlocks() once you realize that this mechanism is\nonly supposed to smooth-over an edge case involving one extra block\nthat will be written out in a moment anyway.\n\nWhat does all of this mean for the new preallocation stuff that\nbenefits HashAggs that spill? Well, I'm not sure. I was specifically\nconcerned that somebody would end up misusing the ltsWriteBlock()\nallocated-but-not-written thing in this way back in 2017, and said so\nat the time -- that's why commit 7ac4a389a7d added comments about all\nthis to ltsWriteBlock(). For external sorts, that we're agreed won't\nbe using preallocation anyway, I think that we should go back to\nreporting allocated blocks from LogicalTapeSetBlocks() -- very often\nthis is nBlocksWritten, but occasionally it's nBlocksWritten + 1. I\nhaven't yet refreshed my memory on the exact details of when you get\none behavior rather than the other, but I know it is possible in\npractice with a tuplesort on Postgres 12. 
It might depend on subtle\nissues like the alignment with BufFile segments -- see my test case\nfrom 2017 to get an idea of how to make it easier to reveal problems\nin this area:\n\nhttps://www.postgresql.org/message-id/CAM3SWZRWdNtkhiG0GyiX_1mUAypiK3dV6-6542pYe2iEL-foTA@mail.gmail.com\n\nWe still need to put the reliance on ltsWriteBlock() allocating many\nblocks before they've been logically written on some kind of formal\nfooting for Postgres 13 -- it is now possible that an all-zero block\nwill be left behind even after we're done writing and have flushed all\ntemp buffers, which is a new thing. In cases when the\nzero-block-written thing happened on Postgres 12, we would later flush\nout a block that overwrote every zero block -- that happened reliably.\nltsWriteBlock()'s loop only \"preallocated\" blocks it *knew* would get\nfilled with real data shortly afterwards, as an implementation\nexpedient -- not as an optimization. This is no longer the case.\n\nAt a minimum, we need to update the old ltsWriteBlock()\nallocated-but-not-written comments to acknowledge that the HashAgg\ncase exists and has different concerns. We must also determine whether\nwe have the same issue with written-but-not-yet-flushed data for the\nnew nodeAgg.c caller. You're not doing the ltsWriteBlock()\nloop-that-writes-zero-blocks thing because you have an unflushed\nbuffer from another tape -- you're doing it to preallocate and avoid\npossible fragmentation. I'm mostly okay with doing the preallocation\nthat way, but that needs to be reconciled with the original design.\nAnd the original design needs to continue to do the same things for\ntuplesort.c, and maybe nodeAgg.c, too.\n\nI think that the return value of LogicalTapeSetBlocks() should be at\nleast nBlocksWritten, while also including blocks that we know that\nflushing dirty buffered data out will write in a moment, too (note\nthat I'm still pretending nHoleBlocks doesn't exist because it's not\nimportant in my remarks here). IOW, it ought to include preallocated\nblocks (for HashAgg), while not failing to count one extra block that\nhappens to still be buffered but is written as far as the logtape.c\ncaller is concerned (certainly for tuplesort caller, and maybe for\nHashAgg caller too).\n\n> nBlocksWritten seems to mean \"the logical size of the BufFile\". The\n> BufFile can have holes in it after concatenation, but from the\n> perspective of logtape.c, nBlocksWritten seems like a better fit for\n> instrumentation purposes. So I'd be inclined to return (nBlocksWritten\n> - nHoleBlocks).\n>\n> The only thing I can think of that would be better is if BufFile\n> tracked for itself the logical vs. physical size, which might be a good\n> improvement to make (and would mean that logtape.c wouldn't be\n> responsible for tracking the holes itself).\n\nI don't really think that that's workable, for what it's worth. The\n\"holes\" left behind by concatenation (and counted by nHoleBlocks) are\nranges that logtape.c can never reuse that are \"between\" worker tapes.\nThey are necessary because logtape.c needs to be able to read back\nblock number metadata from worker temp files (it makes sense of them\nby applying an offset). ISTM that the logical vs physical size\ndistinction will have to be tracked by logtape.c for as long as it\nbuffers data for writes. 
It's the natural way to do it IMV.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 10 Sep 2020 18:42:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Thu, Sep 10, 2020 at 6:42 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Obviously you must be wondering what the difference is, if it's not\n> just the nHoleBlocks thing. nBlocksAllocated is not necessarily equal\n> to nBlocksWritten (even when we ignore concatenation/nHoleBlocks), but\n> it's almost always equal in practice (again, barring nHoleBlocks !=\n> 0).\n\nNoticing that you pushed a commit to disable preallocation for\nexternal sorts, I tried to determine if there are any remaining\nproblem. As far as I can tell there are no remaining problems --\nevidently the loop logic in ltsWriteBlock() both performs its original\ntask (per commit 7ac4a389a7d), as well as the new task of\npreallocation for its HashAggs-that-spill caller.\n\nThere is a case in the regression tests (including the Postgres 12\nregression tests) that relies on the loop within ltsWriteBlock() for\nan external sort. FWIW, that happens in the \"cluster clstr_tst4 using\ncluster_sort\" cluster tuplesort. The trace_sort output (and the temp\nfile size) is now consistent across versions 12 and 13.\n\nI'll probably close out this open item tomorrow. I need to think about\nit some more, but right now everything looks good. I think I'll\nprobably end up pushing a commit with more explanatory comments.\n\nThank you\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 11 Sep 2020 18:29:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Fri, Sep 11, 2020 at 6:29 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'll probably close out this open item tomorrow. I need to think about\n> it some more, but right now everything looks good. I think I'll\n> probably end up pushing a commit with more explanatory comments.\n\nThat said, we still need to make sure that the preallocation\ninstrumentation for HashAggs-that-spill is sensible -- it has to\nactually match the temp file size.\n\nIt would be awkward if we just used nBlocksWritten within\nLogicalTapeSetBlocks() in the case where we didn't preallocate (or in\nall cases). Not entirely sure what to do about that just yet.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 11 Sep 2020 18:37:13 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Fri, Sep 11, 2020 at 6:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It would be awkward if we just used nBlocksWritten within\n> LogicalTapeSetBlocks() in the case where we didn't preallocate (or in\n> all cases). 
Not entirely sure what to do about that just yet.\n\nI guess that that's the logical thing to do, as in the attached patch.\n\nWhat do you think, Jeff?\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 14 Sep 2020 14:24:54 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Mon, 2020-09-14 at 14:24 -0700, Peter Geoghegan wrote:\n> On Fri, Sep 11, 2020 at 6:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > It would be awkward if we just used nBlocksWritten within\n> > LogicalTapeSetBlocks() in the case where we didn't preallocate (or\n> > in\n> > all cases). Not entirely sure what to do about that just yet.\n> \n> I guess that that's the logical thing to do, as in the attached\n> patch.\n\nHi Peter,\n\nIn the comment in the patch, you say:\n\n\"In practice this probably doesn't matter because we'll be called after\nthe flush anyway, but be tidy.\"\n\nBy which I assume you mean that LogicalTapeRewindForRead() will be\ncalled before LogicalTapeSetBlocks().\n\nIf that's the intention of LogicalTapeSetBlocks(), should we just make\nit a requirement that there are no open write buffers for any tapes\nwhen it's called? Then we could just use nBlocksWritten in both cases,\nright?\n\n(Aside: HashAgg calls it before LogicalTapeRewindForRead(). That might\nbe a mistake in HashAgg where it will keep the write buffers around\nlonger than necessary. If I recall correctly, it was my intention to\nrewind for reading immediately after the batch was finished, which is\nwhy I made the read buffer lazily-allocated.)\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 14 Sep 2020 15:20:02 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On 2020-Sep-14, Peter Geoghegan wrote:\n\n> On Fri, Sep 11, 2020 at 6:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > It would be awkward if we just used nBlocksWritten within\n> > LogicalTapeSetBlocks() in the case where we didn't preallocate (or in\n> > all cases). Not entirely sure what to do about that just yet.\n> \n> I guess that that's the logical thing to do, as in the attached patch.\n\nI don't understand this patch. Or maybe I should say I don't understand\nthe code you're patching. Why isn't the correct answer *always*\nnBlocksWritten? The comment in LogicalTapeSet says:\n\n\"nBlocksWritten is the size of the underlying file, in BLCKSZ blocks.\"\n\nso if LogicalTapeSetBlocks wants to do what its comment says, that is,\n\n\"Obtain total disk space currently used by a LogicalTapeSet, in blocks.\"\n\nthen it seems like they're an exact match. Either that, or more than\nzero of those comments are lying.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 14 Sep 2020 19:23:59 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Mon, Sep 14, 2020 at 3:24 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I don't understand this patch. Or maybe I should say I don't understand\n> the code you're patching. Why isn't the correct answer *always*\n> nBlocksWritten? The comment in LogicalTapeSet says:\n\nI think that they are an exact match in practice (i.e. 
nBlocksWritten\n== nBlocksAllocated), given when and how we call\nLogicalTapeSetBlocks().\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 14 Sep 2020 15:39:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Mon, Sep 14, 2020 at 3:20 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> In the comment in the patch, you say:\n>\n> \"In practice this probably doesn't matter because we'll be called after\n> the flush anyway, but be tidy.\"\n>\n> By which I assume you mean that LogicalTapeRewindForRead() will be\n> called before LogicalTapeSetBlocks().\n\nYeah, I'm pretty sure that that's an equivalent way of expressing the\nsame idea. It appears that this assumption holds, though only when\nwe're not using preallocation (i.e. it doesn't necessarily hold for\nthe HashAggs-that-spill case, as I go into below).\n\n> If that's the intention of LogicalTapeSetBlocks(), should we just make\n> it a requirement that there are no open write buffers for any tapes\n> when it's called? Then we could just use nBlocksWritten in both cases,\n> right?\n\nThat does seem appealing. Perhaps it could be enforced by an assertion.\n\n> (Aside: HashAgg calls it before LogicalTapeRewindForRead(). That might\n> be a mistake in HashAgg where it will keep the write buffers around\n> longer than necessary. If I recall correctly, it was my intention to\n> rewind for reading immediately after the batch was finished, which is\n> why I made the read buffer lazily-allocated.)\n\nIf I add the assertion described above and run the regression tests,\nit fails within \"select_distinct\" (and at other points). This is the\nspecific code:\n\n--- a/src/backend/utils/sort/logtape.c\n+++ b/src/backend/utils/sort/logtape.c\n@@ -1284,6 +1284,7 @@ LogicalTapeSetBlocks(LogicalTapeSet *lts)\n * (In practice this probably doesn't matter because we'll be called after\n * the flush anyway, but be tidy.)\n */\n+ Assert(lts->nBlocksWritten == lts->nBlocksAllocated);\n if (lts->enable_prealloc)\n return lts->nBlocksWritten;\n\nMaybe the LogicalTapeRewindForRead() inconsistency you mention could\nbe fixed, which would enable the simplification you suggested. What do\nyou think?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 14 Sep 2020 15:54:48 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Mon, Sep 14, 2020 at 3:39 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I think that they are an exact match in practice (i.e. nBlocksWritten\n> == nBlocksAllocated), given when and how we call\n> LogicalTapeSetBlocks().\n\nJust to be clear: this is only true for external sorts. 
The\npreallocation stuff can make nBlocksAllocated quite a lot higher.\nThat's probably why adding a new \"Assert(lts->nBlocksWritten ==\nlts->nBlocksAllocated)\" assertion fails during the regression tests,\nthough there might be other reasons as well (I'm thinking of the\nLogicalTapeRewindForRead() inconsistency Jeff mentioned).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 14 Sep 2020 15:58:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Mon, Sep 14, 2020 at 3:54 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> If I add the assertion described above and run the regression tests,\n> it fails within \"select_distinct\" (and at other points). This is the\n> specific code:\n\nThis variant of the same assertion works fine:\n\n+ Assert(lts->enable_prealloc || lts->nBlocksWritten ==\nlts->nBlocksAllocated);\n\n(This is hardly surprising, though.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 14 Sep 2020 16:03:14 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Mon, 2020-09-14 at 15:54 -0700, Peter Geoghegan wrote:\n> Maybe the LogicalTapeRewindForRead() inconsistency you mention could\n> be fixed, which would enable the simplification you suggested. What\n> do\n> you think?\n\nYes, it was apparently an oversight. Patch attached.\n\nRC1 was just stamped, are we in a sensitive time or is it still\npossible to backport this to REL_13_STABLE?\n\nIf not, that's fine, I'll just commit it to master. It's a little less\nimportant after 9878b643, which reduced the overpartitioning issue.\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 14 Sep 2020 17:50:15 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Mon, Sep 14, 2020 at 5:50 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Yes, it was apparently an oversight. Patch attached.\n\nThis is closer to how logical tapes are used within tuplesort.c. I\nnotice that this leads to about a 50% reduction in temp file usage for\na test case involving very little work_mem (work_mem is set to 64).\nBut it doesn't seem to make as much difference with more work_mem. It\nprobably has something to do with recursion during spilling.\n\n> RC1 was just stamped, are we in a sensitive time or is it still\n> possible to backport this to REL_13_STABLE?\n\nTesting indicates that this still doesn't make \"nBlocksWritten ==\nnBlocksAllocated\" when the instrumentation is used for\nHashAggs-that-spill.\n\nI'm not sure what I was talking about earlier when I connected this\nwith the main/instrumentation issue, since preallocation used by\nlogtape.c to help HashAggs-that-spill necessarily reserves blocks\nwithout writing them out for a while (the fires in California have\nmade it difficult to be productive). You might write blocks out as\nzero blocks first, and then only write the real data later\n(overwriting the zero blocks). 
But no matter how the writes among\ntapes are interlaced, the fact is that nBlocksAllocated can exceed\nnBlocksWritten by at least one block per active tape.\n\nIf we really wanted to ensure \"nBlocksWritten == nBlocksAllocated\",\nwouldn't it be necessary for LogicalTapeSetBlocks() to go through the\nremaining preallocated blocks from each tape and count the number of\nblocks \"logically preallocated\" (by ltsGetPreallocBlock()) but not yet\n\"physically preallocated\" (by being written out as zero blocks within\nltsWriteBlock())? That count would have to be subtracted, because\nnBlocksAllocated includes logically preallocated blocks, without\nregard for whether they've been physically preallocated. But we only\nknow the difference by checking against nBlocksWritten, so we might as\nwell just use my patch from earlier. (I'm not arguing that we should,\nI'm just pointing out the logical though perhaps absurd conclusion.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 14 Sep 2020 18:56:50 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Mon, Sep 14, 2020 at 6:56 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'm not sure what I was talking about earlier when I connected this\n> with the main/instrumentation issue, since preallocation used by\n> logtape.c to help HashAggs-that-spill necessarily reserves blocks\n> without writing them out for a while (the fires in California have\n> made it difficult to be productive). You might write blocks out as\n> zero blocks first, and then only write the real data later\n> (overwriting the zero blocks). But no matter how the writes among\n> tapes are interlaced, the fact is that nBlocksAllocated can exceed\n> nBlocksWritten by at least one block per active tape.\n\nOh, wait. Of course the point was that we wouldn't even have to use\nnBlocksAllocated in LogicalTapeSetBlocks() anymore -- we would make\nthe assumption that nBlocksWritten could be used for all callers in\nall cases. Which is a reasonable assumption once you enforce that\nthere are no active write buffers. Which is evidently a good idea\nanyway, since it saves on temp file disk space in\nHashAggs-that-spill/prealloc cases with very little work_mem.\n\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 14 Sep 2020 19:09:17 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Mon, Sep 14, 2020 at 7:09 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Oh, wait. Of course the point was that we wouldn't even have to use\n> nBlocksAllocated in LogicalTapeSetBlocks() anymore -- we would make\n> the assumption that nBlocksWritten could be used for all callers in\n> all cases. Which is a reasonable assumption once you enforce that\n> there are no active write buffers. Which is evidently a good idea\n> anyway, since it saves on temp file disk space in\n> HashAggs-that-spill/prealloc cases with very little work_mem.\n\nLet's assume that we'll teach LogicalTapeSetBlocks() to use\nnBlocksWritten in place of nBlocksAllocated in all cases, as now seems\nlikely. 
Rather than asserting \"nBlocksWritten == nBlocksAllocated\"\ninside LogicalTapeSetBlocks() (as I suggested earlier at one point),\nwe could instead teach LogicalTapeSetBlocks() to iterate through each\ntape from the tapeset and make sure each tape has no writes buffered\n(so everything must be flushed). We could add a loop that would only\nbe used on assert-enabled builds.\n\nThis looping-through-tapes-to assert code would justify relying on\nnBlocksWritten in LogicalTapeSetBlocks(), and would make sure that we\ndon't let any bugs like this slip in in the future. It would\nnecessitate that we commit Jeff's hashagg-release-write-buffers.patch\npatch from earlier, I think, but that seems like a good idea anyway.\n\nYou suggested this yourself, Jeff (my suggestion about the assertion\nis just an expansion on your suggestion from earlier). This all seems\nlike a good idea to me. Can you write a patch that adjusts\nLogicalTapeSetBlocks() along these lines? Hopefully the assertion loop\nthing won't reveal some other problem with this plan.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 14 Sep 2020 19:29:33 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Mon, 2020-09-14 at 19:29 -0700, Peter Geoghegan wrote:\n> Let's assume that we'll teach LogicalTapeSetBlocks() to use\n> nBlocksWritten in place of nBlocksAllocated in all cases, as now\n> seems\n> likely. Rather than asserting \"nBlocksWritten == nBlocksAllocated\"\n> inside LogicalTapeSetBlocks() (as I suggested earlier at one point),\n> we could instead teach LogicalTapeSetBlocks() to iterate through each\n> tape from the tapeset and make sure each tape has no writes buffered\n> (so everything must be flushed). We could add a loop that would only\n> be used on assert-enabled builds.\n\nSounds reasonable.\n\n> You suggested this yourself, Jeff (my suggestion about the assertion\n> is just an expansion on your suggestion from earlier). This all seems\n> like a good idea to me. Can you write a patch that adjusts\n> LogicalTapeSetBlocks() along these lines? Hopefully the assertion\n> loop\n> thing won't reveal some other problem with this plan.\n\nSure. Will backporting either patch into REL_13_STABLE now interfere\nwith RC1 release in any way?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 14 Sep 2020 20:07:44 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Mon, Sep 14, 2020 at 8:07 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Sure. Will backporting either patch into REL_13_STABLE now interfere\n> with RC1 release in any way?\n\nThe RMT will discuss this.\n\nIt would help if there was a patch ready to go.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 14 Sep 2020 20:48:05 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Mon, 2020-09-14 at 20:48 -0700, Peter Geoghegan wrote:\n> On Mon, Sep 14, 2020 at 8:07 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > Sure. Will backporting either patch into REL_13_STABLE now\n> > interfere\n> > with RC1 release in any way?\n> \n> The RMT will discuss this.\n> \n> It would help if there was a patch ready to go.\n\nAttached. 
HashAgg seems to accurately reflect the filesize, as does\nSort.\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 14 Sep 2020 23:44:47 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Thu, 2020-09-10 at 18:42 -0700, Peter Geoghegan wrote:\n> We still need to put the reliance on ltsWriteBlock() allocating many\n> blocks before they've been logically written on some kind of formal\n> footing for Postgres 13 -- it is now possible that an all-zero block\n> will be left behind even after we're done writing and have flushed\n> all\n> temp buffers, which is a new thing.\n\nIs the current direction of this thread (i.e. the two posted patches)\naddressing your concern here?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 14 Sep 2020 23:52:32 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Mon, Sep 14, 2020 at 11:52 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Thu, 2020-09-10 at 18:42 -0700, Peter Geoghegan wrote:\n> > We still need to put the reliance on ltsWriteBlock() allocating many\n> > blocks before they've been logically written on some kind of formal\n> > footing for Postgres 13 -- it is now possible that an all-zero block\n> > will be left behind even after we're done writing and have flushed\n> > all\n> > temp buffers, which is a new thing.\n>\n> Is the current direction of this thread (i.e. the two posted patches)\n> addressing your concern here?\n\nYes.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Sep 2020 00:00:54 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Mon, Sep 14, 2020 at 8:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Sep 14, 2020 at 8:07 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > Sure. Will backporting either patch into REL_13_STABLE now interfere\n> > with RC1 release in any way?\n>\n> The RMT will discuss this.\n\nIt is okay to skip RC1 and commit the patch/patches for 13 itself.\nPlease wait until after Tom has pushed the rc1 tag. This will probably\nhappen tomorrow.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Sep 2020 09:03:50 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Mon, Sep 14, 2020 at 8:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> On Mon, Sep 14, 2020 at 8:07 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>>> Sure. Will backporting either patch into REL_13_STABLE now interfere\n>>> with RC1 release in any way?\n\n> It is okay to skip RC1 and commit the patch/patches for 13 itself.\n> Please wait until after Tom has pushed the rc1 tag. This will probably\n> happen tomorrow.\n\nI plan to tag rc1 in around six hours, ~2200UTC today, barring\ntrouble reports from packagers (none so far). 
Feel free to push your\npatch once the tag appears.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Sep 2020 12:08:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Mon, Sep 14, 2020 at 11:44 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Attached. HashAgg seems to accurately reflect the filesize, as does\n> Sort.\n\nFor the avoidance of doubt: I think that this is the right way to go,\nand that it should be committed shortly, before we stamp 13.0. The\nsame goes for hashagg-release-write-buffers.patch.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Sep 2020 11:33:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "I wrote:\n> I plan to tag rc1 in around six hours, ~2200UTC today, barring\n> trouble reports from packagers (none so far). Feel free to push your\n> patch once the tag appears.\n\nThe tag is applied, though for some reason the pgsql-committers auto\ne-mail about new tags hasn't been working lately.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Sep 2020 18:02:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On Tue, Sep 15, 2020 at 3:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The tag is applied, though for some reason the pgsql-committers auto\n> e-mail about new tags hasn't been working lately.\n\nThanks. FWIW I did get the automated email shortly after you sent this email.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Sep 2020 15:27:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Tue, Sep 15, 2020 at 3:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The tag is applied, though for some reason the pgsql-committers auto\n>> e-mail about new tags hasn't been working lately.\n\n> Thanks. FWIW I did get the automated email shortly after you sent this email.\n\nYeah, it did show up here too, about an hour after I pushed the tag.\nThe last several taggings have been delayed similarly, and I think\nat least one never was reported at all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Sep 2020 19:00:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "On 2020-Sep-15, Tom Lane wrote:\n\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > On Tue, Sep 15, 2020 at 3:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> The tag is applied, though for some reason the pgsql-committers auto\n> >> e-mail about new tags hasn't been working lately.\n> \n> > Thanks. FWIW I did get the automated email shortly after you sent this email.\n> \n> Yeah, it did show up here too, about an hour after I pushed the tag.\n> The last several taggings have been delayed similarly, and I think\n> at least one never was reported at all.\n\nI approved it about half an hour after it got in the moderation queue.\n\nThey get moderated because the noreply@postgresql.org address which\nappears as sender is not subscribed to any list. 
I also added that\naddress to the whitelist now, but whether that's a great fix in the long\nrun is debatable. \n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 15 Sep 2020 21:10:27 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> They get moderated because the noreply@postgresql.org address which\n> appears as sender is not subscribed to any list.\n\nAh-hah.\n\n> I also added that\n> address to the whitelist now, but whether that's a great fix in the long\n> run is debatable. \n\nCan/should we change the address that originates such messages?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Sep 2020 20:26:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logtape.c stats don't account for unused \"prefetched\" block\n numbers" } ]
[ { "msg_contents": "Hello,\n\nI found that \"make installcheck\" could not work in PGXS.\n\n--[/src/foo_project/Makefile]--\nSUBDIRS = foo\n\nTAP_TESTS = 1\n\nPG_CONFIG = pg_config\nPGXS := $(shell $(PG_CONFIG) --pgxs)\n\ninclude $(PGXS)\n\n$(recurse)\n$(recurse_always)\n\n--[/src/foo_project/t/001_foo_test.pl]\nuse strict;\nuse warnings;\nuse PostgresNode;\nuse TestLib;\n\n# Replace with the number of tests to execute:\nuse Test::More tests => 1;\n\nmy $node = PostgresNode->get_new_node('primary');\n$node->init; ## --> Bailout called in PostgresNode.pm that refers PG_REGRESS environment variable.\n\n--[log]--\ncd /src/foo_project\nmake installcheck\n :\n :\nrm -rf '/src/foo_project'/tmp_check\n/bin/mkdir -p '/src/foo_project'/tmp_check\ncd ./ && TESTDIR='/src/foo_project' PATH=\"/installdir/bin:$PATH\" PGPORT='65432' top_builddir='/src/foo_project//installdir/lib/pgxs/src/makefiles/../..' PG_REGRESS='/src/foo_project//installdir/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress' REGRESS_SHLIB='/src/test/regress/regress.so' /bin/prove -I /installdir/lib/pgxs/src/makefiles/../../src/test/perl/ -I ./ t/*.pl\nt/001_foo_test.pl .... Bailout called. Further testing stopped: system /src/foo_project//installdir/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress failed\nFAILED--Further testing stopped: system /src/foo_project//installdir/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress failed\nmake: *** [installcheck] Error 255\n\n\nThe cause is in [Makefile.global.in].\nAlthogh $(CURDIR) is '/src/foo_project' and $(top_builddir) is '/installdir/lib/pgxs',\nthe code concatenates them for setting PG_REGRESS.\n\n``\n define prove_installcheck\n rm -rf '$(CURDIR)'/tmp_check\n $(MKDIR_P) '$(CURDIR)'/tmp_check\n cd $(srcdir) && TESTDIR='$(CURDIR)' PATH=\"$(bindir):$$PATH\" PGPORT='6$(DEF_PGPORT)' top_builddir='$(CURDIR)/$(top_builddir)' PG_REGRESS='$(CURDIR)/$(top_builddir)/src/test/regress/pg_regress' REGRESS_SHLIB='$(abs_top_builddir)/src/test/regress/regress$(DLSUFFIX)' $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl)\n endef\n``\n\nIn non-PGXS environment, top_builddir is a relative path against the top of postgresql source tree.\nBut, in PGXS, top_builddir is a absolute path like /installdir/lib/pgxs/src/makefiles intentionally.\nThe existing code of [Makefile.global.in] does not consider it.\n\nI make a patch. (It may not to be smart.)\nPlease your comments.\n\n\nRegards\nRyo Matsumura", "msg_date": "Fri, 31 Jul 2020 08:31:56 +0000", "msg_from": "\"matsumura.ryo@fujitsu.com\" <matsumura.ryo@fujitsu.com>", "msg_from_op": true, "msg_subject": "[bugfix]\"make installcheck\" could not work in PGXS" }, { "msg_contents": "On Fri, Jul 31, 2020 at 08:31:56AM +0000, matsumura.ryo@fujitsu.com wrote:\n> I found that \"make installcheck\" could not work in PGXS.\n\nYeah, that's a known problem. One way to counter that is for example\nto grab the path of pg_regress from pg_config --libdir and set\n$ENV{PG_REGRESS} to it, but that's hacky. So I agree that it would be\ngood to do something.\n\n> In non-PGXS environment, top_builddir is a relative path against the top of postgresql source tree.\n> But, in PGXS, top_builddir is a absolute path like /installdir/lib/pgxs/src/makefiles intentionally.\n> The existing code of [Makefile.global.in] does not consider it.\n> \n> I make a patch. (It may not to be smart.)\n> Please your comments.\n\nNot sure that this goes completely to the right direction. 
It seems\nto me that we should have room to set and use PG_REGRESS also for\npg_regress_check and pg_regress_installcheck.\n--\nMichael", "msg_date": "Fri, 31 Jul 2020 19:57:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [bugfix]\"make installcheck\" could not work in PGXS" }, { "msg_contents": "Hello, \n\nOn Fri, Aug 5, 2020 at 10:57:56 +0000, Michael Paquier <michael(at)paquier(dot)xyz> wrote:\n> Yeah, that's a known problem. One way to counter that is for example\n> to grab the path of pg_regress from pg_config --libdir and set\n> $ENV{PG_REGRESS} to it, but that's hacky. So I agree that it would be\n> good to do something.\n\nThank you.\nI attach a new patch.\n\n> Not sure that this goes completely to the right direction. It seems\n> to me that we should have room to set and use PG_REGRESS also for\n> pg_regress_check and pg_regress_installcheck.\n\nI understand that PG_REGRESS is an environment variable for each test program.\nSo I add a gmake variable PG_REGRESS_PATH.\n\nThe followings are other changings.\n- Change REGRESS_SHLIB like as PG_REGRESS.\n- Replace $(CURDIR)/$(top_builddir) to $(abs_top_builddir).\n- Remove setting of environment variable 'top_builddir' in command line for prove_installcheck.\n I wonder what I should set to it and then I remove it.\n Because top_builddir is used for gmake genellaly not for test programs and PostgreSQL's test framework doesn't use it.\n Is it going too far?\n\nRegards\nRyo Matsumura", "msg_date": "Fri, 7 Aug 2020 01:43:49 +0000", "msg_from": "\"matsumura.ryo@fujitsu.com\" <matsumura.ryo@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [bugfix]\"make installcheck\" could not work in PGXS" } ]
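As a rough illustration of the interim workaround mentioned in this thread (setting PG_REGRESS by hand until Makefile.global is fixed), the TAP test itself can repair the variable before PostgresNode uses it. This is only a sketch, not the proposed fix: the pgxs/src/test/regress/pg_regress location under "pg_config --pkglibdir" is an assumption about where a standard "make install" places the binary, and the test body is a made-up minimal example.

# t/001_foo_test.pl -- hedged workaround sketch, not the patch under discussion
use strict;
use warnings;

BEGIN
{
	# If the PGXS-supplied PG_REGRESS points at a path that does not exist,
	# fall back to the copy installed under pg_config --pkglibdir
	# (assumed install location; adjust for your installation).
	if (!defined $ENV{PG_REGRESS} || !-x $ENV{PG_REGRESS})
	{
		chomp(my $pkglibdir = `pg_config --pkglibdir`);
		$ENV{PG_REGRESS} = "$pkglibdir/pgxs/src/test/regress/pg_regress";
	}
}

use PostgresNode;
use TestLib;
use Test::More tests => 1;

my $node = PostgresNode->get_new_node('primary');
$node->init;
$node->start;
ok(1, 'node initialized and started with the corrected PG_REGRESS');
$node->stop;
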
[ { "msg_contents": "Hello.\n\nPostgreSQL server accepts only one CRL file. It is easy to expand\nbe_tls_init to accept a directory set in ssl_crl_file. But I'm not\nsure CRL is actually even utilized in the field so that could ends\nwith just bloating the documentation.\n\nIs it work doing?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 31 Jul 2020 17:39:11 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Is it worth accepting multiple CRLs?" }, { "msg_contents": "A CA may issue a CRL infrequently, but issue a delta-CRL frequently. Does the logic support this properly?\n\nPersonal email. hbhotz@oxy.edu\n\n> On Jul 31, 2020, at 1:39 AM, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> Hello.\n> \n> PostgreSQL server accepts only one CRL file. It is easy to expand\n> be_tls_init to accept a directory set in ssl_crl_file. But I'm not\n> sure CRL is actually even utilized in the field so that could ends\n> with just bloating the documentation.\n> \n> Is it work doing?\n> \n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n> \n> \n\n\n\n", "msg_date": "Fri, 31 Jul 2020 05:53:53 -0700", "msg_from": "Henry B Hotz <hbhotz@oxy.edu>", "msg_from_op": false, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "Greetings,\n\n* Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> PostgreSQL server accepts only one CRL file. It is easy to expand\n> be_tls_init to accept a directory set in ssl_crl_file. But I'm not\n> sure CRL is actually even utilized in the field so that could ends\n> with just bloating the documentation.\n> \n> Is it work doing?\n\nYes, CRLs are absolutely used in the field and having this would be\nnice.\n\nThanks,\n\nStephen", "msg_date": "Fri, 31 Jul 2020 09:00:14 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "At Fri, 31 Jul 2020 05:53:53 -0700, Henry B Hotz <hbhotz@oxy.edu> wrote in \n> A CA may issue a CRL infrequently, but issue a delta-CRL frequently. Does the logic support this properly?\n\nIf you are talking about regsitering new revokations while server is\nrunning, it checks newer CRLs upon each lookup according to the\ndocumentation [1], so a new Delta-CRL can be added after server\nstart. If server restart is allowed, the CRL file specified by\nssl_crl_file can contain multiple CRLs by just concatenation.\n\n[1]: https://www.openssl.org/docs/man1.1.1/man3/X509_LOOKUP_hash_dir.html\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 03 Aug 2020 16:19:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "At Fri, 31 Jul 2020 09:00:14 -0400, Stephen Frost <sfrost@snowman.net> wrote in \n> Greetings,\n> \n> * Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> > PostgreSQL server accepts only one CRL file. It is easy to expand\n> > be_tls_init to accept a directory set in ssl_crl_file. But I'm not\n> > sure CRL is actually even utilized in the field so that could ends\n> > with just bloating the documentation.\n> > \n> > Is it work doing?\n> \n> Yes, CRLs are absolutely used in the field and having this would be\n> nice.\n\nThanks for the opinion. 
I'll continue working on this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 03 Aug 2020 16:20:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "Uggg.\n\nAt Mon, 03 Aug 2020 16:19:37 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 31 Jul 2020 05:53:53 -0700, Henry B Hotz <hbhotz@oxy.edu> wrote in \n> > A CA may issue a CRL infrequently, but issue a delta-CRL frequently. Does the logic support this properly?\n> \n> If you are talking about regsitering new revokations while server is\n> running, it checks newer CRLs upon each lookup according to the\n> documentation [1], so a new Delta-CRL can be added after server\n> start. If server restart is allowed, the CRL file specified by\n\nI didin't know that ssl files are reloaded by SIGHUP (pg_ctl\nreload). So the ssl_crl_file is also reloaded on server reload.\n\n> ssl_crl_file can contain multiple CRLs by just concatenation.\n> \n> [1]: https://www.openssl.org/docs/man1.1.1/man3/X509_LOOKUP_hash_dir.html\n\nStill on-demand loading is the advantage of the hashed directory\nmethod. I'll continue working..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 03 Aug 2020 18:17:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "At Mon, 03 Aug 2020 16:20:40 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Thanks for the opinion. I'll continue working on this.\n\nThis is it, but.. \n\nLooking closer I realized that certificates are verified in each\nbackend so CRL cache doesn't work at all for the hashed directory\nmethod. Therefore, all CRL files relevant to a certificate to be\nverfied are loaded every time a backend starts.\n\nThe only advantage of this is avoiding irrelevant CRLs from being\nloaded in exchange of loading relevant CRLs at every session\nstart. Session startup gets slower by many delta CRLs from the same\nCA.\n\nSeems far from promising.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 04 Aug 2020 17:37:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "Greetings,\n\n* Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> At Mon, 03 Aug 2020 16:20:40 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > Thanks for the opinion. I'll continue working on this.\n> \n> This is it, but.. \n\nThanks!\n\n> Looking closer I realized that certificates are verified in each\n> backend so CRL cache doesn't work at all for the hashed directory\n> method. Therefore, all CRL files relevant to a certificate to be\n> verfied are loaded every time a backend starts.\n> \n> The only advantage of this is avoiding irrelevant CRLs from being\n> loaded in exchange of loading relevant CRLs at every session\n> start. Session startup gets slower by many delta CRLs from the same\n> CA.\n> \n> Seems far from promising.\n\nI agree that it's not ideal, but I don't know that this is a reason to\nnot move forward with this feature..?\n\nWe could certainly have a later patch which improves this in some way\n(though exactly how isn't clear... 
if we move the CRL loading into\npostmaster then we'd have to load *all* of them, and then we'd still\nneed to check if they've changed since we loaded them, and presumably\nhave some way to signal the postmaster to update its set from time to\ntime..), but that can be a future effort.\n\nI took a quick look through the patch and it seemed pretty straight\nforward to me and a good improvement.\n\nWould love to hear other thoughts. I hope you'll submit this for the\nSeptember CF and ping me when you do and I'll see if I can get it\ncommitted.\n\nThanks!\n\nStephen", "msg_date": "Sat, 15 Aug 2020 13:18:22 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "Hello.\n\nAt Sat, 15 Aug 2020 13:18:22 -0400, Stephen Frost <sfrost@snowman.net> wrote in \n> > Looking closer I realized that certificates are verified in each\n> > backend so CRL cache doesn't work at all for the hashed directory\n> > method. Therefore, all CRL files relevant to a certificate to be\n> > verfied are loaded every time a backend starts.\n> > \n> > The only advantage of this is avoiding irrelevant CRLs from being\n> > loaded in exchange of loading relevant CRLs at every session\n> > start. Session startup gets slower by many delta CRLs from the same\n> > CA.\n> > \n> > Seems far from promising.\n> \n> I agree that it's not ideal, but I don't know that this is a reason to\n> not move forward with this feature..?\n\nSince one of the significant advantage of the directory method is\ndifferential loading of new CRLs. But actually it has other advanges\nlike easier file handling and not needing server reload.\n\n> We could certainly have a later patch which improves this in some way\n> (though exactly how isn't clear... if we move the CRL loading into\n> postmaster then we'd have to load *all* of them, and then we'd still\n> need to check if they've changed since we loaded them, and presumably\n> have some way to signal the postmaster to update its set from time to\n> time..), but that can be a future effort.\n> \n> I took a quick look through the patch and it seemed pretty straight\n> forward to me and a good improvement.\n> \n> Would love to hear other thoughts. I hope you'll submit this for the\n> September CF and ping me when you do and I'll see if I can get it\n> committed.\n\nThank you very much. I'll do that after some polishing.\n\nA near-by discussion about OpenSSL3.0 conflicts with this but it's\neasy to follow.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 18 Aug 2020 16:43:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "At Tue, 18 Aug 2020 16:43:47 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Thank you very much. I'll do that after some polishing.\n> \n> A near-by discussion about OpenSSL3.0 conflicts with this but it's\n> easy to follow.\n\nRebased. Fixed bogus tests and strange tentative API change of\nSSLServer.pm. Corrected a (maybe) spelling mistake. I'm going to\nregister this to the coming CF.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 31 Aug 2020 18:03:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth accepting multiple CRLs?" 
}, { "msg_contents": "On Mon, Aug 31, 2020 at 06:03:02PM +0900, Kyotaro Horiguchi wrote:\n> Rebased. Fixed bogus tests and strange tentative API change of\n> SSLServer.pm. Corrected a (maybe) spelling mistake. I'm going to\n> register this to the coming CF.\n\nStephen, are you planning to look at that? I know that you are not\nregistered as a reviewer, but you mentioned upthread that you may be\nable to look at it.\n\nThe changes in libpq's backend/frontend are rather simple, but the\ndocs and the changes in the TAP tests require a careful lookup. \n--\nMichael", "msg_date": "Thu, 17 Sep 2020 12:06:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "On 2020-08-31 11:03, Kyotaro Horiguchi wrote:\n> At Tue, 18 Aug 2020 16:43:47 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>> Thank you very much. I'll do that after some polishing.\n>>\n>> A near-by discussion about OpenSSL3.0 conflicts with this but it's\n>> easy to follow.\n> \n> Rebased. Fixed bogus tests and strange tentative API change of\n> SSLServer.pm. Corrected a (maybe) spelling mistake. I'm going to\n> register this to the coming CF.\n\nOther systems that offer both a CRL file and a CRL directory usually \nspecify those using two separate configuration settings. Examples:\n\nhttps://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_ssl_crlpath\nhttps://httpd.apache.org/docs/current/mod/mod_ssl.html#sslcarevocationpath\n\nThese are then presumably both passed to X509_STORE_load_locations(), \nwhich supports specifying a file and directory concurrently.\n\nI think that would be a preferable approach. In practical terms, it \nwould allow a user to introduce the directory method gradually without \nhaving to convert the existing CRL file at the same time.\n\n\n", "msg_date": "Fri, 15 Jan 2021 08:56:27 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "At Fri, 15 Jan 2021 08:56:27 +0100, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote in \n> On 2020-08-31 11:03, Kyotaro Horiguchi wrote:\n> > At Tue, 18 Aug 2020 16:43:47 +0900 (JST), Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote in\n> >> Thank you very much. I'll do that after some polishing.\n> >>\n> >> A near-by discussion about OpenSSL3.0 conflicts with this but it's\n> >> easy to follow.\n> > Rebased. Fixed bogus tests and strange tentative API change of\n> > SSLServer.pm. Corrected a (maybe) spelling mistake. I'm going to\n> > register this to the coming CF.\n> \n> Other systems that offer both a CRL file and a CRL directory usually\n> specify those using two separate configuration settings. Examples:\n> \n> https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_ssl_crlpath\n> https://httpd.apache.org/docs/current/mod/mod_ssl.html#sslcarevocationpath\n> \n> These are then presumably both passed to X509_STORE_load_locations(),\n> which supports specifying a file and directory concurrently.\n> \n> I think that would be a preferable approach. In practical terms, it\n> would allow a user to introduce the directory method gradually without\n> having to convert the existing CRL file at the same time.\n\nThank you for the information. The only reason for sharing the same\nvariable for both file and directory is to avoid additional variable\nonly for this reason. 
I'll post a new version where new GUC\nssl_crl_path is added.\n\nBy the way we can do the same thing on CA file/dir, but I personally\nthink that the benefit from the specify-by-directory for CA files is\nfar less than CRL files. So I'm not going to do this for CA files for\nnow.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 19 Jan 2021 09:17:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "On 2021-01-19 01:17, Kyotaro Horiguchi wrote:\n> Thank you for the information. The only reason for sharing the same\n> variable for both file and directory is to avoid additional variable\n> only for this reason. I'll post a new version where new GUC\n> ssl_crl_path is added.\n\nOkay, I look forward to that patch.\n\n> By the way we can do the same thing on CA file/dir, but I personally\n> think that the benefit from the specify-by-directory for CA files is\n> far less than CRL files. So I'm not going to do this for CA files for\n> now.\n\nYeah, that seems not so commonly used.\n\n\n", "msg_date": "Tue, 19 Jan 2021 09:01:35 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "At Tue, 19 Jan 2021 09:17:34 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> By the way we can do the same thing on CA file/dir, but I personally\n> think that the benefit from the specify-by-directory for CA files is\n> far less than CRL files. So I'm not going to do this for CA files for\n> now.\n\nThis is it. A new guc ssl_crl_dir and connection option crldir are\nadded.\n\nOne problem raised upthread is the footprint for test is quite large\nbecause all certificate and key files are replaced by this patch. I\nthink we can shrink the footprint by generating that files on-demand\nbut that needs openssl frontend to be installed on the development\nenvironment.\n\nIf we agree that requirement, I'm going to go that direction.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 19 Jan 2021 17:32:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "On 2021-01-19 09:32, Kyotaro Horiguchi wrote:\n> At Tue, 19 Jan 2021 09:17:34 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>> By the way we can do the same thing on CA file/dir, but I personally\n>> think that the benefit from the specify-by-directory for CA files is\n>> far less than CRL files. So I'm not going to do this for CA files for\n>> now.\n> \n> This is it. A new guc ssl_crl_dir and connection option crldir are\n> added.\n\nThis looks pretty good to me overall.\n\nYou need to update the expected result of the postgres_fdw test.\n\nAlso check your patch for whitespace errors with git diff --check or \nsimilar.\n\n> One problem raised upthread is the footprint for test is quite large\n> because all certificate and key files are replaced by this patch. I\n> think we can shrink the footprint by generating that files on-demand\n> but that needs openssl frontend to be installed on the development\n> environment.\n\nI don't understand why you need to recreate all these files. All your \npatch should contain are the new *.r0 files that are computed from the \nexisting *.crl files. 
Nothing else should change, AIUI.\n\nSome of the makefile rules for generating the CRL files need some \nrefinement. In\n\n+ssl/root+server-crldir: ssl/server.crl\n+ mkdir ssl/root+server-crldir\n+ cp ssl/server.crl ssl/root+server-crldir/`openssl crl -hash -noout \n-in ssl/server.crl`.r0\n+ cp ssl/root.crl ssl/root+server-crldir/`openssl crl -hash -noout -in \nssl/root.crl`.r0\n+ssl/root+client-crldir: ssl/client.crl\n+ mkdir ssl/root+client-crldir\n+ cp ssl/client.crl ssl/root+client-crldir/`openssl crl -hash -noout \n-in ssl/client.crl`.r0\n+ cp ssl/root.crl ssl/root+client-crldir/`openssl crl -hash -noout -in \nssl/root.crl`.r0\n\nthe rules should also have a dependency on ssl/root.crl in addition to \nssl/server.crl.\n\nBy the way:\n\n- print $sslconf \"ssl_crl_file='root+client.crl'\\n\";\n+ print $sslconf \"ssl_crl_file='$crlfile'\\n\" if (defined $crlfile);\n+ print $sslconf \"ssl_crl_dir='$crldir'\\n\" if (defined $crldir);\n\nTrailing \"if\" doesn't need parentheses.\n\n\n", "msg_date": "Sat, 30 Jan 2021 22:20:19 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "At Sat, 30 Jan 2021 22:20:19 +0100, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote in \n> On 2021-01-19 09:32, Kyotaro Horiguchi wrote:\n> > At Tue, 19 Jan 2021 09:17:34 +0900 (JST), Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote in\n> >> By the way we can do the same thing on CA file/dir, but I personally\n> >> think that the benefit from the specify-by-directory for CA files is\n> >> far less than CRL files. So I'm not going to do this for CA files for\n> >> now.\n> > This is it. A new guc ssl_crl_dir and connection option crldir are\n> > added.\n> \n> This looks pretty good to me overall.\n\nThanks!\n\n> You need to update the expected result of the postgres_fdw test.\n\nOops. Fixed.\n\n> Also check your patch for whitespace errors with git diff --check or\n> similar.\n\nSorry for forgetting that. I found an extra new line in\nbe-secure-openssl.c and remved it.\n\n> > One problem raised upthread is the footprint for test is quite large\n> > because all certificate and key files are replaced by this patch. I\n> > think we can shrink the footprint by generating that files on-demand\n> > but that needs openssl frontend to be installed on the development\n> > environment.\n> \n> I don't understand why you need to recreate all these files. All your\n> patch should contain are the new *.r0 files that are computed from the\n> existing *.crl files. Nothing else should change, AIUI.\n\nAh. If I ran make with this patch, it complains of\nssl/root_ca-certindex lacking and I ran \"make clean\" to avoid the\ncomplaint. Instead, I created the additional crl directories by\nmanually executing the recipes of the additional rules.\n\nv3: 41 files changed, 496 insertions(+), 255 deletions(-)\nv4: 21 files changed, 258 insertions(+), 18 deletions(-)\n\nI checked that 001_ssltests.pl succedds both with the preexisting ssl/\nfiles and with the files created by \"make sslfiles\" after \"make\nsslfiles-clean\".\n\n> Some of the makefile rules for generating the CRL files need some\n> refinement. 
In\n> \n> +ssl/root+server-crldir: ssl/server.crl\n> + mkdir ssl/root+server-crldir\n> + cp ssl/server.crl ssl/root+server-crldir/`openssl crl -hash -noout\n> -in ssl/server.crl`.r0\n> + cp ssl/root.crl ssl/root+server-crldir/`openssl crl -hash -noout -in\n> ssl/root.crl`.r0\n> +ssl/root+client-crldir: ssl/client.crl\n> + mkdir ssl/root+client-crldir\n> + cp ssl/client.crl ssl/root+client-crldir/`openssl crl -hash -noout\n> -in ssl/client.crl`.r0\n> + cp ssl/root.crl ssl/root+client-crldir/`openssl crl -hash -noout -in\n> ssl/root.crl`.r0\n> \n> the rules should also have a dependency on ssl/root.crl in addition to\n> ssl/server.crl.\n\nRight. Added.\n\n> By the way:\n> \n> - print $sslconf \"ssl_crl_file='root+client.crl'\\n\";\n> + print $sslconf \"ssl_crl_file='$crlfile'\\n\" if (defined $crlfile);\n> + print $sslconf \"ssl_crl_dir='$crldir'\\n\" if (defined $crldir);\n> \n> Trailing \"if\" doesn't need parentheses.\n\nI know. However I preferred to have them at the time, I don't have a\nstrong opinion about how it should be. Ripped off them.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 01 Feb 2021 11:42:32 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "The commit fe61df7f82 shot down this.\n\nThis patch allows a new GUC ssl_crl_dir and a new libpq connection\noption sslcrldir to specify CRL directory, which stores multiple files\nthat contains one CRL. With that method server loads only CRLs for the\nCA of the certificate being validated.\n\nAlong with rebasing, the documentation is slightly reworded.\n\nrevocation list (CRL). Certificates listed in this file, if it\n exists, will be rejected while attempting to authenticate the\n- server's certificate. If both sslcrl and sslcrldir are not set,\n- this setting is assumed to be\n+ server's certificate. If neither sslcrl sslcrldir is set, this\n+ setting is assumed to be\n <filename>~/.postgresql/root.crl</filename>. See\n\nAnd added a line for the new variable in postgresql.conf.sample.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 17 Feb 2021 13:05:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "On 2021-02-17 05:05, Kyotaro Horiguchi wrote:\n> The commit fe61df7f82 shot down this.\n> \n> This patch allows a new GUC ssl_crl_dir and a new libpq connection\n> option sslcrldir to specify CRL directory, which stores multiple files\n> that contains one CRL. With that method server loads only CRLs for the\n> CA of the certificate being validated.\n> \n> Along with rebasing, the documentation is slightly reworded.\n\nCommitted this.\n\nI changed the documentation a bit. Instead of having a separate section \ndescribing the CRL options, I put that information directly into the \nlibpq and GUC sections. Some of the information, such as that the \ndirectory files are loaded on demand, isn't so obviously useful in the \nlibpq case, so I found that a bit confusing. Also, I got the impression \nthat the hashed directory format is sort of internal to OpenSSL, and \nthere are several versions of that format, so I didn't want to copy over \nthe description of these internals. Instead, I referred to the openssl \nrehash/c_rehash commands for information. 
If we get support for \nnon-OpenSSL providers, we'll probably have to revisit this.\n\n\n\n", "msg_date": "Thu, 18 Feb 2021 08:24:23 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Is it worth accepting multiple CRLs?" }, { "msg_contents": "Thanks for committing this!\n\nAt Thu, 18 Feb 2021 08:24:23 +0100, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote in \n> On 2021-02-17 05:05, Kyotaro Horiguchi wrote:\n> > The commit fe61df7f82 shot down this.\n> > This patch allows a new GUC ssl_crl_dir and a new libpq connection\n> > option sslcrldir to specify CRL directory, which stores multiple files\n> > that contains one CRL. With that method server loads only CRLs for the\n> > CA of the certificate being validated.\n> > Along with rebasing, the documentation is slightly reworded.\n> \n> Committed this.\n> \n> I changed the documentation a bit. Instead of having a separate\n> section describing the CRL options, I put that information directly\n> into the libpq and GUC sections. Some of the information, such as\n> that the directory files are loaded on demand, isn't so obviously\n> useful in the libpq case, so I found that a bit confusing. Also, I\n\nAgreed.\n\n> got the impression that the hashed directory format is sort of\n> internal to OpenSSL, and there are several versions of that format, so\n> I didn't want to copy over the description of these internals.\n> Instead, I referred to the openssl rehash/c_rehash commands for\n> information. If we get support for non-OpenSSL providers, we'll\n> probably have to revisit this.\n\nThanks. I'm fine with that, either.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 18 Feb 2021 17:06:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth accepting multiple CRLs?" } ]
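A minimal usage sketch of the feature as committed above. The directory layout and host names are invented; the only parts taken from the thread are the ssl_crl_dir GUC, the sslcrldir libpq option, and the requirement that the directory be in OpenSSL's hashed format (openssl rehash, or c_rehash on older OpenSSL versions).

# Build a hashed CRL directory from individual per-CA CRL files.
mkdir -p /etc/postgresql/crl
cp root.crl intermediate.crl /etc/postgresql/crl/
openssl rehash /etc/postgresql/crl     # creates the <hash>.r0 links

# Server side (postgresql.conf): point the new GUC at the directory.
#   ssl_crl_dir = '/etc/postgresql/crl'
# As noted in the thread, SSL files are re-read on reload, and individual
# CRLs in the directory are loaded on demand during certificate verification.

# Client side: the corresponding libpq connection option.
psql "host=db.example.com sslmode=verify-full sslrootcert=root.crt sslcrldir=/home/app/.postgresql/crl"
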
[ { "msg_contents": "Hi\n\nIn a recent audit, I noticed that application developers have a tendency to \nabuse the distinct clause. For instance they use an ORM and add a distinct at \nthe top level just because they don't know the cost it has, or they don't know \nthat using EXISTS is a better way to express their queries than doing JOINs \n(or worse, they can't do better).\n\nThey thus have this kind of queries (considering tbl1 has a PK of course):\nSELECT DISTINCT * FROM tbl1;\nSELECT DISTINCT * FROM tbl1 ORDER BY a;\nSELECT DISTINCT tbl1.* FROM tbl1\n\tJOIN tbl2 ON tbl2.a = tbl1.id;\n\nThese can be transformed into:\nSELECT * FROM tbl1 ORDER BY *;\nSELECT * FROM tbl1 ORDER BY a;\nSELECT tbl1.* FROM tbl1 SEMI-JOIN tbl2 ON tbl2.a = tbl1.id ORDER BY tbl1.*;\n\nThe attached patch does that.\nI added extra safeties in several place just to be sure I don't touch \nsomething I can not handle, but I may have been very candid with the distinct \nto sort transformation.\nThe cost of this optimization is quite low : for queries that don't have any \ndistinct, it's just one if. If there is a distinct, we iterate once through \nevery target, then we fetch the PK and iterate through the DISTINCT clause \nfields. If it is possible to optimize, we then iterate through the JOINs.\n\nAny comment on this would be more than welcome!\n\nRegards\n\n Pierre", "msg_date": "Fri, 31 Jul 2020 10:41:19 +0200", "msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>", "msg_from_op": true, "msg_subject": "[PATCH] Remove useless distinct clauses" }, { "msg_contents": "Hi Pierre,\n\nOn Fri, Jul 31, 2020 at 2:11 PM Pierre Ducroquet <p.psql@pinaraf.info> wrote:\n>\n> Hi\n>\n> In a recent audit, I noticed that application developers have a tendency to\n> abuse the distinct clause. For instance they use an ORM and add a distinct at\n> the top level just because they don't know the cost it has, or they don't know\n> that using EXISTS is a better way to express their queries than doing JOINs\n> (or worse, they can't do better).\n>\n> They thus have this kind of queries (considering tbl1 has a PK of course):\n> SELECT DISTINCT * FROM tbl1;\n> SELECT DISTINCT * FROM tbl1 ORDER BY a;\n> SELECT DISTINCT tbl1.* FROM tbl1\n> JOIN tbl2 ON tbl2.a = tbl1.id;\n>\n> These can be transformed into:\n> SELECT * FROM tbl1 ORDER BY *;\n\nWe don't need an ORDER BY here since there's primary key on tbl1 and\nDISTINCT doesn't ensure ordered result.\n\n> SELECT * FROM tbl1 ORDER BY a;\n> SELECT tbl1.* FROM tbl1 SEMI-JOIN tbl2 ON tbl2.a = tbl1.id ORDER BY tbl1.*;\n>\n> The attached patch does that.\n> I added extra safeties in several place just to be sure I don't touch\n> something I can not handle, but I may have been very candid with the distinct\n> to sort transformation.\n> The cost of this optimization is quite low : for queries that don't have any\n> distinct, it's just one if. If there is a distinct, we iterate once through\n> every target, then we fetch the PK and iterate through the DISTINCT clause\n> fields. 
If it is possible to optimize, we then iterate through the JOINs.\n\nWe are already discussing this feature at\nhttps://www.postgresql.org/message-id/flat/CAKJS1f-wH83Fi2coEVNUWFxOGQ4BJRRTGqDMvidCoiR9WEwxsw%40mail.gmail.com#56a08b441cc61afaf85c6232c5d40a3f.\nYou are welcome to contribute your ideas/code/review on that thread.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 31 Jul 2020 15:06:31 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Remove useless distinct clauses" }, { "msg_contents": "On Fri, 31 Jul 2020 at 20:41, Pierre Ducroquet <p.psql@pinaraf.info> wrote:\n>\n> In a recent audit, I noticed that application developers have a tendency to\n> abuse the distinct clause. For instance they use an ORM and add a distinct at\n> the top level just because they don't know the cost it has, or they don't know\n> that using EXISTS is a better way to express their queries than doing JOINs\n> (or worse, they can't do better).\n>\n> They thus have this kind of queries (considering tbl1 has a PK of course):\n> SELECT DISTINCT * FROM tbl1;\n> SELECT DISTINCT * FROM tbl1 ORDER BY a;\n> SELECT DISTINCT tbl1.* FROM tbl1\n> JOIN tbl2 ON tbl2.a = tbl1.id;\n\nThis is a common anti-pattern that I used to see a couple of jobs ago.\nWhat seemed to happen was that someone would modify some query or a\nview to join in an additional table to fetch some information that was\nnow required. At some later time, there'd be a bug report to say that\nthe query is returning certain records more than once. The\ndeveloper's solution was to add DISTINCT, instead of figuring out that\nthe join that was previously added missed some column from the join\nclause.\n\n> These can be transformed into:\n> SELECT * FROM tbl1 ORDER BY *;\n> SELECT * FROM tbl1 ORDER BY a;\n> SELECT tbl1.* FROM tbl1 SEMI-JOIN tbl2 ON tbl2.a = tbl1.id ORDER BY tbl1.*;\n>\n> The attached patch does that.\n\nUnfortunately, there are quite a few issues with what you have:\n\nFirst off, please see\nhttps://www.postgresql.org/docs/devel/source-format.html about how we\nformat the source code. Please pay attention to how we do code\ncomments and braces on a separate line.\n\nAnother problem is that we shouldn't be really wiping out the distinct\nclause like you are with \"root->parse->distinctClause = NULL;\" there's\nsome discussion in [1] about that.\n\nAlso, the processing of the join tree where you switch inner joins to\nsemi joins looks broken. This would require much more careful and\nrecursive processing to do properly. However, I'm not really sure\nwhat that is as I'm not sure of all the cases that you can optimise\nthis way, and more importantly, which ones you can't. 
There's also no\nhope of anyone else knowing this as you've not left any comments about\nwhy what you're doing is valid.\n\nIf you want an example of what can cause what you have to brake:\n\ncreate table t (a int primary key);\nexplain select distinct a from t cross join pg_class cross join pg_attribute;\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Nested Loop Semi Join (cost=0.15..37682.53 rows=999600 width=4)\n -> Nested Loop (cost=0.15..12595.31 rows=999600 width=4)\n -> Index Only Scan using t_pkey on t (cost=0.15..82.41\nrows=2550 width=4)\n -> Materialize (cost=0.00..18.88 rows=392 width=0)\n -> Seq Scan on pg_class (cost=0.00..16.92 rows=392 width=0)\n -> Materialize (cost=0.00..105.17 rows=3145 width=0)\n -> Seq Scan on pg_attribute (cost=0.00..89.45 rows=3145 width=0)\n(7 rows)\n\n\n-- Note the join to pg_attribute remains a cross join.\ninsert into t values(1);\n-- the following should only return 1 row. It returns many more than that.\nselect distinct a from t cross join pg_class cross join pg_attribute;\n\nI can't figure out why you're doing this either:\n\n+ /**\n+ * If there was no sort clause, we change the distinct into a sort clause.\n+ */\n+ if (!root->parse->sortClause)\n+ root->parse->sortClause = root->parse->distinctClause;\n\nIt's often better to say \"why\" rather than \"what\" when it comes to\ncode comments. It's pretty easy to see \"what\". It's the \"why\" part\nthat people more often get stuck on. Although, sometimes what you're\ndoing is complex and it does need a mention of \"what\". That's not the\ncase for the above though.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAExHW5t7ALZmaN8gL5DZV%2Ben5G%3D4QTbKSYhBrXnSrKgCYNr_AA%40mail.gmail.com\n\n\n", "msg_date": "Tue, 15 Sep 2020 22:57:04 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Remove useless distinct clauses" }, { "msg_contents": "On Tue, Sep 15, 2020 at 10:57:04PM +1200, David Rowley wrote:\n> On Fri, 31 Jul 2020 at 20:41, Pierre Ducroquet <p.psql@pinaraf.info> wrote:\n> >\n> > In a recent audit, I noticed that application developers have a tendency to\n> > abuse the distinct clause. For instance they use an ORM and add a distinct at\n> > the top level just because they don't know the cost it has, or they don't know\n> > that using EXISTS is a better way to express their queries than doing JOINs\n> > (or worse, they can't do better).\n> >\n> > They thus have this kind of queries (considering tbl1 has a PK of course):\n> > SELECT DISTINCT * FROM tbl1;\n> > SELECT DISTINCT * FROM tbl1 ORDER BY a;\n> > SELECT DISTINCT tbl1.* FROM tbl1\n> > JOIN tbl2 ON tbl2.a = tbl1.id;\n> \n> This is a common anti-pattern that I used to see a couple of jobs ago.\n> What seemed to happen was that someone would modify some query or a\n> view to join in an additional table to fetch some information that was\n> now required. At some later time, there'd be a bug report to say that\n> the query is returning certain records more than once. The\n> developer's solution was to add DISTINCT, instead of figuring out that\n> the join that was previously added missed some column from the join\n> clause.\n\nI can 100% imagine that happening. 
:-(\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Sat, 19 Sep 2020 20:46:49 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Remove useless distinct clauses" }, { "msg_contents": "On Tue, Sep 15, 2020 at 10:57:04PM +1200, David Rowley wrote:\n> Unfortunately, there are quite a few issues with what you have:\n\nThis review has not been answered after two weeks, so this is marked\nas RwF.\n--\nMichael", "msg_date": "Wed, 30 Sep 2020 16:07:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Remove useless distinct clauses" } ]
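For concreteness, the anti-pattern this thread discusses and the hand-written rewrite the patch tried to derive automatically, using made-up table names (tbl1 has a primary key, so the two queries return the same set of rows):

CREATE TABLE tbl1 (id int PRIMARY KEY, payload text);
CREATE TABLE tbl2 (a int REFERENCES tbl1 (id), note text);

-- Anti-pattern: DISTINCT papering over duplicates introduced by the join.
SELECT DISTINCT tbl1.*
FROM tbl1
JOIN tbl2 ON tbl2.a = tbl1.id;

-- Equivalent semi-join form: each tbl1 row can appear at most once,
-- so no DISTINCT (and no sort/hash step to deduplicate) is needed.
SELECT tbl1.*
FROM tbl1
WHERE EXISTS (SELECT 1 FROM tbl2 WHERE tbl2.a = tbl1.id);
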
[ { "msg_contents": "Hi hackers,\n\nEvery time I have to look up what kinds of operations each index type is\nsuitable for, I get annoyed by the index types page being virtually\nunskimmable due to not having headings for each index type.\n\nAttached is a patch that adds <sect2> tags for each index type to make\nit easier to see where the description of each one starts.\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law", "msg_date": "Fri, 31 Jul 2020 10:30:59 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "[PATCH] Add section headings to index types doc" }, { "msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n\n> Hi hackers,\n>\n> Every time I have to look up what kinds of operations each index type is\n> suitable for, I get annoyed by the index types page being virtually\n> unskimmable due to not having headings for each index type.\n>\n> Attached is a patch that adds <sect2> tags for each index type to make\n> it easier to see where the description of each one starts.\n\nAdded to the next commitfest:\n\nhttps://commitfest.postgresql.org/29/2665/\n\nAlso, for easier review, here's the `git diff -w` output, since the\n<sect2> tags caused most of the page to have to be renidented.\n\nTangentially, does anyone know of a tool to strip whitespace changes\nfrom an existing diff, as if it had been generated with `-w` in the\nfirst place?\n\ndiff --git a/doc/src/sgml/indices.sgml b/doc/src/sgml/indices.sgml\nindex 28adaba72d..332d161547 100644\n--- a/doc/src/sgml/indices.sgml\n+++ b/doc/src/sgml/indices.sgml\n@@ -122,6 +122,9 @@\n B-tree indexes, which fit the most common situations.\n </para>\n \n+ <sect2 id=\"indexes-types-btree\">\n+ <title>B-tree</title>\n+\n <para>\n <indexterm>\n <primary>index</primary>\n@@ -172,6 +175,10 @@\n This is not always faster than a simple scan and sort, but it is\n often helpful.\n </para>\n+ </sect2>\n+\n+ <sect2 id=\"indexes-types-hash\">\n+ <title>Hash</title>\n \n <para>\n <indexterm>\n@@ -191,6 +198,10 @@\n CREATE INDEX <replaceable>name</replaceable> ON <replaceable>table</replaceable> USING HASH (<replaceable>column</replaceable>);\n </synopsis>\n </para>\n+ </sect2>\n+\n+ <sect2 id=\"indexes-type-gist\">\n+ <title>GiST</title>\n \n <para>\n <indexterm>\n@@ -246,6 +257,10 @@\n In <xref linkend=\"gist-builtin-opclasses-table\"/>, operators that can be\n used in this way are listed in the column <quote>Ordering Operators</quote>.\n </para>\n+ </sect2>\n+\n+ <sect2 id=\"indexes-type-spgist\">\n+ <title>SP-GiST</title>\n \n <para>\n <indexterm>\n@@ -286,6 +301,10 @@\n corresponding operator is specified in the <quote>Ordering Operators</quote>\n column in <xref linkend=\"spgist-builtin-opclasses-table\"/>.\n </para>\n+ </sect2>\n+\n+ <sect2 id=\"indexes-types-gin\">\n+ <title>GIN</title>\n \n <para>\n <indexterm>\n@@ -327,6 +346,10 @@\n classes are available in the <literal>contrib</literal> collection or as separate\n projects. 
For more information see <xref linkend=\"gin\"/>.\n </para>\n+ </sect2>\n+\n+ <sect2 id=\"indexes-types-brin\">\n+ <title>BRIN</title>\n \n <para>\n <indexterm>\n@@ -360,6 +383,7 @@\n documented in <xref linkend=\"brin-builtin-opclasses-table\"/>.\n For more information see <xref linkend=\"brin\"/>.\n </para>\n+ </sect2>\n </sect1>\n \n\n- ilmari \n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n\n", "msg_date": "Mon, 03 Aug 2020 12:32:17 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add section headings to index types doc" }, { "msg_contents": "On Mon, Aug 3, 2020 at 1:32 PM Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>\nwrote:\n\n> ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n>\n> > Hi hackers,\n> >\n> > Every time I have to look up what kinds of operations each index type is\n> > suitable for, I get annoyed by the index types page being virtually\n> > unskimmable due to not having headings for each index type.\n> >\n> > Attached is a patch that adds <sect2> tags for each index type to make\n> > it easier to see where the description of each one starts.\n>\n> Added to the next commitfest:\n>\n> https://commitfest.postgresql.org/29/2665/\n>\n> Also, for easier review, here's the `git diff -w` output, since the\n> <sect2> tags caused most of the page to have to be renidented.\n>\n> Tangentially, does anyone know of a tool to strip whitespace changes\n> from an existing diff, as if it had been generated with `-w` in the\n> first place?\n>\n\nI think you can do something like:\n\ncombinediff -w 0001-Add-section-headers-to-index-types-doc.patch /dev/null\n\n(combinediff requires two diffs, but one can be /dev/null)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, Aug 3, 2020 at 1:32 PM Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n\n> Hi hackers,\n>\n> Every time I have to look up what kinds of operations each index type is\n> suitable for, I get annoyed by the index types page being virtually\n> unskimmable due to not having headings for each index type.\n>\n> Attached is a patch that adds <sect2> tags for each index type to make\n> it easier to see where the description of each one starts.\n\nAdded to the next commitfest:\n\nhttps://commitfest.postgresql.org/29/2665/\n\nAlso, for easier review, here's the `git diff -w` output, since the\n<sect2> tags caused most of the page to have to be renidented.\n\nTangentially, does anyone know of a tool to strip whitespace changes\nfrom an existing diff, as if it had been generated with `-w` in the\nfirst place?I think you can do something like:combinediff -w 0001-Add-section-headers-to-index-types-doc.patch  /dev/null(combinediff requires two diffs, but one can be /dev/null)--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Mon, 3 Aug 2020 13:56:50 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add section headings to index types doc" }, { "msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> Also, for easier review, here's the `git diff -w` output, since the\n> <sect2> tags caused most of the page to have to be renidented.\n\nTBH, I'd suggest just not being 
anal about whether the indentation\nnesting is perfect ;-). There are certainly plenty of places in\nthe SGML files today where it is not. And for something like this,\nI doubt the gain is worth the loss of \"git blame\" tracking and\npossible back-patching hazards.\n\nI'm a compulsive neatnik when it comes to indentation of the\nC code, but much less so about the SGML docs. YMMV of course.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Aug 2020 08:16:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add section headings to index types doc" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: tested, passed\n\na) I'm wondering if we should apply one more change to this page. The line-by-line listing of operators occupies much space. What do you think about an inline list (in a separate line) of the operators?\r\n\r\nold source:\r\n <simplelist>\r\n<member><literal>&lt;</literal></member>\r\n<member><literal>&lt;=</literal></member>\r\n<member><literal>=</literal></member>\r\n<member><literal>&gt;=</literal></member>\r\n<member><literal>&gt;</literal></member>\r\n </simplelist>\r\n\r\nnew source:\r\n <synopsis>&lt; &nbsp; &lt;= &nbsp; = &nbsp; &gt;= &nbsp; &gt;</synopsis>\r\nIt looks nice in HTML as well as in PDF.\r\n\r\nb) I'm in favor of the indentation of all affected lines as it is done in the patch.\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Mon, 10 Aug 2020 12:52:17 +0000", "msg_from": "=?utf-8?q?J=C3=BCrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add section headings to index types doc" }, { "msg_contents": "On Mon, Aug 10, 2020 at 12:52:17PM +0000, Jürgen Purtz wrote:\n> The new status of this patch is: Waiting on Author\n\nThis has not been answered yet, so I have marked the patch as returned\nwith feedback.\n--\nMichael", "msg_date": "Wed, 30 Sep 2020 15:34:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add section headings to index types doc" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Mon, Aug 10, 2020 at 12:52:17PM +0000, Jürgen Purtz wrote:\n>> The new status of this patch is: Waiting on Author\n>\n> This has not been answered yet, so I have marked the patch as returned\n> with feedback.\n\nUpdated patch attached, wich reformats the operator lists as requested\nby Jürgen, and skips the reindentation as suggested by Tom.\n\nThe reindentation patch is attached separately, in case the committer\ndecides they want it properly indented after all.\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. 
- Calle Dybedahl", "msg_date": "Wed, 30 Sep 2020 12:25:03 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add section headings to index types doc" }, { "msg_contents": "On 30/09/2020 14:25, Dagfinn Ilmari Mannsåker wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> \n>> On Mon, Aug 10, 2020 at 12:52:17PM +0000, Jürgen Purtz wrote:\n>>> The new status of this patch is: Waiting on Author\n>>\n>> This has not been answered yet, so I have marked the patch as returned\n>> with feedback.\n> \n> Updated patch attached, wich reformats the operator lists as requested\n> by Jürgen, and skips the reindentation as suggested by Tom.\n\nI wonder if \"synopsis\" is the right markup for the operator lists. I'm \nnot too familiar with SGML, but the closest similar list I could find is \nthis in create_operator.sgml:\n\n> The operator name is a sequence of up to <symbol>NAMEDATALEN</symbol>-1\n> (63 by default) characters from the following list:\n> <literallayout>\n> + - * / &lt; &gt; = ~ ! @ # % ^ &amp; | ` ?\n> </literallayout>\n\nReading up on the meaning of \"literallayout\" at \nhttps://tdg.docbook.org/tdg/4.5/literallayout.html, though, it doesn't \nsound quite right either. Maybe \"<simplelist type=horiz'>\" ?\n\n- Heikki\n\n\n", "msg_date": "Wed, 30 Sep 2020 15:53:41 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add section headings to index types doc" }, { "msg_contents": "On 30.09.20 14:53, Heikki Linnakangas wrote:\n> On 30/09/2020 14:25, Dagfinn Ilmari Mannsåker wrote:\n>> Michael Paquier <michael@paquier.xyz> writes:\n>>\n>>> On Mon, Aug 10, 2020 at 12:52:17PM +0000, Jürgen Purtz wrote:\n>>>> The new status of this patch is: Waiting on Author\n>>>\n>>> This has not been answered yet, so I have marked the patch as returned\n>>> with feedback.\n>>\n>> Updated patch attached, wich reformats the operator lists as requested\n>> by Jürgen, and skips the reindentation as suggested by Tom.\n>\n> I wonder if \"synopsis\" is the right markup for the operator lists. I'm \n> not too familiar with SGML, but the closest similar list I could find \n> is this in create_operator.sgml:\n>\n>>    The operator name is a sequence of up to \n>> <symbol>NAMEDATALEN</symbol>-1\n>>    (63 by default) characters from the following list:\n>> <literallayout>\n>> + - * / &lt; &gt; = ~ ! @ # % ^ &amp; | ` ?\n>> </literallayout>\n>\n> Reading up on the meaning of \"literallayout\" at \n> https://tdg.docbook.org/tdg/4.5/literallayout.html, though, it doesn't \n> sound quite right either. Maybe \"<simplelist type=horiz'>\" ?\n>\n> - Heikki\n\n<literallyout> loses the aqua background color (in comparison to the \nexisting documentation).\n\n<simplelist type=\"horiz\" \ncolumns=\"5\"><member><literal>&lt;</literal></member> ... is very chatty: \nit needs the additional 'columns' attribute and the additional 'member' \nelement.\n\nTherefor I am in favor of the <synopsis> solution as given in the last \npatch of Dagfinn.\n\nPlaying around I found another solution, which also looks quite good. \nThe chapter uses operators within the text flow at different places. All \nof them are embedded in a <literal> element (inline). Using this \n<literal> element also for the index-specific operators, the reading of \nthe page gets easier and the rendering is consistent. But the drawback \nis that these operator are no longer accentuated. 
Because they \n'represents' the possible access methods per index-type, one can argue \nthat they should be shown in a special way, eg.: in a separate paragraph \nas in Dagfin's patch. (I suppose that this was the original intention of \nthe huge number of line-breaks.) It would look like the following, but I \ndon't recommend to use it:\n\n--\n\nJürgen Purtz", "msg_date": "Sat, 3 Oct 2020 06:59:39 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add section headings to index types doc" }, { "msg_contents": "On Wed, Sep 30, 2020 at 4:25 AM Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>\nwrote:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n>\n> > On Mon, Aug 10, 2020 at 12:52:17PM +0000, Jürgen Purtz wrote:\n> >> The new status of this patch is: Waiting on Author\n> >\n> > This has not been answered yet, so I have marked the patch as returned\n> > with feedback.\n>\n> Updated patch attached, wich reformats the operator lists as requested\n> by Jürgen,\n\n\nA couple of things:\n\nOne, I would place the equality operator for hash inside a standalone\nsynopsis just like all of the others.\nTwo, why is hash special in having its create index command provided here?\n(I notice this isn't the fault of this patch but it stands out when\nreviewing it)\n\nI would suggest rewording hash to look more like the others and add a link\nto the \"CREATE INDEX\" command from the chapter preamble.\n\nand skips the reindentation as suggested by Tom.\n>\n\nAgreed\nDavid J.\n\nOn Wed, Sep 30, 2020 at 4:25 AM Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:Michael Paquier <michael@paquier.xyz> writes:\n\n> On Mon, Aug 10, 2020 at 12:52:17PM +0000, Jürgen Purtz wrote:\n>> The new status of this patch is: Waiting on Author\n>\n> This has not been answered yet, so I have marked the patch as returned\n> with feedback.\n\nUpdated patch attached, wich reformats the operator lists as requested\nby Jürgen, A couple of things:One, I would place the equality operator for hash inside a standalone synopsis just like all of the others.Two, why is hash special in having its create index command provided here? (I notice this isn't the fault of this patch but it stands out when reviewing it)I would suggest rewording hash to look more like the others and add a link to the \"CREATE INDEX\" command from the chapter preamble.and skips the reindentation as suggested by Tom.AgreedDavid J.", "msg_date": "Wed, 21 Oct 2020 14:12:03 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add section headings to index types doc" }, { "msg_contents": "On 21.10.20 23:12, David G. Johnston wrote:\n> On Wed, Sep 30, 2020 at 4:25 AM Dagfinn Ilmari Mannsåker \n> <ilmari@ilmari.org <mailto:ilmari@ilmari.org>> wrote:\n>\n> Michael Paquier <michael@paquier.xyz <mailto:michael@paquier.xyz>>\n> writes:\n>\n> > On Mon, Aug 10, 2020 at 12:52:17PM +0000, Jürgen Purtz wrote:\n> >> The new status of this patch is: Waiting on Author\n> >\n> > This has not been answered yet, so I have marked the patch as\n> returned\n> > with feedback.\n>\n> Updated patch attached, wich reformats the operator lists as requested\n> by Jürgen, \n>\n>\n> A couple of things:\n>\n> One, I would place the equality operator for hash inside a standalone \n> synopsis just like all of the others.\nok\n> Two, why is hash special in having its create index command provided \n> here? 
(I notice this isn't the fault of this patch but it stands out \n> when reviewing it)\nyes, this looks odd.\n>\n> I would suggest rewording hash to look more like the others\nok\n> and add a link to the \"CREATE INDEX\" command from the chapter preamble.\nis the link necessary?\n>\n> and skips the reindentation as suggested by Tom.\n>\n>\n> Agreed\n> David J.\n\n--\n\nJ. Purtz\n\n\n\n\n\n\n\n\nOn 21.10.20 23:12, David G. Johnston\n wrote:\n\n\n\n\n\nOn Wed, Sep\n 30, 2020 at 4:25 AM Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>\n wrote:\n\n\n\nMichael Paquier <michael@paquier.xyz> writes:\n\n > On Mon, Aug 10, 2020 at 12:52:17PM +0000, Jürgen Purtz\n wrote:\n >> The new status of this patch is: Waiting on Author\n >\n > This has not been answered yet, so I have marked the\n patch as returned\n > with feedback.\n\n Updated patch attached, wich reformats the operator lists as\n requested\n by Jürgen, \n\n\n\nA couple of\n things:\n\n\nOne, I\n would place the equality operator for hash inside a\n standalone synopsis just like all of the others.\n\n\n\n\n ok\n\n\n\n\nTwo, why is\n hash special in having its create index command provided\n here? (I notice this isn't the fault of this patch but it\n stands out when reviewing it)\n\n\n\n\n yes, this looks odd.\n\n\n\n\n\n\nI would\n suggest rewording hash to look more like the others\n\n\n\n\n ok\n\n\n\n\n and add a\n link to the \"CREATE INDEX\" command from the chapter\n preamble.\n\n\n\n\n is the link necessary?\n\n\n\n\n\nand skips the\n reindentation as suggested by Tom.\n\n\n\nAgreed\nDavid J.\n\n\n\n--\nJ. Purtz", "msg_date": "Fri, 23 Oct 2020 12:18:50 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add section headings to index types doc" }, { "msg_contents": "On Fri, Oct 23, 2020 at 3:18 AM Jürgen Purtz <juergen@purtz.de> wrote:\n\n> and add a link to the \"CREATE INDEX\" command from the chapter preamble.\n>\n> is the link necessary?\n>\n\nI suppose it would make more sense to add it to the previous section - the\nintroduction page. I do think having a link (or more than one) to CREATE\nINDEX from the Indexes chapter is reader friendly. Having links to SQL\ncommands is never truly necessary - the reader knows a SQL command\nreference exists and the name of the command allows them to find the\ncorrect page.\n\nDavid J.\n\nOn Fri, Oct 23, 2020 at 3:18 AM Jürgen Purtz <juergen@purtz.de> wrote:\n\nand add a\n link to the \"CREATE INDEX\" command from the chapter\n preamble.\n\n\n\n\n is the link necessary?I suppose it would make more sense to add it to the previous section - the introduction page.  I do think having a link (or more than one) to CREATE INDEX from the Indexes chapter is reader friendly.  Having links to SQL commands is never truly necessary - the reader knows a SQL command reference exists and the name of the command allows them to find the correct page.David J.", "msg_date": "Fri, 23 Oct 2020 09:08:50 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add section headings to index types doc" }, { "msg_contents": "On 23.10.20 18:08, David G. Johnston wrote:\n> On Fri, Oct 23, 2020 at 3:18 AM Jürgen Purtz <juergen@purtz.de \n> <mailto:juergen@purtz.de>> wrote:\n>\n>> and add a link to the \"CREATE INDEX\" command from the chapter\n>> preamble.\n> is the link necessary?\n>\n>\n> I suppose it would make more sense to add it to the previous section - \n> the introduction page.  
I do think having a link (or more than one) to \n> CREATE INDEX from the Indexes chapter is reader friendly.  Having \n> links to SQL commands is never truly necessary - the reader knows a \n> SQL command reference exists and the name of the command allows them \n> to find the correct page.\n>\n> David J.\n>\nI'm afraid I haven't grasped everything of your intentions and \nsuggestions of your last two mails.\n\n- equal operator in standalone paragraph: ok, integrated.\n\n- shift \"create index ... using HASH\" to a different place: You suggest \nshifting the statement or a link to the previous (sub-)chapter \"11.1 \nIntroduction\"? But there is already a \"create index\" example. Please \nread my suggestion/modification in the first paragraph of the \"11.2 \nIndex Types\" page.\n\n- \"rewording hash\": I don't know what is missing here. But I have added \na few words about the nature of this index type.\n\nAttached are two patches: a) summary against master and b) standalone of \nmy current changes.\n\n--\n\nJ. Purtz", "msg_date": "Sun, 25 Oct 2020 10:40:07 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add section headings to index types doc" }, { "msg_contents": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de> writes:\n> Attached are two patches: a) summary against master and b) standalone of \n> my current changes.\n\nThis was a bit confused ... as submitted, the patch wanted to commit\na couple of patchfiles. I sorted it out I believe, and pushed with\na little additional fiddling of my own.\n\nI did not commit the reindentation of existing text --- I don't think\nit's worth adding \"git blame\" noise for.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Nov 2020 14:08:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add section headings to index types doc" } ]
[ { "msg_contents": "Hi,\n\nIn our testing framework, backed by pg_regress, there exists the ability to use special strings\nthat can be replaced by environment based ones. Such an example is '@testtablespace@'. The\nfunction used for this replacement is replace_string which inline replaces these occurrences in\noriginal line. It is documented that the original line buffer should be large enough to accommodate.\n\nHowever, it is rather possible and easy for subtle errors to occur, especially if there are multiple\noccurrences to be replaced in long enough lines. Please find two distinct versions of a possible\nsolution. One, which is preferred, is using StringInfo though it requires for stringinfo.h to be included\nin pg_regress.c. The other patch is more basic and avoids including stringinfo.h. As a reminder\nstringinfo became available in the frontend in commit (26aaf97b683d)\n\nBecause the original replace_string() is exposed to other users, it is currently left intact.\nAlso if required, an error can be raised in the original function, in cases that the string is not\nlong enough to accommodate the replacements.\n\nWorthwhile to mention that currently there are no such issues present in the test suits. It should\nnot hurt to do a bit better though.\n\n//Asim and Georgios", "msg_date": "Fri, 31 Jul 2020 12:25:02 +0000", "msg_from": "Georgios <gkokolatos@protonmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] - Provide robust alternatives for replace_string" }, { "msg_contents": "What happens if a replacement string happens to be split in the middle\nby the fgets buffering? I think it'll fail to be replaced. This\napplies to both versions.\n\nIn the stringinfo version it seemed to me that using pnstrdup is\npossible to avoid copying trailing bytes.\n\nIf you're asking for opinion, mine is that StringInfo looks to be the\nbetter approach, and also you don't need to keep API compatibility.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 31 Jul 2020 21:52:02 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] - Provide robust alternatives for replace_string" }, { "msg_contents": "Thank you Alvaro for reviewing the patch!\r\n\r\n> On 01-Aug-2020, at 7:22 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\r\n> \r\n> What happens if a replacement string happens to be split in the middle\r\n> by the fgets buffering? I think it'll fail to be replaced. This\r\n> applies to both versions.\r\n\r\nCan a string to be replaced be split across multiple lines in the source file? If I understand correctly, fgets reads one line from input file at a time. If I do not, in the worst case, we will get an un-replaced string in the output, such as “@abs_dir@“ and it should be easily detected by a failing diff.\r\n\r\n> In the stringinfo version it seemed to me that using pnstrdup is\r\n> possible to avoid copying trailing bytes.\r\n> \r\n\r\nThat’s a good suggestion. 
Using pnstrdup would look like this:\r\n\r\n--- a/src/test/regress/pg_regress.c\r\n+++ b/src/test/regress/pg_regress.c\r\n@@ -465,7 +465,7 @@ replace_stringInfo(StringInfo string, const char *replace, const char *replaceme\r\n\r\n while ((ptr = strstr(string->data, replace)) != NULL)\r\n {\r\n- char *dup = pg_strdup(string->data);\r\n+ char *dup = pnstrdup(string->data, string->maxlen);\r\n size_t pos = ptr - string->data;\r\n\r\n string->len = pos;\r\n\r\n \r\n> If you're asking for opinion, mine is that StringInfo looks to be the\r\n> better approach, and also you don't need to keep API compatibility.\r\n> \r\n\r\nThank you. We also prefer StringInfo solution.\r\n\r\nAsim", "msg_date": "Mon, 3 Aug 2020 09:34:08 +0000", "msg_from": "Asim Praveen <pasim@vmware.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] - Provide robust alternatives for replace_string" }, { "msg_contents": "On 2020-Aug-03, Asim Praveen wrote:\n\n> Thank you Alvaro for reviewing the patch!\n> \n> > On 01-Aug-2020, at 7:22 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > \n> > What happens if a replacement string happens to be split in the middle\n> > by the fgets buffering? I think it'll fail to be replaced. This\n> > applies to both versions.\n> \n> Can a string to be replaced be split across multiple lines in the source file? If I understand correctly, fgets reads one line from input file at a time. If I do not, in the worst case, we will get an un-replaced string in the output, such as “@abs_dir@“ and it should be easily detected by a failing diff.\n\nI meant what if the line is longer than 1023 chars and the replace\nmarker starts at byte 1021, for example. Then the first fgets would get\n\"@ab\" and the second fgets would get \"s_dir@\" and none would see it as\nreplaceable.\n\n> > In the stringinfo version it seemed to me that using pnstrdup is\n> > possible to avoid copying trailing bytes.\n> \n> That’s a good suggestion. Using pnstrdup would look like this:\n> \n> --- a/src/test/regress/pg_regress.c\n> +++ b/src/test/regress/pg_regress.c\n> @@ -465,7 +465,7 @@ replace_stringInfo(StringInfo string, const char *replace, const char *replaceme\n> \n> while ((ptr = strstr(string->data, replace)) != NULL)\n> {\n> - char *dup = pg_strdup(string->data);\n> + char *dup = pnstrdup(string->data, string->maxlen);\n\nI was thinking pnstrdup(string->data, ptr - string->data) to avoid\ncopying the chars beyond ptr.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 3 Aug 2020 11:06:40 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] - Provide robust alternatives for replace_string" }, { "msg_contents": "\r\n\r\n> On 03-Aug-2020, at 8:36 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\r\n> \r\n> On 2020-Aug-03, Asim Praveen wrote:\r\n> \r\n>> Thank you Alvaro for reviewing the patch!\r\n>> \r\n>>> On 01-Aug-2020, at 7:22 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\r\n>>> \r\n>>> What happens if a replacement string happens to be split in the middle\r\n>>> by the fgets buffering? I think it'll fail to be replaced. This\r\n>>> applies to both versions.\r\n>> \r\n>> Can a string to be replaced be split across multiple lines in the source file? If I understand correctly, fgets reads one line from input file at a time. 
If I do not, in the worst case, we will get an un-replaced string in the output, such as “@abs_dir@“ and it should be easily detected by a failing diff.\r\n> \r\n> I meant what if the line is longer than 1023 chars and the replace\r\n> marker starts at byte 1021, for example. Then the first fgets would get\r\n> \"@ab\" and the second fgets would get \"s_dir@\" and none would see it as\r\n> replaceable.\r\n\r\nThanks for the patient explanation, I had missed the obvious. To keep the code simple, I’m in favour of relying on the diff of a failing test to catch the split-replacement string problem.\r\n\r\n> \r\n>>> In the stringinfo version it seemed to me that using pnstrdup is\r\n>>> possible to avoid copying trailing bytes.\r\n>> \r\n>> That’s a good suggestion. Using pnstrdup would look like this:\r\n>> \r\n>> --- a/src/test/regress/pg_regress.c\r\n>> +++ b/src/test/regress/pg_regress.c\r\n>> @@ -465,7 +465,7 @@ replace_stringInfo(StringInfo string, const char *replace, const char *replaceme\r\n>> \r\n>> while ((ptr = strstr(string->data, replace)) != NULL)\r\n>> {\r\n>> - char *dup = pg_strdup(string->data);\r\n>> + char *dup = pnstrdup(string->data, string->maxlen);\r\n> \r\n> I was thinking pnstrdup(string->data, ptr - string->data) to avoid\r\n> copying the chars beyond ptr.\r\n> \r\n\r\nIn fact, what we need in the dup are chars beyond ptr. Copying of characters prefixing the string to be replaced can be avoided, like so:\r\n\r\n--- a/src/test/regress/pg_regress.c\r\n+++ b/src/test/regress/pg_regress.c\r\n@@ -465,12 +465,12 @@ replace_stringInfo(StringInfo string, const char *replace, const char *replaceme\r\n\r\n while ((ptr = strstr(string->data, replace)) != NULL)\r\n {\r\n- char *dup = pg_strdup(string->data);\r\n+ char *suffix = pnstrdup(ptr + strlen(replace), string->maxlen);\r\n size_t pos = ptr - string->data;\r\n\r\n string->len = pos;\r\n appendStringInfoString(string, replacement);\r\n- appendStringInfoString(string, dup + pos + strlen(replace));\r\n+ appendStringInfoString(string, suffix);\r\n\r\n- free(dup);\r\n+ free(suffix);\r\n }\r\n}\r\n\r\n\r\nAsim", "msg_date": "Tue, 4 Aug 2020 09:22:40 +0000", "msg_from": "Asim Praveen <pasim@vmware.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] - Provide robust alternatives for replace_string" }, { "msg_contents": "> On 03-Aug-2020, at 8:36 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\r\n> \r\n> On 2020-Aug-03, Asim Praveen wrote:\r\n> \r\n>> Thank you Alvaro for reviewing the patch!\r\n>> \r\n>>> On 01-Aug-2020, at 7:22 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\r\n>>> \r\n>>> What happens if a replacement string happens to be split in the middle\r\n>>> by the fgets buffering? I think it'll fail to be replaced. This\r\n>>> applies to both versions.\r\n>> \r\n>> Can a string to be replaced be split across multiple lines in the source file? If I understand correctly, fgets reads one line from input file at a time. If I do not, in the worst case, we will get an un-replaced string in the output, such as “@abs_dir@“ and it should be easily detected by a failing diff.\r\n> \r\n> I meant what if the line is longer than 1023 chars and the replace\r\n> marker starts at byte 1021, for example. Then the first fgets would get\r\n> \"@ab\" and the second fgets would get \"s_dir@\" and none would see it as\r\n> replaceable.\r\n> \r\n\r\n\r\nPlease find attached a StringInfo based solution to this problem. 
It uses fgetln instead of fgets such that a line is read in full, without ever splitting it.\r\n\r\nAsim", "msg_date": "Wed, 5 Aug 2020 07:08:41 +0000", "msg_from": "Asim Praveen <pasim@vmware.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] - Provide robust alternatives for replace_string" }, { "msg_contents": "On 2020-Aug-05, Asim Praveen wrote:\n\n> Please find attached a StringInfo based solution to this problem. It\n> uses fgetln instead of fgets such that a line is read in full, without\n> ever splitting it.\n\nnever heard of fgetln, my system doesn't have a manpage for it, and we\ndon't use it anywhere AFAICS. Are you planning to add something to\nsrc/common for it?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 5 Aug 2020 09:31:10 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] - Provide robust alternatives for replace_string" }, { "msg_contents": "> On 05-Aug-2020, at 7:01 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Aug-05, Asim Praveen wrote:\n> \n>> Please find attached a StringInfo based solution to this problem. It\n>> uses fgetln instead of fgets such that a line is read in full, without\n>> ever splitting it.\n> \n> never heard of fgetln, my system doesn't have a manpage for it, and we\n> don't use it anywhere AFAICS. Are you planning to add something to\n> src/common for it?\n> \n\nIndeed! I noticed fgetln on the man page of fgets and used it without checking. And this happened on a MacOS system.\n\nPlease find a revised version that uses fgetc instead.\n\nAsim", "msg_date": "Fri, 7 Aug 2020 06:02:58 +0000", "msg_from": "Asim Praveen <pasim@vmware.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] - Provide robust alternatives for replace_string" }, { "msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Friday, 7 August 2020 09:02, Asim Praveen <pasim@vmware.com> wrote:\n\n> > On 05-Aug-2020, at 7:01 PM, Alvaro Herrera alvherre@2ndquadrant.com wrote:\n> > On 2020-Aug-05, Asim Praveen wrote:\n> >\n> > > Please find attached a StringInfo based solution to this problem. It\n> > > uses fgetln instead of fgets such that a line is read in full, without\n> > > ever splitting it.\n> >\n> > never heard of fgetln, my system doesn't have a manpage for it, and we\n> > don't use it anywhere AFAICS. Are you planning to add something to\n> > src/common for it?\n>\n> Indeed! I noticed fgetln on the man page of fgets and used it without checking. And this happened on a MacOS system.\n>\n> Please find a revised version that uses fgetc instead.\n\nAlthough not an issue in the current branch, fgetc might become a bit slow\nin large files. 
Please find v3 which simply continues reading the line if\nfgets fills the buffer and there is still data to read.\n\nAlso this version, implements Alvaro's suggestion to break API compatibility.\n\nTo that extent, ecpg regress has been slightly modified to use the new version\nof replace_string() where needed, or remove it all together where possible.\n\n//Georgios\n\n>\n> Asim", "msg_date": "Wed, 19 Aug 2020 08:07:16 +0000", "msg_from": "Georgios <gkokolatos@protonmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] - Provide robust alternatives for replace_string" }, { "msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Wednesday, 19 August 2020 11:07, Georgios <gkokolatos@protonmail.com> wrote:\n\n>\n>\n> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> On Friday, 7 August 2020 09:02, Asim Praveen pasim@vmware.com wrote:\n>\n> > > On 05-Aug-2020, at 7:01 PM, Alvaro Herrera alvherre@2ndquadrant.com wrote:\n> > > On 2020-Aug-05, Asim Praveen wrote:\n> > >\n> > > > Please find attached a StringInfo based solution to this problem. It\n> > > > uses fgetln instead of fgets such that a line is read in full, without\n> > > > ever splitting it.\n> > >\n> > > never heard of fgetln, my system doesn't have a manpage for it, and we\n> > > don't use it anywhere AFAICS. Are you planning to add something to\n> > > src/common for it?\n> >\n> > Indeed! I noticed fgetln on the man page of fgets and used it without checking. And this happened on a MacOS system.\n> > Please find a revised version that uses fgetc instead.\n>\n> Although not an issue in the current branch, fgetc might become a bit slow\n> in large files. Please find v3 which simply continues reading the line if\n> fgets fills the buffer and there is still data to read.\n>\n> Also this version, implements Alvaro's suggestion to break API compatibility.\n>\n> To that extent, ecpg regress has been slightly modified to use the new version\n> of replace_string() where needed, or remove it all together where possible.\n\nI noticed that the cfbot [1] was unhappy with the raw use of __attribute__ on windows builds.\n\nIn retrospect it is rather obvious it would complain. Please find v4 attached.\n\n//Georgios\n\n>\n> //Georgios\n>\n> > Asim\n\n[1] https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.105985", "msg_date": "Mon, 31 Aug 2020 09:04:10 +0000", "msg_from": "Georgios <gkokolatos@protonmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] - Provide robust alternatives for replace_string" }, { "msg_contents": "Note that starting with commit 67a472d71c98 you can use pg_get_line and\nnot worry about the hard part of this anymore :-)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 3 Sep 2020 22:15:03 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] - Provide robust alternatives for replace_string" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Note that starting with commit 67a472d71c98 you can use pg_get_line and\n> not worry about the hard part of this anymore :-)\n\npg_get_line as it stands isn't quite suitable, because it just hands\nback a \"char *\" string, not a StringInfo that you can do further\nprocessing on.\n\nHowever, I'd already grown a bit dissatisfied with exposing only that\nAPI, because the code 8f8154a50 added to hba.c couldn't use pg_get_line\neither, and had to duplicate the logic. 
So the attached revised patch\nsplits pg_get_line into two pieces, one with the existing char * API\nand one that appends to a caller-provided StringInfo. (hba.c needs the\nappend-rather-than-reset behavior, and it might be useful elsewhere\ntoo.)\n\nWhile here, I couldn't resist getting rid of ecpg_filter()'s hard-wired\nline length limit too.\n\nThis version looks committable to me, though perhaps someone has\nfurther thoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 05 Sep 2020 18:42:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] - Provide robust alternatives for replace_string" }, { "msg_contents": "I wrote:\n> This version looks committable to me, though perhaps someone has\n> further thoughts?\n\nI looked through this again and pushed it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Sep 2020 14:15:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] - Provide robust alternatives for replace_string" } ]
[ { "msg_contents": "Postgres provides serial and bigserial column types for which it \nimplicitly creates sequence.\nAs far as this mechanism is somehow hidden from user, it may be \nconfusing that table\ncreated with CREATE TABLE LIKE has no associated sequence.\n\nBut what is worse, even if experienced user knows that serial types are \nimplemented in Postgres by specifying\nnextval(seq) default value for this column and default values are copied \nby CREATE TABLE LIKE only if is it explicitly requested (including all),\nthen two tables will share the same sequence:\n\ncreate table t1(x serial primary key, val int);\ncreate table t2(like t1 including all);\n\n\npostgres=# \\d+ t1;\n                                                Table \"public.t1\"\n  Column |  Type   | Collation | Nullable | Default            | Storage \n| Stats target | Description\n--------+---------+-----------+----------+-------------------------------+---------+--------------+-------------\n  x      | integer |           | not null | \nnextval('t1_x_seq'::regclass) | plain   |              |\n  val    | integer |           | |                               | \nplain   |              |\nIndexes:\n     \"t1_pkey\" PRIMARY KEY, btree (x)\nAccess method: heap\n\npostgres=# \\d+ t2;\n                                                Table \"public.t2\"\n  Column |  Type   | Collation | Nullable | Default            | Storage \n| Stats target | Description\n--------+---------+-----------+----------+-------------------------------+---------+--------------+-------------\n  x      | integer |           | not null | \nnextval('t1_x_seq'::regclass) | plain   |              |\n  val    | integer |           | |                               | \nplain   |              |\nIndexes:\n     \"t2_pkey\" PRIMARY KEY, btree (x)\nAccess method: heap\n\n\nPlease notice that index is correctly replaced, but sequence - not.\nI consider such behavior more like bug than a feature.\nAnd it can be fixed using relatively small patch.\n\nThoughts?", "msg_date": "Sat, 1 Aug 2020 01:06:08 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Confusing behavior of create table like" }, { "msg_contents": "On 2020-08-01 00:06, Konstantin Knizhnik wrote:\n> Postgres provides serial and bigserial column types for which it\n> implicitly creates sequence.\n> As far as this mechanism is somehow hidden from user, it may be\n> confusing that table\n> created with CREATE TABLE LIKE has no associated sequence.\n\nThat's why identity columns were added. You shouldn't use serial \ncolumns anymore, especially if you are concerned about behaviors like this.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 3 Aug 2020 10:00:27 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Confusing behavior of create table like" }, { "msg_contents": "\n\nOn 03.08.2020 11:00, Peter Eisentraut wrote:\n> On 2020-08-01 00:06, Konstantin Knizhnik wrote:\n>> Postgres provides serial and bigserial column types for which it\n>> implicitly creates sequence.\n>> As far as this mechanism is somehow hidden from user, it may be\n>> confusing that table\n>> created with CREATE TABLE LIKE has no associated sequence.\n>\n> That's why identity columns were added.  
You shouldn't use serial \n> columns anymore, especially if you are concerned about behaviors like \n> this.\n>\nI can completely agree with this position.\nThere are several things in Postgres which are conceptually similar, \nshare a lot of code but... following different rules.\nUsually it happens when some new notion is introduced, fully or partly \nsubstitute old notion.\nInheritance and declarative partitioning is one of such examples.\nAlthough them are used to solve the same goal, there are many cases when \nsome optimization works for partitioned table but not for inheritance.\n\nMay be generated and identity columns are good things. I have nothing \nagainst them.\nBut what preventing us from providing the similar behavior for \nserial/bigseries types?\n\n\n\n", "msg_date": "Mon, 3 Aug 2020 15:58:55 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Confusing behavior of create table like" }, { "msg_contents": "On Mon, Aug 3, 2020 at 8:59 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> May be generated and identity columns are good things. I have nothing\n> against them.\n> But what preventing us from providing the similar behavior for\n> serial/bigseries types?\n\nBackward compatibility seems like one good argument.\n\nIt kind of sucks that we end up with cases where new notions are\nintroduced to patch up the inadequacies of earlier ideas, but it's\nalso inevitable. If, after 25+ years of development, we didn't have\ncases where somebody had come up with a new plan that was better than\nthe older plan, that would be pretty scary. We have to remember,\nthough, that there's a giant user community around PostgreSQL at this\npoint, and changing things like this can inconvenience large numbers\nof those users. Sometimes that's worth it, but I find it pretty\ndubious in a case like this. There's every possibility that there are\npeople out there who rely on the current behavior, and whose stuff\nwould break if it were changed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 3 Aug 2020 11:33:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Confusing behavior of create table like" }, { "msg_contents": "On 2020-08-03 14:58, Konstantin Knizhnik wrote:\n> May be generated and identity columns are good things. I have nothing\n> against them.\n> But what preventing us from providing the similar behavior for\n> serial/bigseries types?\n\nIn my mind, serial/bigserial is deprecated and it's not worth spending \neffort on patching them up.\n\nOne thing we could do is change serial/bigserial to expand to identity \ncolumn definitions instead of the current behavior.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 3 Aug 2020 18:35:15 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Confusing behavior of create table like" }, { "msg_contents": "On Mon, Aug 3, 2020 at 12:35 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-08-03 14:58, Konstantin Knizhnik wrote:\n> > May be generated and identity columns are good things. 
I have nothing\n> > against them.\n> > But what preventing us from providing the similar behavior for\n> > serial/bigseries types?\n>\n> In my mind, serial/bigserial is deprecated and it's not worth spending\n> effort on patching them up.\n>\n> One thing we could do is change serial/bigserial to expand to identity\n> column definitions instead of the current behavior.\n\nI'm not really convinced that's a good idea. There's probably a lot of\npeople (me included) who are used to the way serial and bigserial work\nand wouldn't necessarily be happy about a change. Plus, aren't the\ngenerated columns still depending on an underlying sequence anyway?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 4 Aug 2020 12:53:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Confusing behavior of create table like" }, { "msg_contents": "\n\nOn 04.08.2020 19:53, Robert Haas wrote:\n> On Mon, Aug 3, 2020 at 12:35 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> On 2020-08-03 14:58, Konstantin Knizhnik wrote:\n>>> May be generated and identity columns are good things. I have nothing\n>>> against them.\n>>> But what preventing us from providing the similar behavior for\n>>> serial/bigseries types?\n>> In my mind, serial/bigserial is deprecated and it's not worth spending\n>> effort on patching them up.\n>>\n>> One thing we could do is change serial/bigserial to expand to identity\n>> column definitions instead of the current behavior.\n> I'm not really convinced that's a good idea. There's probably a lot of\n> people (me included) who are used to the way serial and bigserial work\n> and wouldn't necessarily be happy about a change. Plus, aren't the\n> generated columns still depending on an underlying sequence anyway?\n>\nYes, generated columns are also using implicitly generated sequences.\nSo them are  very similar with SERIAL/BIGSERIAL columns. This actually \nmake we wonder why we can not handle them in the same way in\nCREATE TABLE LIKE.\nThe only difference is that it is not possible to explicitly specify \nsequence for generated column.\nAnd it certainly makes there  handling in CREATE TABLE LIKE less \ncontradictory.\n\nI think that many people are using serial/bigserial types in their \ndatabase schemas and will continue to use them.\nI do not expect that any of them will be upset of behavior of handling \nthis columns in CREATE TABLE LIKE ... INCLUDING ALL will be changed.\nMostly because very few people are using this construction. 
But if \nsomeone wants to use it, then most likely he will be confused\n(I have not imagine this problem myself - it was motivated by question \nin one of Postgres forums where current behavior was interpreted as bug).\nSo I do not think that \"backward compatibility\" is actually good in this \ncase and that somebody can suffer from changing it.\n\nI do not insist - as I already told, I do not think that much people are \nusing CREATE TABLE LIKE, so it should not be a big problem.\nBut if there is some will to change current behavior, then I can send \nmore correct version of the patch and may be submit it to commitfest.\n\n\n\n", "msg_date": "Tue, 4 Aug 2020 20:36:13 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Confusing behavior of create table like" }, { "msg_contents": "On 2020-08-04 19:36, Konstantin Knizhnik wrote:\n> Yes, generated columns are also using implicitly generated sequences.\n> So them are  very similar with SERIAL/BIGSERIAL columns. This actually\n> make we wonder why we can not handle them in the same way in\n> CREATE TABLE LIKE.\n\nThe current specification of serial is a parse-time expansion of integer \ncolumn, sequence, and column default. The behavior of column defaults \nin CREATE TABLE LIKE does not currently include rewriting the default \nexpression or creating additional schema objects. If you want to \nintroduce these concepts, it should be done in a general way, not just \nhard-coded for a particular case.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 4 Aug 2020 23:50:38 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Confusing behavior of create table like" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-08-04 19:36, Konstantin Knizhnik wrote:\n>> Yes, generated columns are also using implicitly generated sequences.\n>> So them are  very similar with SERIAL/BIGSERIAL columns. This actually\n>> make we wonder why we can not handle them in the same way in\n>> CREATE TABLE LIKE.\n\n> The current specification of serial is a parse-time expansion of integer \n> column, sequence, and column default.\n\nYeah; and note it's actually defined that way in the docs.\n\nI'd certainly concede that serial is a legacy feature now that we have\nidentity columns. But, by the same token, its value is in backwards\ncompatibility with old behaviors. Therefore, reimplementing it in a\nway that isn't 100% backwards compatible seems like entirely the\nwrong thing to do. On similar grounds, I'd be pretty suspicious of\nchanging LIKE's behaviors around the case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Aug 2020 18:27:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Confusing behavior of create table like" } ]
[ { "msg_contents": "Hi,\n\nI tried to implement a static background(bg) worker without shared\nmemory access (BGWORKER_SHMEM_ACCESS), it worked fine on Linux machine\nwhere EXEC_BACKEND is not defined(thanks to the fork() implementation\nwhich does great job to get the global state from the\npostmaster(parent) to bg worker(child)).\n\nHowever, the problem arised, when I switched to EXEC_BACKEND mode, it\nseems it doesn't. I digged a bit and the following is my analysis: for\nEXEC_BACKEND cases, (say on Windows platforms where fork() doesn't\nexist) the way postmaster creates a background worker is entirely\ndifferent. It is done through SubPostmasterMain and the global state\nfrom the postmaster is shared with the background worker via shared\nmemory. MyLatch variable also gets created in shared mode. And having\nno shared memory access for the worker for EXEC_BACKEND cases(in\nStartBackgroundWorker, the shared memory segments get detached), the\nworker fails to receive all the global state from the postmaster.\nLooks like the background worker needs to have the\nBGWORKER_SHMEM_ACCESS flag while registering for EXEC_BACKEND cases.\n\nPlease feel free to correct me if I miss anything here.\n\nIf the above analysis looks fine, then please find a patch that adds\nsome info in bgworker.sgml and bgworker.h.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 1 Aug 2020 08:43:32 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Can a background worker exist without shared memory access for\n EXEC_BACKEND cases?" }, { "msg_contents": "On Fri, Jul 31, 2020 at 11:13 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> memory. MyLatch variable also gets created in shared mode. And having\n> no shared memory access for the worker for EXEC_BACKEND cases(in\n> StartBackgroundWorker, the shared memory segments get detached), the\n> worker fails to receive all the global state from the postmaster.\n\nWhat exactly do you mean by \"all the global state\"?\n\nIt's certainly true that if you declare some random static variable\nand initialize it in the postmaster, and you don't take any special\nprecautions to propagate that into workers, then on an EXEC_BACKEND\nbuild, it won't be set in the workers. That's why, for example, most\nof the *ShmemInit() functions are written like this:\n\n TwoPhaseState = ShmemInitStruct(\"Prepared Transaction Table\",\n\n TwoPhaseShmemSize(),\n &found);\n if (!IsUnderPostmaster)\n...initialize the data structure...\n else\n Assert(found);\n\nThe assignment to TwoPhaseState is unconditional, because in an\nEXEC_BACKEND build that's going to be done in every process, and\notherwise the variable won't be set. But the initialization of the\nshared data structure happens conditionally, because that needs to be\ndone only once.\n\nSee also the BackendParameters stuff, which arranges to pass down a\nbunch of things to exec'd backends.\n\nI am not necessarily opposed to trying to clarify the documentation\nand/or comments here, but \"global state\" is a fuzzy term that doesn't\nreally mean anything to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 3 Aug 2020 11:49:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can a background worker exist without shared memory access for\n EXEC_BACKEND cases?" 
}, { "msg_contents": "Thank you Robert for the comments.\n\nOn Mon, Aug 3, 2020 at 9:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jul 31, 2020 at 11:13 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > memory. MyLatch variable also gets created in shared mode. And having\n> > no shared memory access for the worker for EXEC_BACKEND cases(in\n> > StartBackgroundWorker, the shared memory segments get detached), the\n> > worker fails to receive all the global state from the postmaster.\n>\n> What exactly do you mean by \"all the global state\"?\n>\n\nMy intention was exactly to refer to the variables specified in\nBackendParameters struct.\n\n>\n> It's certainly true that if you declare some random static variable\n> and initialize it in the postmaster, and you don't take any special\n> precautions to propagate that into workers, then on an EXEC_BACKEND\n> build, it won't be set in the workers. That's why, for example, most\n> of the *ShmemInit() functions are written like this:\n>\n> TwoPhaseState = ShmemInitStruct(\"Prepared Transaction Table\",\n>\n> TwoPhaseShmemSize(),\n> &found);\n> if (!IsUnderPostmaster)\n> ...initialize the data structure...\n> else\n> Assert(found);\n>\n> The assignment to TwoPhaseState is unconditional, because in an\n> EXEC_BACKEND build that's going to be done in every process, and\n> otherwise the variable won't be set. But the initialization of the\n> shared data structure happens conditionally, because that needs to be\n> done only once.\n>\n> See also the BackendParameters stuff, which arranges to pass down a\n> bunch of things to exec'd backends.\n>\n\nI could get these points earlier in my initial analysis. In fact, I\ncould figure out the flow on Windows, how these parameters are shared\nusing a shared file(CreateFileMapping(), MapViewOfFile()), and the\nshared file name being passed as an argv[2] to the child process, and\nthe way child process uses this file name to read the backend\nparameters in read_backend_variables().\n\n>\n> I am not necessarily opposed to trying to clarify the documentation\n> and/or comments here, but \"global state\" is a fuzzy term that doesn't\n> really mean anything to me.\n>\n\nHow about having \"backend parameters from the postmaster.....\" as is\nbeing referred to in the internal_forkexec() function comments? I\nrephrased the comments adding \"backend parameters..\" and removing\n\"global state\". Please find the v2 patch attached.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 4 Aug 2020 16:57:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can a background worker exist without shared memory access for\n EXEC_BACKEND cases?" }, { "msg_contents": "On Tue, Aug 4, 2020 at 7:27 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I could get these points earlier in my initial analysis. In fact, I\n> could figure out the flow on Windows, how these parameters are shared\n> using a shared file(CreateFileMapping(), MapViewOfFile()), and the\n> shared file name being passed as an argv[2] to the child process, and\n> the way child process uses this file name to read the backend\n> parameters in read_backend_variables().\n\nDoesn't that happen even if the background worker isn't declared to\nuse BGWORKER_SHMEM_ACCESS? 
See StartBackgroundWorker(): IIUC, we start\nwith shared memory access, then afterwards detach.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 4 Aug 2020 12:50:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can a background worker exist without shared memory access for\n EXEC_BACKEND cases?" }, { "msg_contents": "On Tue, Aug 4, 2020 at 10:20 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Aug 4, 2020 at 7:27 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > I could get these points earlier in my initial analysis. In fact, I\n> > could figure out the flow on Windows, how these parameters are shared\n> > using a shared file(CreateFileMapping(), MapViewOfFile()), and the\n> > shared file name being passed as an argv[2] to the child process, and\n> > the way child process uses this file name to read the backend\n> > parameters in read_backend_variables().\n>\n> Doesn't that happen even if the background worker isn't declared to\n> use BGWORKER_SHMEM_ACCESS? See StartBackgroundWorker(): IIUC, we start\n> with shared memory access, then afterwards detach.\n>\n\nYes, the bg worker starts with shared memory access even with no\nBGWORKER_SHMEM_ACCESS and later it gets detached in\nStartBackgroundWorker() with PGSharedMemoryDetach().\n\nif ((worker->bgw_flags & BGWORKER_SHMEM_ACCESS) == 0)\n{\n dsm_detach_all();\n PGSharedMemoryDetach();\n }\n\nIn EXEC_BACKEND cases, right after PGSharedMemoryDetach(), the bg\nworker will no longer be able to access the backend parameters, see\nbelow(I tried this on my Ubuntu machine with a bgworker with no\nBGWORKER_SHMEM_ACCESS flag and defined EXEC_BACKEND macro in\npg_config_manual.h) :\n\n(gdb) p *MyLatch\nCannot access memory at address 0x7fd60424a6b4\n(gdb) p *ShmemVariableCache\nCannot access memory at address 0x7fd58427bf80\n(gdb) p ProcStructLock\n$10 = (slock_t *) 0x7fd60429bd00 <error: Cannot access memory at\naddress 0x7fd60429bd00>\n(gdb) p *AuxiliaryProcs\nCannot access memory at address 0x7fd60424cc60\n(gdb) p *ProcGlobal\nCannot access memory at address 0x7fd604232880\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Aug 2020 16:54:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can a background worker exist without shared memory access for\n EXEC_BACKEND cases?" }, { "msg_contents": "On Wed, Aug 5, 2020 at 7:24 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> In EXEC_BACKEND cases, right after PGSharedMemoryDetach(), the bg\n> worker will no longer be able to access the backend parameters, see\n> below(I tried this on my Ubuntu machine with a bgworker with no\n> BGWORKER_SHMEM_ACCESS flag and defined EXEC_BACKEND macro in\n> pg_config_manual.h) :\n>\n> (gdb) p *MyLatch\n> Cannot access memory at address 0x7fd60424a6b4\n> (gdb) p *ShmemVariableCache\n> Cannot access memory at address 0x7fd58427bf80\n> (gdb) p ProcStructLock\n> $10 = (slock_t *) 0x7fd60429bd00 <error: Cannot access memory at\n> address 0x7fd60429bd00>\n> (gdb) p *AuxiliaryProcs\n> Cannot access memory at address 0x7fd60424cc60\n> (gdb) p *ProcGlobal\n> Cannot access memory at address 0x7fd604232880\n\nWell all of those are pointers into the main shared memory segment,\nwhich is expected to be inaccessible after it is detached. 
Hopefully\nnobody should be surprised that if you don't specify\nBGWORKER_SHMEM_ACCESS, you can't access data stored in shared memory.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 5 Aug 2020 07:45:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can a background worker exist without shared memory access for\n EXEC_BACKEND cases?" }, { "msg_contents": "On Wed, Aug 5, 2020 at 5:16 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Aug 5, 2020 at 7:24 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > In EXEC_BACKEND cases, right after PGSharedMemoryDetach(), the bg\n> > worker will no longer be able to access the backend parameters, see\n> > below(I tried this on my Ubuntu machine with a bgworker with no\n> > BGWORKER_SHMEM_ACCESS flag and defined EXEC_BACKEND macro in\n> > pg_config_manual.h) :\n> >\n> > (gdb) p *MyLatch\n> > Cannot access memory at address 0x7fd60424a6b4\n> > (gdb) p *ShmemVariableCache\n> > Cannot access memory at address 0x7fd58427bf80\n> > (gdb) p ProcStructLock\n> > $10 = (slock_t *) 0x7fd60429bd00 <error: Cannot access memory at\n> > address 0x7fd60429bd00>\n> > (gdb) p *AuxiliaryProcs\n> > Cannot access memory at address 0x7fd60424cc60\n> > (gdb) p *ProcGlobal\n> > Cannot access memory at address 0x7fd604232880\n>\n> Well all of those are pointers into the main shared memory segment,\n> which is expected to be inaccessible after it is detached. Hopefully\n> nobody should be surprised that if you don't specify\n> BGWORKER_SHMEM_ACCESS, you can't access data stored in shared memory.\n>\n\nRight.\n\nWill the proposed patch(v2) having some info in bgworker.sgml and\nbgworker.h be ever useful to the users in some way?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Aug 2020 18:32:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can a background worker exist without shared memory access for\n EXEC_BACKEND cases?" }, { "msg_contents": "On Wed, Aug 5, 2020 at 9:02 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Will the proposed patch(v2) having some info in bgworker.sgml and\n> bgworker.h be ever useful to the users in some way?\n\nWell, it says things that aren't true, so, no, it's not useful. Your\npatch claims that \"the worker fails to receive the backend parameters\nfrom the postmaster,\" but that's not the case. SubPostmasterMain()\nfirst calls read_backend_variables() which calls\nrestore_backend_variables(); then later it calls\nStartBackgroundWorker() which does PGSharedMemoryDetach(). So the\nvalues of the backend variables *are* available in the worker\nprocesses. Your debugger output also shows this: if\nrestore_backend_variables() weren't running in the child processes,\nthose variables would all be NULL, but you show them all at different\naddresses in the 0x7fd... range, which is presumably where the shared\nmemory segment was mapped.\n\nThe reason you can't print out the results of dereferencing the\nvariables with * is because the memory to which the variables point is\nno longer mapped in the process, not because the variables haven't\nbeen initialized. 
If you looked at a variable that wasn't a pointer to\nshared memory, but rather, say, an integer, like max_safe_fds or\nMyCancelKey, I think you'd find that the value was preserved just\nfine. I think you're confusing the pointer with the data to which it\npoints.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 5 Aug 2020 09:14:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can a background worker exist without shared memory access for\n EXEC_BACKEND cases?" } ]
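
As a concrete illustration of the worker being discussed above, the following is a minimal, hypothetical module sketch — the names no_shmem_worker and logger_main are invented for the example — that registers a background worker with bgw_flags = 0, i.e. without BGWORKER_SHMEM_ACCESS. The scalar backend variables restored via restore_backend_variables() on EXEC_BACKEND builds (max_safe_fds, MyCancelKey and the like) remain usable in its main loop, but the pointers shown in the gdb session above (MyLatch, ProcGlobal, ShmemVariableCache, ...) must not be dereferenced there, because StartBackgroundWorker() detaches the main shared memory segment before calling the worker entry point.

#include "postgres.h"

#include <signal.h>

#include "fmgr.h"
#include "miscadmin.h"
#include "postmaster/bgworker.h"
#include "storage/ipc.h"

PG_MODULE_MAGIC;

void		_PG_init(void);
void		logger_main(Datum main_arg);

static volatile sig_atomic_t got_sigterm = false;

static void
logger_sigterm(SIGNAL_ARGS)
{
	got_sigterm = true;
}

void
logger_main(Datum main_arg)
{
	/* Signals start out blocked in a bgworker; install a handler, then unblock. */
	pqsignal(SIGTERM, logger_sigterm);
	BackgroundWorkerUnblockSignals();

	while (!got_sigterm)
	{
		/*
		 * Only state copied by fork(), or restored by read_backend_variables()
		 * on EXEC_BACKEND builds, is safe to use here; anything pointing into
		 * the main shared memory segment is unusable once it is detached.
		 */
		pg_usleep(1000000L);	/* 1 second */
	}

	proc_exit(0);
}

void
_PG_init(void)
{
	BackgroundWorker worker;

	memset(&worker, 0, sizeof(worker));
	worker.bgw_flags = 0;		/* deliberately not BGWORKER_SHMEM_ACCESS */
	worker.bgw_start_time = BgWorkerStart_PostmasterStart;
	worker.bgw_restart_time = BGW_NEVER_RESTART;
	snprintf(worker.bgw_library_name, BGW_MAXLEN, "no_shmem_worker");
	snprintf(worker.bgw_function_name, BGW_MAXLEN, "logger_main");
	snprintf(worker.bgw_name, BGW_MAXLEN, "no_shmem_worker logger");
	RegisterBackgroundWorker(&worker);
}

Loading it via shared_preload_libraries = 'no_shmem_worker' would be the usual way to have _PG_init() run in the postmaster so that RegisterBackgroundWorker() can take effect.
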
[ { "msg_contents": "Hi,\n\nThere are one or two failures per month on crake. It looks like when\nauthentication is rejected, as expected in the tests, the psql process\nis exiting, but there is a race where the Perl script still wants to\nwrite a dummy query to its stdin (?), so you get:\n\npsql: FATAL: LDAP authentication failed for user \"test1\"\nack Broken pipe: write( 13, 'SELECT 1' ) at\n/usr/share/perl5/vendor_perl/IPC/Run/IO.pm line 549.\n\nExample:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2019-11-10%2023%3A36%3A04\n\ntmunro=> select animal, snapshot, branch from run where fail_stage =\n'ldapCheck' order by snapshot desc;\n animal | snapshot | branch\n--------+---------------------+---------------\n crake | 2020-08-02 02:32:30 | REL_13_STABLE\n crake | 2020-07-22 23:36:04 | REL_12_STABLE\n crake | 2020-07-14 00:52:04 | REL_13_STABLE\n crake | 2020-05-15 17:35:05 | REL_11_STABLE\n crake | 2020-04-07 20:51:03 | REL_12_STABLE\n mantid | 2020-03-04 18:17:58 | REL_12_STABLE\n mantid | 2020-03-04 17:59:58 | REL_11_STABLE\n crake | 2020-01-17 14:33:21 | REL_12_STABLE\n crake | 2019-11-10 23:36:04 | REL_11_STABLE\n crake | 2019-09-09 08:48:25 | HEAD\n crake | 2019-08-05 21:18:23 | REL_12_STABLE\n crake | 2019-07-19 01:33:31 | HEAD\n crake | 2019-07-16 01:06:02 | REL_11_STABLE\n(13 rows)\n\n(Ignore mantid, it had a temporary setup problem that was resolved.)\n\n\n", "msg_date": "Sun, 2 Aug 2020 17:29:57 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "LDAP check flapping on crake due to race" }, { "msg_contents": "On Sun, Aug 02, 2020 at 05:29:57PM +1200, Thomas Munro wrote:\n> There are one or two failures per month on crake. It looks like when\n> authentication is rejected, as expected in the tests, the psql process\n> is exiting, but there is a race where the Perl script still wants to\n> write a dummy query to its stdin (?), so you get:\n> \n> psql: FATAL: LDAP authentication failed for user \"test1\"\n> ack Broken pipe: write( 13, 'SELECT 1' ) at\n> /usr/share/perl5/vendor_perl/IPC/Run/IO.pm line 549.\n\nDo you suppose a fix like e12a472 would cover this? (\"psql <&-\" fails with\nstatus 1 after successful authentication, and authentication failure gives\nstatus 2.)\n\n\n", "msg_date": "Sat, 1 Aug 2020 23:10:48 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: LDAP check flapping on crake due to race" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sun, Aug 02, 2020 at 05:29:57PM +1200, Thomas Munro wrote:\n>> There are one or two failures per month on crake. It looks like when\n>> authentication is rejected, as expected in the tests, the psql process\n>> is exiting, but there is a race where the Perl script still wants to\n>> write a dummy query to its stdin (?), so you get:\n>> psql: FATAL: LDAP authentication failed for user \"test1\"\n>> ack Broken pipe: write( 13, 'SELECT 1' ) at\n>> /usr/share/perl5/vendor_perl/IPC/Run/IO.pm line 549.\n\n> Do you suppose a fix like e12a472 would cover this? (\"psql <&-\" fails with\n> status 1 after successful authentication, and authentication failure gives\n> status 2.)\n\nAFAICT the failure is happening down inside PostgresNode::psql's call\nof IPC::Run::run, so we don't really have the option to adjust things\nin exactly that way. 
(And messing with sub psql for the benefit of\nthis one caller seems pretty risky anyway.)\n\nI'm inclined to suggest that the LDAP test's test_access could use\nan empty stdin and pass \"-c 'SELECT 1'\" as a command line option\ninstead. (Maybe that's exactly what you meant, but I'm not sure.)\n\nI've not been able to duplicate this locally, so I have no idea if\nthat'd really fix it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 Aug 2020 12:09:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: LDAP check flapping on crake due to race" }, { "msg_contents": "On Mon, Aug 3, 2020 at 4:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm inclined to suggest that the LDAP test's test_access could use\n> an empty stdin and pass \"-c 'SELECT 1'\" as a command line option\n> instead. (Maybe that's exactly what you meant, but I'm not sure.)\n\nGood idea. Here's a patch like that.\n\n> I've not been able to duplicate this locally, so I have no idea if\n> that'd really fix it.\n\nMe neither -- I guess someone who enjoys perl could hack IPC::Run to\ntake a short nap at the right moment.", "msg_date": "Mon, 3 Aug 2020 12:12:57 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LDAP check flapping on crake due to race" }, { "msg_contents": "On Mon, Aug 03, 2020 at 12:12:57PM +1200, Thomas Munro wrote:\n> On Mon, Aug 3, 2020 at 4:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'm inclined to suggest that the LDAP test's test_access could use\n> > an empty stdin and pass \"-c 'SELECT 1'\" as a command line option\n> > instead. (Maybe that's exactly what you meant, but I'm not sure.)\n> \n> Good idea. Here's a patch like that.\n\nWhile I had meant a different approach, this is superior.\n\n> > I've not been able to duplicate this locally, so I have no idea if\n> > that'd really fix it.\n> \n> Me neither -- I guess someone who enjoys perl could hack IPC::Run to\n> take a short nap at the right moment.\n\nNot essential to reproduce first, I think.\n\n\n", "msg_date": "Sun, 2 Aug 2020 17:28:57 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: LDAP check flapping on crake due to race" }, { "msg_contents": "On Mon, Aug 3, 2020 at 12:29 PM Noah Misch <noah@leadboat.com> wrote:\n> On Mon, Aug 03, 2020 at 12:12:57PM +1200, Thomas Munro wrote:\n> > On Mon, Aug 3, 2020 at 4:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > I'm inclined to suggest that the LDAP test's test_access could use\n> > > an empty stdin and pass \"-c 'SELECT 1'\" as a command line option\n> > > instead. (Maybe that's exactly what you meant, but I'm not sure.)\n> >\n> > Good idea. Here's a patch like that.\n>\n> While I had meant a different approach, this is superior.\n\nThanks. Pushed.\n\n\n", "msg_date": "Mon, 3 Aug 2020 12:52:03 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LDAP check flapping on crake due to race" } ]
[ { "msg_contents": "Hello,\n\nUnder the next version of macOS (11.0 unreleased beta 3), configuring Postgres 9.5 and 9.6 fails with\n\n> checking test program... ok\n> checking whether long int is 64 bits... no\n> checking whether long long int is 64 bits... no\n> configure: error: Cannot find a working 64-bit integer type.\n\n\nThis has been fixed for Postgres 10 and onwards by the following commit. It would be nice for this to be back-ported for people building 9.5 or 9.6 on MacOS.\n\n> commit 1c0cf52b39ca3a9a79661129cff918dc000a55eb\n> Author: Peter Eisentraut <peter_e@gmx.net>\n> Date: Tue Aug 30 12:00:00 2016 -0400\n> \n> Use return instead of exit() in configure\n> \n> Using exit() requires stdlib.h, which is not included. Use return\n> instead. Also add return type for main().\n> \n> Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>\n> Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>\n> \n> diff --git a/config/c-compiler.m4 b/config/c-compiler.m4\n> index a7f6773ae1..7d901e1f1a 100644\n> --- a/config/c-compiler.m4\n> +++ b/config/c-compiler.m4\n> @@ -71,8 +71,10 @@ int does_int64_work()\n> return 0;\n> return 1;\n> }\n> +\n> +int\n> main() {\n> - exit(! does_int64_work());\n> + return (! does_int64_work());\n> }])],\n> [Ac_cachevar=yes],\n> [Ac_cachevar=no],\n> diff --git a/config/c-library.m4 b/config/c-library.m4\n> index d330b0cf95..0a7452c176 100644\n> --- a/config/c-library.m4\n> +++ b/config/c-library.m4\n> @@ -204,8 +204,10 @@ int does_int64_snprintf_work()\n> return 0;\t\t\t/* either multiply or snprintf is busted */\n> return 1;\n> }\n> +\n> +int\n> main() {\n> - exit(! does_int64_snprintf_work());\n> + return (! does_int64_snprintf_work());\n> }]])],\n> [pgac_cv_snprintf_long_long_int_modifier=$pgac_modifier; break],\n> [],\n> diff --git a/configure b/configure\n> index 55c771a11e..3eb0faf77d 100755\n> --- a/configure\n> +++ b/configure\n> @@ -13594,8 +13594,10 @@ int does_int64_work()\n> return 0;\n> return 1;\n> }\n> +\n> +int\n> main() {\n> - exit(! does_int64_work());\n> + return (! does_int64_work());\n> }\n> _ACEOF\n> if ac_fn_c_try_run \"$LINENO\"; then :\n> @@ -13676,8 +13678,10 @@ int does_int64_work()\n> return 0;\n> return 1;\n> }\n> +\n> +int\n> main() {\n> - exit(! does_int64_work());\n> + return (! does_int64_work());\n> }\n> _ACEOF\n> if ac_fn_c_try_run \"$LINENO\"; then :\n> @@ -13770,8 +13774,10 @@ int does_int64_snprintf_work()\n> return 0;\t\t\t/* either multiply or snprintf is busted */\n> return 1;\n> }\n> +\n> +int\n> main() {\n> - exit(! does_int64_snprintf_work());\n> + return (! does_int64_snprintf_work());\n> }\n> _ACEOF\n> if ac_fn_c_try_run \"$LINENO\"; then :\n\nKindly,\nThomas Gilligan\nthomas.gilligan@icloud.com\n\n", "msg_date": "Mon, 3 Aug 2020 01:04:52 +1000", "msg_from": "Thomas Gilligan <thomas.gilligan@icloud.com>", "msg_from_op": true, "msg_subject": "Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" }, { "msg_contents": "Thomas Gilligan <thomas.gilligan@icloud.com> writes:\n> Under the next version of macOS (11.0 unreleased beta 3), configuring Postgres 9.5 and 9.6 fails with\n\n>> checking test program... ok\n>> checking whether long int is 64 bits... no\n>> checking whether long long int is 64 bits... no\n>> configure: error: Cannot find a working 64-bit integer type.\n\nHm, could we see the config.log output for this? 
I'm not 100% convinced\nthat you've diagnosed the problem accurately, because it'd imply that\nApple made some fundamentally incompatible changes in libc, which\nseems like stirring up trouble for nothing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 Aug 2020 17:18:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" }, { "msg_contents": "On 2020-08-02 23:18, Tom Lane wrote:\n> Thomas Gilligan <thomas.gilligan@icloud.com> writes:\n>> Under the next version of macOS (11.0 unreleased beta 3), configuring Postgres 9.5 and 9.6 fails with\n> \n>>> checking test program... ok\n>>> checking whether long int is 64 bits... no\n>>> checking whether long long int is 64 bits... no\n>>> configure: error: Cannot find a working 64-bit integer type.\n> \n> Hm, could we see the config.log output for this? I'm not 100% convinced\n> that you've diagnosed the problem accurately, because it'd imply that\n> Apple made some fundamentally incompatible changes in libc, which\n> seems like stirring up trouble for nothing.\n\nIt looks like the new compiler errors out on calling undeclared \nfunctions. Might be good to see config.log to confirm this.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 25 Aug 2020 15:57:11 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" }, { "msg_contents": "Hi Peter,\n\nYeah it's funny I got this immediately after upgrading to Big Sur (beta\n5). I found commit 1c0cf52b39ca3 but couldn't quite find the mailing\nlist thread on it from 4 years ago (it lists Heikki and Thomas Munro as\nreviewers). Was it prompted by a similar error you encountered?\n\nOn Tue, Aug 25, 2020 at 6:57 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-08-02 23:18, Tom Lane wrote:\n> > Thomas Gilligan <thomas.gilligan@icloud.com> writes:\n> >> Under the next version of macOS (11.0 unreleased beta 3), configuring Postgres 9.5 and 9.6 fails with\n> >\n> >>> checking test program... ok\n> >>> checking whether long int is 64 bits... no\n> >>> checking whether long long int is 64 bits... no\n> >>> configure: error: Cannot find a working 64-bit integer type.\n> >\n> > Hm, could we see the config.log output for this? I'm not 100% convinced\n> > that you've diagnosed the problem accurately, because it'd imply that\n> > Apple made some fundamentally incompatible changes in libc, which\n> > seems like stirring up trouble for nothing.\n>\n> It looks like the new compiler errors out on calling undeclared\n> functions. 
Might be good to see config.log to confirm this.\n\nYeah here's an excerpt from config.log verbatim (I'm not wrapping the\nlines):\n\n| configure:13802: checking whether long int is 64 bits\n| configure:13860: ccache clang -o conftest -Wall -Wmissing-prototypes\n-Wpointer-arith -Wdeclaration-after-statement -Wendif-labels\n-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing\n-fwrapv -Wno-unused-command-line-argument -g -O0 conftest.c -lz\n-lreadline -lm >&5\n| conftest.c:169:5: warning: no previous prototype for function\n'does_int64_work' [-Wmissing-prototypes]\n| int does_int64_work()\n| ^\n| conftest.c:169:1: note: declare 'static' if the function is not\nintended to be used outside of this translation unit\n| int does_int64_work()\n| ^\n| static\n| conftest.c:183:1: warning: type specifier missing, defaults to 'int'\n[-Wimplicit-int]\n| main() {\n| ^\n| conftest.c:184:3: error: implicitly declaring library function\n'exit' with type 'void (int) __attribute__((noreturn))'\n[-Werror,-Wimplicit-function-declaration]\n| exit(! does_int64_work());\n| ^\n| conftest.c:184:3: note: include the header <stdlib.h> or explicitly\nprovide a declaration for 'exit'\n| 2 warnings and 1 error generated.\n\nCheers,\nJesse\n\n\n", "msg_date": "Wed, 2 Sep 2020 13:43:54 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" }, { "msg_contents": "On 2020-Sep-02, Jesse Zhang wrote:\n\n> Hi Peter,\n> \n> Yeah it's funny I got this immediately after upgrading to Big Sur (beta\n> 5). I found commit 1c0cf52b39ca3 but couldn't quite find the mailing\n> list thread on it from 4 years ago (it lists Heikki and Thomas Munro as\n> reviewers). Was it prompted by a similar error you encountered?\n\nhttps://postgr.es/m/bf9de63c-b669-4b8c-d33b-4a5ed11cd5d4@2ndquadrant.com\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 2 Sep 2020 17:18:53 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" }, { "msg_contents": "Wow thanks Alvaro! My search of \"most obvious keywords\" didn't turn this\nup.\n\nOn Wed, Sep 2, 2020 at 2:18 PM Alvaro Herrera wrote:\n>\n> On 2020-Sep-02, Jesse Zhang wrote:\n>\n> > Hi Peter,\n> >\n> > Yeah it's funny I got this immediately after upgrading to Big Sur (beta\n> > 5). I found commit 1c0cf52b39ca3 but couldn't quite find the mailing\n> > list thread on it from 4 years ago (it lists Heikki and Thomas Munro as\n> > reviewers). Was it prompted by a similar error you encountered?\n>\n> https://postgr.es/m/bf9de63c-b669-4b8c-d33b-4a5ed11cd5d4@2ndquadrant.com\n\nCheers,\nJesse\n\n\n", "msg_date": "Wed, 2 Sep 2020 14:32:20 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" }, { "msg_contents": "On 2020-09-02 22:43, Jesse Zhang wrote:\n> | conftest.c:184:3: error: implicitly declaring library function\n> 'exit' with type 'void (int) __attribute__((noreturn))'\n> [-Werror,-Wimplicit-function-declaration]\n> | exit(! 
does_int64_work());\n> | ^\n> | conftest.c:184:3: note: include the header <stdlib.h> or explicitly\n> provide a declaration for 'exit'\n> | 2 warnings and 1 error generated.\n\nWhere did the -Werror come from?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 3 Sep 2020 06:57:47 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-09-02 22:43, Jesse Zhang wrote:\n>> | conftest.c:184:3: error: implicitly declaring library function\n>> 'exit' with type 'void (int) __attribute__((noreturn))'\n>> [-Werror,-Wimplicit-function-declaration]\n\n> Where did the -Werror come from?\n\nPeter wasn't entirely explicit here, but note the advice at the end of\n\nhttps://www.postgresql.org/docs/devel/install-procedure.html\n\nthat you cannot include -Werror in any CFLAGS you tell configure\nto use. It'd be nice if autoconf was more robust about that,\nbut it is not our bug to fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Sep 2020 01:40:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" }, { "msg_contents": "Hi Tom and Peter,\n\nOn Wed, Sep 2, 2020 at 10:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > On 2020-09-02 22:43, Jesse Zhang wrote:\n> >> | conftest.c:184:3: error: implicitly declaring library function\n> >> 'exit' with type 'void (int) __attribute__((noreturn))'\n> >> [-Werror,-Wimplicit-function-declaration]\n>\n> > Where did the -Werror come from?\n>\n> Peter wasn't entirely explicit here, but note the advice at the end of\n>\n> https://www.postgresql.org/docs/devel/install-procedure.html\n>\n> that you cannot include -Werror in any CFLAGS you tell configure\n> to use. It'd be nice if autoconf was more robust about that,\n> but it is not our bug to fix.\n>\n> regards, tom lane\n\nIf you noticed the full invocation of clang, you'd notice that Werror is\nnowhere on the command line, even though the error message suggests\notherwise. 
I think this is a behavior from the new AppleClang, here's\nthe minimal repro:\n\nint main() { exit(0); }\n\nAnd boom!\n\n$ clang -c c.c\nc.c:1:14: error: implicitly declaring library function 'exit' with\ntype 'void (int) __attribute__((noreturn))'\n[-Werror,-Wimplicit-function-declaration]\nint main() { exit(0); }\n ^\nc.c:1:14: note: include the header <stdlib.h> or explicitly provide a\ndeclaration for 'exit'\n1 error generated.\n\nMy environment:\n\n$ uname -rsv\nDarwin 20.0.0 Darwin Kernel Version 20.0.0: Fri Aug 14 00:25:13 PDT\n2020; root:xnu-7195.40.44.151.1~4/RELEASE_X86_64\n$ clang --version\nApple clang version 12.0.0 (clang-1200.0.31.1)\nTarget: x86_64-apple-darwin20.0.0\nThread model: posix\nInstalledDir: /Library/Developer/CommandLineTools/usr/bin\n\nI've heard reports of the same under the latest Xcode 12 on macOS\nCatalina, but I don't have my hands on such an env.\n\nCheers,\nJesse\n\n\n", "msg_date": "Thu, 3 Sep 2020 08:52:51 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" }, { "msg_contents": "Jesse Zhang <sbjesse@gmail.com> writes:\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> Where did the -Werror come from?\n\n> If you noticed the full invocation of clang, you'd notice that Werror is\n> nowhere on the command line, even though the error message suggests\n> otherwise. I think this is a behavior from the new AppleClang,\n\nHmph. If you explicitly say -Wno-error, does the error drop back down\nto being a warning?\n\n> I've heard reports of the same under the latest Xcode 12 on macOS\n> Catalina, but I don't have my hands on such an env.\n\nThe latest thing available to the unwashed masses seems to be\nXcode 11.7 with\n\n$ clang --version\nApple clang version 11.0.3 (clang-1103.0.32.62)\n\nAt least, that's what I got when I reinstalled Xcode just now on\nmy Catalina machine. It does not exhibit this behavior. I see\n\n$ clang -c c.c\nc.c:1:14: warning: implicitly declaring library function 'exit' with type 'void\n (int) __attribute__((noreturn))' [-Wimplicit-function-declaration]\nint main() { exit(0); }\n ^\nc.c:1:14: note: include the header <stdlib.h> or explicitly provide a\n declaration for 'exit'\n1 warning generated.\n\nand PG configure and build goes through just fine.\n\nSmells like an Apple bug from here. Surely they're not expecting\nthat anyone will appreciate -Werror suddenly being the default.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Sep 2020 13:36:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" }, { "msg_contents": "On 2020-09-03 19:36, Tom Lane wrote:\n> At least, that's what I got when I reinstalled Xcode just now on\n> my Catalina machine. It does not exhibit this behavior. I see\n> \n> $ clang -c c.c\n> c.c:1:14: warning: implicitly declaring library function 'exit' with type 'void\n> (int) __attribute__((noreturn))' [-Wimplicit-function-declaration]\n> int main() { exit(0); }\n> ^\n> c.c:1:14: note: include the header <stdlib.h> or explicitly provide a\n> declaration for 'exit'\n> 1 warning generated.\n> \n> and PG configure and build goes through just fine.\n> \n> Smells like an Apple bug from here. Surely they're not expecting\n> that anyone will appreciate -Werror suddenly being the default.\n\nIIRC, calling an undeclared function is (or may be?) an error in C99. 
\nSo perhaps the implicit -Werror only applies to this particular warning \nclass.\n\nI suppose backpatching the patch that fixed this would be appropriate.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 4 Sep 2020 07:46:52 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I suppose backpatching the patch that fixed this would be appropriate.\n\n[ confused ... ] Back-patching what patch?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Sep 2020 01:52:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" }, { "msg_contents": "On 2020-09-04 07:52, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> I suppose backpatching the patch that fixed this would be appropriate.\n> \n> [ confused ... ] Back-patching what patch?\n\nCommit 1c0cf52b39ca3a9a79661129cff918dc000a55eb was mentioned at the \nbeginning of the thread.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 4 Sep 2020 14:21:23 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" }, { "msg_contents": "On Thu, Sep 3, 2020 at 10:36 AM Tom Lane wrote:\n>\n> Jesse Zhang writes:\n> >> Peter Eisentraut writes:\n> >>> Where did the -Werror come from?\n>\n> > If you noticed the full invocation of clang, you'd notice that Werror is\n> > nowhere on the command line, even though the error message suggests\n> > otherwise. I think this is a behavior from the new AppleClang,\n>\n> Hmph. 
If you explicitly say -Wno-error, does the error drop back down\n> to being a warning?\n\nYeah something like that, this is what works for me:\n\nclang -Wno-error=implicit-function-declaration c.c\n\nThen it became a warning.\n\nInterestingly, it seems that AppleClang incorrectly forces this warning\non us:\n\n$ clang --verbose c.c\n\n| Apple clang version 12.0.0 (clang-1200.0.31.1)\n| Target: x86_64-apple-darwin20.0.0\n| Thread model: posix\n| InstalledDir: /Library/Developer/CommandLineTools/usr/bin\n| \"/Library/Developer/CommandLineTools/usr/bin/clang\" -cc1 -triple\nx86_64-apple-macosx11.0.0 -Wdeprecated-objc-isa-usage\n-Werror=deprecated-objc-isa-usage\n-Werror=implicit-function-declaration -emit-obj -mrelax-all\n-disable-free -disable-llvm-verifier -discard-value-names\n-main-file-name c.c -mrelocation-model pic -pic-level 2 -mthread-model\nposix -mframe-pointer=all -fno-strict-return -masm-verbose\n-munwind-tables -target-sdk-version=11.0 -target-cpu penryn\n-dwarf-column-info -debugger-tuning=lldb -target-linker-version 609 -v\n-resource-dir /Library/Developer/CommandLineTools/usr/lib/clang/12.0.0\n-isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk\n-I/usr/local/include -internal-isystem\n/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/local/include\n-internal-isystem\n/Library/Developer/CommandLineTools/usr/lib/clang/12.0.0/include\n-internal-externc-isystem\n/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include\n-internal-externc-isystem\n/Library/Developer/CommandLineTools/usr/include -Wno-reorder-init-list\n-Wno-implicit-int-float-conversion -Wno-c99-designator\n-Wno-final-dtor-non-final-class -Wno-extra-semi-stmt\n-Wno-misleading-indentation -Wno-quoted-include-in-framework-header\n-Wno-implicit-fallthrough -Wno-enum-enum-conversion\n-Wno-enum-float-conversion -fdebug-compilation-dir /tmp -ferror-limit\n19 -fmessage-length 193 -stack-protector 1 -fstack-check\n-mdarwin-stkchk-strong-link -fblocks -fencode-extended-block-signature\n-fregister-global-dtors-with-atexit -fgnuc-version=4.2.1\n-fobjc-runtime=macosx-11.0.0 -fmax-type-align=16\n-fdiagnostics-show-option -fcolor-diagnostics -o\n/var/folders/ts/nxrsmhmd0xb5zdrlqb4jlkbr0000gn/T/c-e6802f.o -x c c.c\n| clang -cc1 version 12.0.0 (clang-1200.0.31.1) default target\nx86_64-apple-darwin20.0.0\n\nNotice that -Werror=implicit-function-declaration up there? I spent a\nfew minutes digging in Apple's published fork of LLVM, they've been\nforcing this error flag for quite a while, but this particular\nwarning-turned-error is guarded by a conditional along the lines of \"is\nthis iOS-like\" [1][2], so I cannot imagine such a code path is activated\n(other than something like \"goto fail;\" from 2014)\n\n>\n> > I've heard reports of the same under the latest Xcode 12 on macOS\n> > Catalina, but I don't have my hands on such an env.\n>\n> The latest thing available to the unwashed masses seems to be\n> Xcode 11.7 with\n\nYes you're right. Xcode 12 is still beta.\n\n>\n> Smells like an Apple bug from here. Surely they're not expecting\n> that anyone will appreciate -Werror suddenly being the default.\n\nI think you've convinced me that this is an Apple bug indeed. 
I'll\nprobably just get by with a Wno-error=implicit-function-declaration in\nmy CFLAGS for now.\n\n\n[1] https://github.com/apple/llvm-project/blob/swift-5.3-DEVELOPMENT-SNAPSHOT-2020-08-04-a/clang/lib/Driver/ToolChains/Darwin.cpp#L952-L962\n[2] https://opensource.apple.com/source/clang/clang-800.0.42.1/src/tools/clang/lib/Driver/ToolChains.cpp\n\n\n", "msg_date": "Fri, 4 Sep 2020 21:23:12 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" }, { "msg_contents": "Jesse Zhang <sbjesse@gmail.com> writes:\n> Notice that -Werror=implicit-function-declaration up there? I spent a\n> few minutes digging in Apple's published fork of LLVM, they've been\n> forcing this error flag for quite a while, but this particular\n> warning-turned-error is guarded by a conditional along the lines of \"is\n> this iOS-like\" [1][2],\n\nWow, [1] is interesting:\n\n // For iOS and watchOS, also error about implicit function declarations,\n // as that can impact calling conventions.\n if (!isTargetMacOSBased())\n CC1Args.push_back(\"-Werror=implicit-function-declaration\");\n\nI wonder if the new Xcode version dropped the not-macOS restriction\non doing this? It's not much of a stretch of the imagination\nto guess that the iOS/watchOS issue is related to Apple's ABI\nconventions for ARM, in which case they might have to do the\nsame for macOS to get it to run on ARM ... which we can expect\nthat Big Sur is ready for.\n\nAnyway, I'm now satisfied that we understand where the problem really\nlies, so +1 for back-patching 1c0cf52b39ca3.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 05 Sep 2020 11:27:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" }, { "msg_contents": "On 2020-09-05 17:27, Tom Lane wrote:\n> Jesse Zhang <sbjesse@gmail.com> writes:\n>> Notice that -Werror=implicit-function-declaration up there? I spent a\n>> few minutes digging in Apple's published fork of LLVM, they've been\n>> forcing this error flag for quite a while, but this particular\n>> warning-turned-error is guarded by a conditional along the lines of \"is\n>> this iOS-like\" [1][2],\n> \n> Wow, [1] is interesting:\n> \n> // For iOS and watchOS, also error about implicit function declarations,\n> // as that can impact calling conventions.\n> if (!isTargetMacOSBased())\n> CC1Args.push_back(\"-Werror=implicit-function-declaration\");\n> \n> I wonder if the new Xcode version dropped the not-macOS restriction\n> on doing this? It's not much of a stretch of the imagination\n> to guess that the iOS/watchOS issue is related to Apple's ABI\n> conventions for ARM, in which case they might have to do the\n> same for macOS to get it to run on ARM ... which we can expect\n> that Big Sur is ready for.\n> \n> Anyway, I'm now satisfied that we understand where the problem really\n> lies, so +1 for back-patching 1c0cf52b39ca3.\n\ndone\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 8 Sep 2020 10:12:16 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fix for configure error in 9.5/9.6 on macOS 11.0 Big Sur" } ]
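A minimal C sketch of the probe shape discussed in the thread above, for readers hitting the same "Cannot find a working 64-bit integer type" failure. Only the exit-versus-return difference in main() is taken from the back-patched commit 1c0cf52b39ca3; the typedef and the body of does_int64_work() below are stand-ins for what configure actually generates, not the real test program.

    /*
     * Stand-in for configure's "checking whether long int is 64 bits" probe.
     * The arithmetic is only a placeholder for the real checks.
     */
    typedef long int my_int64;      /* candidate 64-bit type under test */

    static int
    does_int64_work(void)
    {
        my_int64 c = (my_int64) 6000000000LL / 1000000;

        return c == 6000;           /* only true if the type really holds 64 bits */
    }

    /*
     * 9.5/9.6 ended the probe with "main() { exit(! does_int64_work()); }"
     * without including <stdlib.h>; the implicitly declared exit() becomes a
     * hard error under the new AppleClang's forced
     * -Werror=implicit-function-declaration, so the probe "fails" and
     * configure reports that no 64-bit integer type works.  The back-patched
     * form avoids the libc call entirely:
     */
    int
    main(void)
    {
        return (!does_int64_work());
    }

Until a release containing the back-patch is available, adding -Wno-error=implicit-function-declaration to CFLAGS, as mentioned above, is another way to let the probes run.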
[ { "msg_contents": "As previously discussed at [1], contrib/intarray's GiST opclasses\ndo not index empty arrays in a useful way, meaning that\n\"indexedcol <@ something\" has to do a full-index search to ensure\nthat it finds empty arrays, which such a query should always find.\nWe'd be better off to not consider <@ indexable at all by these\nopclasses, but removing it has been problematic because of\ndependencies [2]. Now that commit 9f9682783 is in, the dependency\nproblem is fixed, so here are a couple of patches to remove the\noperator's opclass membership.\n\nPatch 0001 is a minimal patch to just drop the opclass membership.\nWe could do that and stop there, but if we do, <@ searches will\ncontinue to be slow until people think to update their extensions\n(which pg_upgrade does nothing to encourage). Alternatively,\nwe could replace the now-dead support code with something that\nthrows an error telling people to update the extension, as in 0002.\n\nI'm honestly not sure whether 0002 is a good idea or not. Thoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/458.1565114141%40sss.pgh.pa.us\n[2] https://www.postgresql.org/message-id/flat/4578.1565195302%40sss.pgh.pa.us", "msg_date": "Sun, 02 Aug 2020 13:37:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Removing <@ from contrib/intarray's GiST opclasses" } ]
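A rough C sketch of the direction patch 0002 describes (replacing the now-dead <@ support code with an error telling users to update the extension), included only to make the trade-off concrete. The helper name, the error wording, and the idea of keying on RTContainedByStrategyNumber inside the GiST consistent support code are illustrative assumptions, not code taken from the attached patches.

    #include "postgres.h"

    #include "access/stratnum.h"

    /*
     * Hypothetical stub: if a not-yet-updated installation still routes
     * <@ (contained-by) through this opclass, refuse the scan with a hint
     * instead of silently falling back to a full-index search.
     */
    static void
    reject_stale_containedby(StrategyNumber strategy)
    {
        if (strategy == RTContainedByStrategyNumber)
            ereport(ERROR,
                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                     errmsg("<@ is no longer supported by this operator class"),
                     errhint("Update the intarray extension to drop the stale operator entry.")));
    }

Whether 0002 is worth shipping comes down to exactly this choice: a quietly slow full-index scan for installations that have not updated the extension, versus a hard error that forces the update.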
[ { "msg_contents": "Core was generated by `postgres: telsasoft ts [local] BIND '.\n\n(gdb) bt\n#0 0x00007f0951303387 in raise () from /lib64/libc.so.6\n#1 0x00007f0951304a78 in abort () from /lib64/libc.so.6\n#2 0x0000000000921005 in ExceptionalCondition (conditionName=conditionName@entry=0xa5db3d \"pd_idx == pinfo->nparts\", errorType=errorType@entry=0x977389 \"FailedAssertion\", \n fileName=fileName@entry=0xa5da88 \"execPartition.c\", lineNumber=lineNumber@entry=1689) at assert.c:67\n#3 0x0000000000672806 in ExecCreatePartitionPruneState (planstate=planstate@entry=0x908f6d8, partitionpruneinfo=<optimized out>) at execPartition.c:1689\n#4 0x000000000068444a in ExecInitAppend (node=node@entry=0x7036b90, estate=estate@entry=0x11563f0, eflags=eflags@entry=16) at nodeAppend.c:132\n#5 0x00000000006731fd in ExecInitNode (node=0x7036b90, estate=estate@entry=0x11563f0, eflags=eflags@entry=16) at execProcnode.c:179\n#6 0x000000000069d03a in ExecInitResult (node=node@entry=0x70363d8, estate=estate@entry=0x11563f0, eflags=eflags@entry=16) at nodeResult.c:210\n#7 0x000000000067323c in ExecInitNode (node=0x70363d8, estate=estate@entry=0x11563f0, eflags=eflags@entry=16) at execProcnode.c:164\n#8 0x000000000069e834 in ExecInitSort (node=node@entry=0x7035ca8, estate=estate@entry=0x11563f0, eflags=eflags@entry=16) at nodeSort.c:210\n#9 0x0000000000672ff0 in ExecInitNode (node=0x7035ca8, estate=estate@entry=0x11563f0, eflags=eflags@entry=16) at execProcnode.c:313\n#10 0x00000000006812e8 in ExecInitAgg (node=node@entry=0x68311d0, estate=estate@entry=0x11563f0, eflags=eflags@entry=16) at nodeAgg.c:3292\n#11 0x0000000000672fb1 in ExecInitNode (node=0x68311d0, estate=estate@entry=0x11563f0, eflags=eflags@entry=16) at execProcnode.c:328\n#12 0x000000000068925a in ExecInitGatherMerge (node=node@entry=0x6830998, estate=estate@entry=0x11563f0, eflags=eflags@entry=16) at nodeGatherMerge.c:110\n#13 0x0000000000672f33 in ExecInitNode (node=0x6830998, estate=estate@entry=0x11563f0, eflags=eflags@entry=16) at execProcnode.c:348\n#14 0x00000000006812e8 in ExecInitAgg (node=node@entry=0x682eda8, estate=estate@entry=0x11563f0, eflags=eflags@entry=16) at nodeAgg.c:3292\n#15 0x0000000000672fb1 in ExecInitNode (node=node@entry=0x682eda8, estate=estate@entry=0x11563f0, eflags=eflags@entry=16) at execProcnode.c:328\n#16 0x000000000066c8e6 in InitPlan (eflags=16, queryDesc=<optimized out>) at execMain.c:1020\n#17 standard_ExecutorStart (queryDesc=<optimized out>, eflags=16) at execMain.c:266\n#18 0x00007f0944ca83b5 in pgss_ExecutorStart (queryDesc=0x1239b08, eflags=<optimized out>) at pg_stat_statements.c:1007\n#19 0x00007f09117e4891 in explain_ExecutorStart (queryDesc=0x1239b08, eflags=<optimized out>) at auto_explain.c:301\n#20 0x00000000007f9983 in PortalStart (portal=0xeff810, params=0xfacc98, eflags=0, snapshot=0x0) at pquery.c:505\n#21 0x00000000007f7370 in PostgresMain (argc=<optimized out>, argv=argv@entry=0xeb8500, dbname=0xeb84e0 \"ts\", username=<optimized out>) at postgres.c:1987\n#22 0x000000000048916e in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4523\n#23 BackendStartup (port=0xeb1000) at postmaster.c:4215\n#24 ServerLoop () at postmaster.c:1727\n#25 0x000000000076ec85 in PostmasterMain (argc=argc@entry=13, argv=argv@entry=0xe859b0) at postmaster.c:1400\n#26 0x000000000048a82d in main (argc=13, argv=0xe859b0) at main.c:210\n\n#3 0x0000000000672806 in ExecCreatePartitionPruneState (planstate=planstate@entry=0x908f6d8, partitionpruneinfo=<optimized out>) at execPartition.c:1689\n 
pd_idx = <optimized out>\n pp_idx = <optimized out>\n pprune = 0x908f910\n partdesc = 0x91937f8\n pinfo = 0x7d6ee78\n partrel = <optimized out>\n partkey = 0xfbba28\n lc2__state = {l = 0x7d6ee20, i = 0}\n partrelpruneinfos = 0x7d6ee20\n lc2 = <optimized out>\n npartrelpruneinfos = <optimized out>\n prunedata = 0x908f908\n j = 0\n lc__state = {l = 0x7d6edc8, i = 0}\n estate = 0x11563f0\n prunestate = 0x908f8b0\n n_part_hierarchies = <optimized out>\n lc = <optimized out>\n i = 0\n\n(gdb) p *pinfo\n$2 = {type = T_PartitionedRelPruneInfo, rtindex = 7, present_parts = 0x7d6ef10, nparts = 414, subplan_map = 0x7d6ef68, subpart_map = 0x7d6f780, relid_map = 0x7d6ff98, initial_pruning_steps = 0x7d707b0, \n exec_pruning_steps = 0x0, execparamids = 0x0}\n\n(gdb) p pd_idx \n$3 = <optimized out>\n\n\n< 2020-08-02 02:04:17.358 SST >LOG: server process (PID 20954) was terminated by signal 6: Aborted\n< 2020-08-02 02:04:17.358 SST >DETAIL: Failed process was running: \n INSERT INTO child.cdrs_data_users_per_cell_20200801 (...list of columns elided...)\n (\n SELECT ..., $3::timestamp, $2,\n MODE() WITHIN GROUP (ORDER BY ...) AS ..., STRING_AGG(DISTINCT ..., ',') AS ..., ...\n\nThis crashed at 2am, which at first I thought was maybe due to simultaneously\ncreating today's partition.\n\nAug 2 02:04:08 telsasoftsky abrt-hook-ccpp: Process 19264 (postgres) of user 26 killed by SIGABRT - dumping core\nAug 2 02:04:17 telsasoftsky abrt-hook-ccpp: Process 20954 (postgres) of user 26 killed by SIGABRT - ignoring (repeated crash)\n\nRunning:\npostgresql13-server-13-beta2_1PGDG.rhel7.x86_64\n\nMaybe this is a problem tickled by something new in v13. However, this is a\nnew VM, and at the time of the crash I was running a shell loop around\npg_restore, in reverse-chronological order. I have full logs, and I found that\njust CREATEd was a table which the crashing process would've tried to SELECT FROM:\n\n| 2020-08-02 02:04:01.48-11 | duration: 106.275 ms statement: CREATE TABLE child.cdrs_huawei_sgwrecord_2019_06_14 (\n\nThat table *currently* has:\n|Number of partitions: 416 (Use \\d+ to list them.)\nAnd the oldest table is still child.cdrs_huawei_sgwrecord_2019_06_14 (since the\nshell loop probably quickly spun through hundreds of pg_restores, failing to\nconnect to the database \"in recovery\"). And today's partition was already\ncreated, at: 2020-08-02 01:30:35. So I think \n\nBased on commit logs, I suspect this may be an \"older bug\", specifically maybe\nwith:\n\n|commit 898e5e3290a72d288923260143930fb32036c00c\n|Author: Robert Haas <rhaas@postgresql.org>\n|Date: Thu Mar 7 11:13:12 2019 -0500\n|\n| Allow ATTACH PARTITION with only ShareUpdateExclusiveLock.\n\nI don't think it matters, but the process surrounding the table being INSERTed\nINTO is more than a little special, involving renames, detaches, creation,\nre-attaching within a transaction. 
I think that doesn't matter though, and the\nissue is surrounding the table being SELECTed *from*, which is actually behind\na view.\n\n\n", "msg_date": "Sun, 2 Aug 2020 13:11:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "FailedAssertion(\"pd_idx == pinfo->nparts\", File: \"execPartition.c\",\n Line: 1689)" }, { "msg_contents": "On Sun, Aug 2, 2020 at 2:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Based on commit logs, I suspect this may be an \"older bug\", specifically maybe\n> with:\n>\n> |commit 898e5e3290a72d288923260143930fb32036c00c\n> |Author: Robert Haas <rhaas@postgresql.org>\n> |Date: Thu Mar 7 11:13:12 2019 -0500\n> |\n> | Allow ATTACH PARTITION with only ShareUpdateExclusiveLock.\n>\n> I don't think it matters, but the process surrounding the table being INSERTed\n> INTO is more than a little special, involving renames, detaches, creation,\n> re-attaching within a transaction. I think that doesn't matter though, and the\n> issue is surrounding the table being SELECTed *from*, which is actually behind\n> a view.\n\nThat's an entirely reasonable guess, but it doesn't seem easy to\nunderstand exactly what happened here based on the provided\ninformation. The assertion failure probably indicates that\npinfo->relid_map[] and partdesc->oids[] differ in some way other than\nadditional elements having been inserted into the latter. For example,\nsome elements might have disappeared, or the order might have changed.\nThis isn't supposed to happen, because DETACH PARTITION requires\nheavier locking, and the order changing without anything getting\ndetached should be impossible. But evidently it did. If we could dump\nout the two arrays in question it might shed more light on exactly how\nthings went wrong.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 3 Aug 2020 11:41:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Mon, Aug 03, 2020 at 11:41:37AM -0400, Robert Haas wrote:\n> On Sun, Aug 2, 2020 at 2:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Based on commit logs, I suspect this may be an \"older bug\", specifically maybe\n> > with:\n> >\n> > |commit 898e5e3290a72d288923260143930fb32036c00c\n> > |Author: Robert Haas <rhaas@postgresql.org>\n> > |Date: Thu Mar 7 11:13:12 2019 -0500\n> > |\n> > | Allow ATTACH PARTITION with only ShareUpdateExclusiveLock.\n> >\n> > I don't think it matters, but the process surrounding the table being INSERTed\n> > INTO is more than a little special, involving renames, detaches, creation,\n> > re-attaching within a transaction. I think that doesn't matter though, and the\n> > issue is surrounding the table being SELECTed *from*, which is actually behind\n> > a view.\n> \n> That's an entirely reasonable guess, but it doesn't seem easy to\n> understand exactly what happened here based on the provided\n> information. The assertion failure probably indicates that\n> pinfo->relid_map[] and partdesc->oids[] differ in some way other than\n> additional elements having been inserted into the latter. For example,\n> some elements might have disappeared, or the order might have changed.\n> This isn't supposed to happen, because DETACH PARTITION requires\n> heavier locking, and the order changing without anything getting\n> detached should be impossible. 
But evidently it did. If we could dump\n> out the two arrays in question it might shed more light on exactly how\n> things went wrong.\n\n(gdb) p *pinfo->relid_map@414\n$8 = {22652203, 22652104, 22651920, 22651654, 22647359, 22645269, 22644012, 22639600, 22638852, 22621975, 22615355, 22615256, 22615069, 22610573, 22606503, 22606404, 22600237, 22600131, 22596610, 22595013, \n 22594914, 22594725, 22594464, 22589317, 22589216, 22587504, 22582570, 22580264, 22577047, 22576948, 22576763, 22576656, 22574077, 22570911, 22570812, 22564524, 22564113, 22558519, 22557080, 22556981, 22556793, \n 22555205, 22550680, 22550579, 22548884, 22543899, 22540822, 22536665, 22536566, 22536377, 22535133, 22528876, 22527780, 22526065, 22521131, 22517878, 22513674, 22513575, 22513405, 22513288, 22507520, 22504728, \n 22504629, 22493699, 22466016, 22458641, 22457551, 22457421, 22457264, 22452879, 22449864, 22449765, 22443560, 22442952, 22436193, 22434644, 22434469, 22434352, 22430792, 22426903, 22426804, 22420732, 22420025, \n 22413050, 22411963, 22411864, 22411675, 22407652, 22404156, 22404049, 22397550, 22394622, 22390035, 22389936, 22389752, 22388386, 22383211, 22382115, 22381934, 22375210, 22370297, 22367878, 22367779, 22367586, \n 22362556, 22359928, 22358236, 22353374, 22348704, 22345692, 22345593, 22345399, 22341347, 22336809, 22336709, 22325812, 22292836, 22287756, 22287657, 22287466, 22283194, 22278659, 22278560, 22272041, 22269121, \n 22264424, 22264325, 22264135, 22260102, 22255418, 22254818, 22248841, 22245824, 22241490, 22241391, 22241210, 22240354, 22236224, 22235123, 22234060, 22228744, 22228345, 22228033, 22222528, 22222429, 22222330, \n 22222144, 22222045, 22218408, 22215986, 22215887, 22209311, 22209212, 22207919, 22205203, 22203385, 22203298, 22203211, 22203124, 22202954, 22202859, 22202772, 22201869, 22200438, 22197706, 22195027, 22194932, \n 22194834, 22191208, 22188412, 22187029, 22182238, 22182134, 22182030, 22181849, 22181737, 22181107, 22175811, 22175710, 22169859, 22169604, 22159266, 22158131, 22158021, 22157824, 22153348, 22153236, 22147308, \n 22146736, 22143778, 22143599, 22143471, 22138702, 22138590, 22132612, 22132513, 22132271, 22132172, 22131987, 21935599, 21932664, 21927997, 21925823, 21885889, 21862973, 21859854, 21859671, 21858869, 21853440, \n 21851884, 21845405, 21842901, 21837523, 21837413, 21837209, 21832347, 21829359, 21827652, 21822602, 21816150, 21805995, 21805812, 21805235, 21798914, 21798026, 21791895, 21791124, 21783854, 21783744, 21783540, \n 21780568, 21774797, 21774687, 21768326, 21764063, 21759627, 21759517, 21759311, 21755697, 21751690, 21751156, 21744906, 21738543, 21736176, 21735992, 21735769, 21727603, 21725956, 21716432, 21678682, 21670968, \n 21670858, 21670665, 21669342, 21661932, 21661822, 21655311, 21650838, 21646721, 21646611, 21646409, 21640984, 21637816, 21637706, 21631061, 21622723, 21621459, 21621320, 21621148, 21612902, 21612790, 21606170, \n 21602265, 21597910, 21597800, 21597605, 21592489, 21589415, 21589305, 21582910, 21578017, 21576758, 21576648, 21572692, 21566633, 21566521, 21560127, 21560017, 21553910, 21553800, 21553613, 21553495, 21549102, \n 21548992, 21542759, 21540922, 21532093, 21531983, 21531786, 21531676, 21531264, 21531154, 21525290, 21524817, 21519470, 21519360, 21519165, 21516571, 21514269, 21514159, 21508389, 21508138, 21508028, 21507830, \n 21503457, 21502484, 21496897, 21494287, 21493722, 21493527, 21491807, 21488530, 21486122, 21485766, 21485603, 21485383, 21481969, 21481672, 21476245, 21472576, 21468851, 21468741, 21468546, 
21467832, 21460086, \n 21425406, 21420632, 21420506, 21419974, 21417830, 21417365, 21408677, 21401314, 21400808, 21399725, 21399113, 21393312, 21393202, 21387393, 21384625, 21384361, 21384172, 21384054, 21379960, 21374013, 21365760, \n 21361813, 21361703, 21361504, 21358333, 21358220, 21352848, 21348896, 21348484, 21343591, 21337675, 21337472, 21331017, 21330907, 21325895, 21325785, 21325675, 21325565, 21325370, 21319929, 21316068, 21315958, \n 21312609, 21284187, 21262186, 21258549, 21258439, 21258279, 21258131, 21254759, 21251782, 21251094, 21250984, 21250874, 21250764, 21244302, 21239067, 21238951, 21238831, 21236783, 21235605, 21230205, 21166173, \n 21151836, 21151726, 21151608, 21151498, 21151388, 21151278, 21151168, 21151055, 2576248, 2576255, 2576262, 2576269, 2576276, 21456497, 22064128, 0}\n\n(gdb) p *partdesc->oids@415\n$12 = {22653702, 22652203, 22652104, 22651920, 22651654, 22647359, 22645269, 22644012, 22639600, 22638852, 22621975, 22615355, 22615256, 22615069, 22610573, 22606503, 22606404, 22600237, 22600131, 22596610,\n 22595013, 22594914, 22594725, 22594464, 22589317, 22589216, 22587504, 22582570, 22580264, 22577047, 22576948, 22576763, 22576656, 22574077, 22570911, 22570812, 22564524, 22564113, 22558519, 22557080, 22556981,\n 22556793, 22555205, 22550680, 22550579, 22548884, 22543899, 22540822, 22536665, 22536566, 22536377, 22535133, 22528876, 22527780, 22526065, 22521131, 22517878, 22513674, 22513575, 22513405, 22513288, 22507520,\n 22504728, 22504629, 22493699, 22466016, 22458641, 22457551, 22457421, 22457264, 22452879, 22449864, 22449765, 22443560, 22442952, 22436193, 22434644, 22434469, 22434352, 22430792, 22426903, 22426804, 22420732,\n 22420025, 22413050, 22411963, 22411864, 22411675, 22407652, 22404156, 22404049, 22397550, 22394622, 22390035, 22389936, 22389752, 22388386, 22383211, 22382115, 22381934, 22375210, 22370297, 22367878, 22367779,\n 22367586, 22362556, 22359928, 22358236, 22353374, 22348704, 22345692, 22345593, 22345399, 22341347, 22336809, 22336709, 22325812, 22292836, 22287756, 22287657, 22287466, 22283194, 22278659, 22278560, 22272041,\n 22269121, 22264424, 22264325, 22264135, 22260102, 22255418, 22254818, 22248841, 22245824, 22241490, 22241391, 22241210, 22240354, 22236224, 22235123, 22234060, 22228744, 22228345, 22228033, 22222528, 22222429,\n 22222330, 22222144, 22222045, 22218408, 22215986, 22215887, 22209311, 22209212, 22207919, 22205203, 22203385, 22203298, 22203211, 22203124, 22202954, 22202859, 22202772, 22201869, 22200438, 22197706, 22195027,\n 22194932, 22194834, 22191208, 22188412, 22187029, 22182238, 22182134, 22182030, 22181849, 22181737, 22181107, 22175811, 22175710, 22169859, 22169604, 22159266, 22158131, 22158021, 22157824, 22153348, 22153236,\n 22147308, 22146736, 22143778, 22143599, 22143471, 22138702, 22138590, 22132612, 22132513, 22132271, 22132172, 22131987, 21935599, 21932664, 21927997, 21925823, 21885889, 21862973, 21859854, 21859671, 21858869,\n 21853440, 21851884, 21845405, 21842901, 21837523, 21837413, 21837209, 21832347, 21829359, 21827652, 21822602, 21816150, 21805995, 21805812, 21805235, 21798914, 21798026, 21791895, 21791124, 21783854, 21783744,\n 21783540, 21780568, 21774797, 21774687, 21768326, 21764063, 21759627, 21759517, 21759311, 21755697, 21751690, 21751156, 21744906, 21738543, 21736176, 21735992, 21735769, 21727603, 21725956, 21716432, 21678682,\n 21670968, 21670858, 21670665, 21669342, 21661932, 21661822, 21655311, 21650838, 21646721, 21646611, 21646409, 21640984, 21637816, 21637706, 21631061, 21622723, 21621459, 
21621320, 21621148, 21612902, 21612790,\n 21606170, 21602265, 21597910, 21597800, 21597605, 21592489, 21589415, 21589305, 21582910, 21578017, 21576758, 21576648, 21572692, 21566633, 21566521, 21560127, 21560017, 21553910, 21553800, 21553613, 21553495,\n 21549102, 21548992, 21542759, 21540922, 21532093, 21531983, 21531786, 21531676, 21531264, 21531154, 21525290, 21524817, 21519470, 21519360, 21519165, 21516571, 21514269, 21514159, 21508389, 21508138, 21508028,\n 21507830, 21503457, 21502484, 21496897, 21494287, 21493722, 21493527, 21491807, 21488530, 21486122, 21485766, 21485603, 21485383, 21481969, 21481672, 21476245, 21472576, 21468851, 21468741, 21468546, 21467832,\n 21460086, 21425406, 21420632, 21420506, 21419974, 21417830, 21417365, 21408677, 21401314, 21400808, 21399725, 21399113, 21393312, 21393202, 21387393, 21384625, 21384361, 21384172, 21384054, 21379960, 21374013,\n 21365760, 21361813, 21361703, 21361504, 21358333, 21358220, 21352848, 21348896, 21348484, 21343591, 21337675, 21337472, 21331017, 21330907, 21325895, 21325785, 21325675, 21325565, 21325370, 21319929, 21316068,\n 21315958, 21312609, 21284187, 21262186, 21258549, 21258439, 21258279, 21258131, 21254759, 21251782, 21251094, 21250984, 21250874, 21250764, 21244302, 21239067, 21238951, 21238831, 21236783, 21235605, 21230205,\n 21166173, 21151836, 21151726, 21151608, 21151498, 21151388, 21151278, 21151168, 21151055, 2576248, 2576255, 2576262, 2576269, 2576276, 21456497, 22064128, 22628862}\n\nts=# SELECT 22628862 ::regclass; \nregclass | child.cdrs_huawei_msc_voice_2020_08_02\n\n=> This one was *probably* created around 00:30, but I didn't save logs earlier\nthan 0200. That table was probably involved in a query around 2020-08-02\n02:02:01.\n\nts=# SELECT 22653702 ::regclass; \nregclass | child.cdrs_huawei_msc_voice_2019_06_15\n\n=> This one was created by pg_restore at: 2020-08-02 02:03:24\n\nMaybe it's significant that the crash happened during BIND. This is a prepared\nquery.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 3 Aug 2020 11:11:33 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Tue, Aug 4, 2020 at 1:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Aug 03, 2020 at 11:41:37AM -0400, Robert Haas wrote:\n> > On Sun, Aug 2, 2020 at 2:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > Based on commit logs, I suspect this may be an \"older bug\", specifically maybe\n> > > with:\n> > >\n> > > |commit 898e5e3290a72d288923260143930fb32036c00c\n> > > |Author: Robert Haas <rhaas@postgresql.org>\n> > > |Date: Thu Mar 7 11:13:12 2019 -0500\n> > > |\n> > > | Allow ATTACH PARTITION with only ShareUpdateExclusiveLock.\n> > >\n> > > I don't think it matters, but the process surrounding the table being INSERTed\n> > > INTO is more than a little special, involving renames, detaches, creation,\n> > > re-attaching within a transaction. I think that doesn't matter though, and the\n> > > issue is surrounding the table being SELECTed *from*, which is actually behind\n> > > a view.\n> >\n> > That's an entirely reasonable guess, but it doesn't seem easy to\n> > understand exactly what happened here based on the provided\n> > information. The assertion failure probably indicates that\n> > pinfo->relid_map[] and partdesc->oids[] differ in some way other than\n> > additional elements having been inserted into the latter. 
For example,\n> > some elements might have disappeared, or the order might have changed.\n> > This isn't supposed to happen, because DETACH PARTITION requires\n> > heavier locking, and the order changing without anything getting\n> > detached should be impossible. But evidently it did. If we could dump\n> > out the two arrays in question it might shed more light on exactly how\n> > things went wrong.\n\nIt may be this commit that went into PG 12 that is causing the problem:\n\ncommit 428b260f87e8861ba8e58807b69d433db491c4f4\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Sat Mar 30 18:58:55 2019 -0400\n\n Speed up planning when partitions can be pruned at plan time.\n\nwhich had this:\n\n- /* Double-check that list of relations has not changed. */\n- Assert(memcmp(partdesc->oids, pinfo->relid_map,\n- pinfo->nparts * sizeof(Oid)) == 0);\n+ /*\n+ * Double-check that the list of unpruned relations has not\n+ * changed. (Pruned partitions are not in relid_map[].)\n+ */\n+#ifdef USE_ASSERT_CHECKING\n+ for (int k = 0; k < pinfo->nparts; k++)\n+ {\n+ Assert(partdesc->oids[k] == pinfo->relid_map[k] ||\n+ pinfo->subplan_map[k] == -1);\n+ }\n+#endif\n\nto account for partitions that were pruned by the planner for which we\ndecided to put 0 into relid_map, but it only considered the case where\nthe number of partitions doesn't change since the plan was created.\nThe crash reported here is in the other case where the concurrently\nadded partitions cause the execution-time PartitionDesc to have more\npartitions than the one that PartitionedRelPruneInfo is based on.\n\nI was able to reproduce such a crash as follows:\n\nStart with these tables in session 1.\n\ncreate table foo (a int, b int) partition by list (a);\ncreate table foo1 partition of foo for values in (1);\ncreate table foo2 partition of foo for values in (2);\ncreate table foo3 partition of foo for values in (3);\n\nAttach gdb with a breakpoint set in PartitionDirectoryLookup() and run this:\n\nexplain analyze select * from foo where a <> 1 and a = (select 2);\n<After hitting the breakpoint in PartitionDirectoryLookup() called by\nthe planner, step to the end of it and leave it there>\n\nIn another session:\n\ncreate table foo4 (like foo)\nalter table foo attach partition foo4 for values in (4);\n\nThat should finish without waiting for any lock and send an\ninvalidation message to session 1. Go back to gdb attached to session\n1 and hit continue, resulting in the plan containing runtime pruning\ninfo being executed. ExecCreatePartitionPruneState() opens foo which\nwill now have 4 partitions instead of the 3 that the planner would\nhave seen, of which foo1 is pruned (a <> 1), so the following block is\nexecuted:\n\n if (partdesc->nparts == pinfo->nparts)\n ...\n else\n {\n int pd_idx = 0;\n int pp_idx;\n\n /*\n * Some new partitions have appeared since plan time, and\n * those are reflected in our PartitionDesc but were not\n * present in the one used to construct subplan_map and\n * subpart_map. 
So we must construct new and longer arrays\n * where the partitions that were originally present map to\n * the same place, and any added indexes map to -1, as if the\n * new partitions had been pruned.\n */\n pprune->subpart_map = palloc(sizeof(int) * partdesc->nparts);\n for (pp_idx = 0; pp_idx < partdesc->nparts; ++pp_idx)\n {\n if (pinfo->relid_map[pd_idx] != partdesc->oids[pp_idx])\n {\n pprune->subplan_map[pp_idx] = -1;\n pprune->subpart_map[pp_idx] = -1;\n }\n else\n {\n pprune->subplan_map[pp_idx] =\n pinfo->subplan_map[pd_idx];\n pprune->subpart_map[pp_idx] =\n pinfo->subpart_map[pd_idx++];\n }\n }\n Assert(pd_idx == pinfo->nparts);\n }\n\nwhere it crashes due to having relid_map[] and partdesc->oids[] that\nlook like this:\n\n(gdb) p *pinfo->relid_map@pinfo->nparts\n$3 = {0, 74106, 74109}\n\n(gdb) p *partdesc->oids@partdesc->nparts\n$6 = {74103, 74106, 74109, 74112}\n\nThe 0 in relid_map matches with nothing in partdesc->oids with the\nloop ending without moving forward in the relid_map array, causing the\nAssert to fail.\n\nThe attached patch should fix that.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 4 Aug 2020 20:12:10 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Tue, Aug 04, 2020 at 08:12:10PM +0900, Amit Langote wrote:\n> It may be this commit that went into PG 12 that is causing the problem:\n\nThanks for digging into this.\n\n> to account for partitions that were pruned by the planner for which we\n> decided to put 0 into relid_map, but it only considered the case where\n> the number of partitions doesn't change since the plan was created.\n> The crash reported here is in the other case where the concurrently\n> added partitions cause the execution-time PartitionDesc to have more\n> partitions than the one that PartitionedRelPruneInfo is based on.\n\nIs there anything else needed to check that my crash matches your analysis ?\n\n(gdb) up\n#4 0x000000000068444a in ExecInitAppend (node=node@entry=0x7036b90, estate=estate@entry=0x11563f0, eflags=eflags@entry=16) at nodeAppend.c:132\n132 nodeAppend.c: No such file or directory.\n(gdb) p *node->appendplans \n$17 = {type = T_List, length = 413, max_length = 509, elements = 0x7037400, initial_elements = 0x7037400}\n\n(gdb) down\n#3 0x0000000000672806 in ExecCreatePartitionPruneState (planstate=planstate@entry=0x908f6d8, partitionpruneinfo=<optimized out>) at execPartition.c:1689\n1689 execPartition.c: No such file or directory.\n\n$27 = {ps = {type = T_AppendState, plan = 0x7036b90, state = 0x11563f0, ExecProcNode = 0x6842c0 <ExecAppend>, ExecProcNodeReal = 0x0, instrument = 0x0, worker_instrument = 0x0, worker_jit_instrument = 0x0, \n qual = 0x0, lefttree = 0x0, righttree = 0x0, initPlan = 0x0, subPlan = 0x0, chgParam = 0x0, ps_ResultTupleDesc = 0x0, ps_ResultTupleSlot = 0x0, ps_ExprContext = 0x908f7f0, ps_ProjInfo = 0x0, scandesc = 0x0, \n scanops = 0x0, outerops = 0x0, innerops = 0x0, resultops = 0x0, scanopsfixed = false, outeropsfixed = false, inneropsfixed = false, resultopsfixed = false, scanopsset = false, outeropsset = false, \n inneropsset = false, resultopsset = false}, appendplans = 0x0, as_nplans = 0, as_whichplan = -1, as_first_partial_plan = 0, as_pstate = 0x0, pstate_len = 0, as_prune_state = 0x0, as_valid_subplans = 0x0, \n choose_next_subplan = 0x0}\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 4 Aug 2020 
10:11:17 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Mon, Aug 3, 2020 at 12:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> (gdb) p *pinfo->relid_map@414\n> (gdb) p *partdesc->oids@415\n\nWhoa, I didn't know about using @ in gdb to print multiple elements. Wild!\n\nAnyway, these two arrays differ in that the latter array has 22653702\ninserted at the beginning and 22628862 at the end, and also in that a\n0 has been removed. This code can't cope with things getting removed,\nso kaboom. I think Amit probably has the right idea about what's going\non here and how to fix it, but I haven't yet had time to study it in\ndetail.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 4 Aug 2020 15:48:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Wed, Aug 5, 2020 at 9:52 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Aug 5, 2020 at 9:32 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Wed, Aug 05, 2020 at 09:26:20AM +0900, Amit Langote wrote:\n> > > On Wed, Aug 5, 2020 at 12:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > > On Tue, Aug 04, 2020 at 08:12:10PM +0900, Amit Langote wrote:\n> > > > > It may be this commit that went into PG 12 that is causing the problem:\n> > > >\n> > > > Thanks for digging into this.\n> > > >\n> > > > > to account for partitions that were pruned by the planner for which we\n> > > > > decided to put 0 into relid_map, but it only considered the case where\n> > > > > the number of partitions doesn't change since the plan was created.\n> > > > > The crash reported here is in the other case where the concurrently\n> > > > > added partitions cause the execution-time PartitionDesc to have more\n> > > > > partitions than the one that PartitionedRelPruneInfo is based on.\n> > > >\n> > > > Is there anything else needed to check that my crash matches your analysis ?\n> > >\n> > > If you can spot a 0 in the output of the following, then yes.\n> > >\n> > > (gdb) p *pinfo->relid_map@pinfo->nparts\n> >\n> > I guess you knew that an earlier message has just that. Thanks.\n> > https://www.postgresql.org/message-id/20200803161133.GA21372@telsasoft.com\n>\n> Yeah, you showed:\n>\n> (gdb) p *pinfo->relid_map@414\n>\n> And there is indeed a 0 in there, but I wasn't sure if it was actually\n> in the array or a stray zero due to forcing gdb to show beyond the\n> array bound. 
Does pinfo->nparts match 414?\n\n(sorry, I forgot to hit reply all in last two emails.)\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Aug 2020 09:53:44 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Wed, Aug 05, 2020 at 09:53:44AM +0900, Amit Langote wrote:\n> On Wed, Aug 5, 2020 at 9:52 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Wed, Aug 5, 2020 at 9:32 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > On Wed, Aug 05, 2020 at 09:26:20AM +0900, Amit Langote wrote:\n> > > > On Wed, Aug 5, 2020 at 12:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > >\n> > > > > On Tue, Aug 04, 2020 at 08:12:10PM +0900, Amit Langote wrote:\n> > > > > > It may be this commit that went into PG 12 that is causing the problem:\n> > > > >\n> > > > > Thanks for digging into this.\n> > > > >\n> > > > > > to account for partitions that were pruned by the planner for which we\n> > > > > > decided to put 0 into relid_map, but it only considered the case where\n> > > > > > the number of partitions doesn't change since the plan was created.\n> > > > > > The crash reported here is in the other case where the concurrently\n> > > > > > added partitions cause the execution-time PartitionDesc to have more\n> > > > > > partitions than the one that PartitionedRelPruneInfo is based on.\n> > > > >\n> > > > > Is there anything else needed to check that my crash matches your analysis ?\n> > > >\n> > > > If you can spot a 0 in the output of the following, then yes.\n> > > >\n> > > > (gdb) p *pinfo->relid_map@pinfo->nparts\n> > >\n> > > I guess you knew that an earlier message has just that. Thanks.\n> > > https://www.postgresql.org/message-id/20200803161133.GA21372@telsasoft.com\n> >\n> > Yeah, you showed:\n> >\n> > (gdb) p *pinfo->relid_map@414\n> >\n> > And there is indeed a 0 in there, but I wasn't sure if it was actually\n> > in the array or a stray zero due to forcing gdb to show beyond the\n> > array bound. Does pinfo->nparts match 414?\n\nYes. 
I typed 414 manually since the the array lengths were suspect.\n\n(gdb) p pinfo->nparts\n$1 = 414\n(gdb) set print elements 0\n(gdb) p *pinfo->relid_map@pinfo->nparts\n$3 = {22652203, 22652104, 22651920, 22651654, 22647359, 22645269, 22644012, 22639600, 22638852, 22621975, 22615355, 22615256, 22615069, 22610573, 22606503, 22606404, 22600237, 22600131, 22596610, 22595013, \n 22594914, 22594725, 22594464, 22589317, 22589216, 22587504, 22582570, 22580264, 22577047, 22576948, 22576763, 22576656, 22574077, 22570911, 22570812, 22564524, 22564113, 22558519, 22557080, 22556981, 22556793, \n 22555205, 22550680, 22550579, 22548884, 22543899, 22540822, 22536665, 22536566, 22536377, 22535133, 22528876, 22527780, 22526065, 22521131, 22517878, 22513674, 22513575, 22513405, 22513288, 22507520, 22504728, \n 22504629, 22493699, 22466016, 22458641, 22457551, 22457421, 22457264, 22452879, 22449864, 22449765, 22443560, 22442952, 22436193, 22434644, 22434469, 22434352, 22430792, 22426903, 22426804, 22420732, 22420025, \n 22413050, 22411963, 22411864, 22411675, 22407652, 22404156, 22404049, 22397550, 22394622, 22390035, 22389936, 22389752, 22388386, 22383211, 22382115, 22381934, 22375210, 22370297, 22367878, 22367779, 22367586, \n 22362556, 22359928, 22358236, 22353374, 22348704, 22345692, 22345593, 22345399, 22341347, 22336809, 22336709, 22325812, 22292836, 22287756, 22287657, 22287466, 22283194, 22278659, 22278560, 22272041, 22269121, \n 22264424, 22264325, 22264135, 22260102, 22255418, 22254818, 22248841, 22245824, 22241490, 22241391, 22241210, 22240354, 22236224, 22235123, 22234060, 22228744, 22228345, 22228033, 22222528, 22222429, 22222330, \n 22222144, 22222045, 22218408, 22215986, 22215887, 22209311, 22209212, 22207919, 22205203, 22203385, 22203298, 22203211, 22203124, 22202954, 22202859, 22202772, 22201869, 22200438, 22197706, 22195027, 22194932, \n 22194834, 22191208, 22188412, 22187029, 22182238, 22182134, 22182030, 22181849, 22181737, 22181107, 22175811, 22175710, 22169859, 22169604, 22159266, 22158131, 22158021, 22157824, 22153348, 22153236, 22147308, \n 22146736, 22143778, 22143599, 22143471, 22138702, 22138590, 22132612, 22132513, 22132271, 22132172, 22131987, 21935599, 21932664, 21927997, 21925823, 21885889, 21862973, 21859854, 21859671, 21858869, 21853440, \n 21851884, 21845405, 21842901, 21837523, 21837413, 21837209, 21832347, 21829359, 21827652, 21822602, 21816150, 21805995, 21805812, 21805235, 21798914, 21798026, 21791895, 21791124, 21783854, 21783744, 21783540, \n 21780568, 21774797, 21774687, 21768326, 21764063, 21759627, 21759517, 21759311, 21755697, 21751690, 21751156, 21744906, 21738543, 21736176, 21735992, 21735769, 21727603, 21725956, 21716432, 21678682, 21670968, \n 21670858, 21670665, 21669342, 21661932, 21661822, 21655311, 21650838, 21646721, 21646611, 21646409, 21640984, 21637816, 21637706, 21631061, 21622723, 21621459, 21621320, 21621148, 21612902, 21612790, 21606170, \n 21602265, 21597910, 21597800, 21597605, 21592489, 21589415, 21589305, 21582910, 21578017, 21576758, 21576648, 21572692, 21566633, 21566521, 21560127, 21560017, 21553910, 21553800, 21553613, 21553495, 21549102, \n 21548992, 21542759, 21540922, 21532093, 21531983, 21531786, 21531676, 21531264, 21531154, 21525290, 21524817, 21519470, 21519360, 21519165, 21516571, 21514269, 21514159, 21508389, 21508138, 21508028, 21507830, \n 21503457, 21502484, 21496897, 21494287, 21493722, 21493527, 21491807, 21488530, 21486122, 21485766, 21485603, 21485383, 21481969, 21481672, 21476245, 21472576, 21468851, 21468741, 21468546, 
21467832, 21460086, \n 21425406, 21420632, 21420506, 21419974, 21417830, 21417365, 21408677, 21401314, 21400808, 21399725, 21399113, 21393312, 21393202, 21387393, 21384625, 21384361, 21384172, 21384054, 21379960, 21374013, 21365760, \n 21361813, 21361703, 21361504, 21358333, 21358220, 21352848, 21348896, 21348484, 21343591, 21337675, 21337472, 21331017, 21330907, 21325895, 21325785, 21325675, 21325565, 21325370, 21319929, 21316068, 21315958, \n 21312609, 21284187, 21262186, 21258549, 21258439, 21258279, 21258131, 21254759, 21251782, 21251094, 21250984, 21250874, 21250764, 21244302, 21239067, 21238951, 21238831, 21236783, 21235605, 21230205, 21166173, \n 21151836, 21151726, 21151608, 21151498, 21151388, 21151278, 21151168, 21151055, 2576248, 2576255, 2576262, 2576269, 2576276, 21456497, 22064128, 0}\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 4 Aug 2020 20:04:28 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Wed, Aug 5, 2020 at 10:04 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Wed, Aug 05, 2020 at 09:53:44AM +0900, Amit Langote wrote:\n> > On Wed, Aug 5, 2020 at 9:52 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Wed, Aug 5, 2020 at 9:32 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > On Wed, Aug 05, 2020 at 09:26:20AM +0900, Amit Langote wrote:\n> > > > > On Wed, Aug 5, 2020 at 12:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > > >\n> > > > > > On Tue, Aug 04, 2020 at 08:12:10PM +0900, Amit Langote wrote:\n> > > > > > > It may be this commit that went into PG 12 that is causing the problem:\n> > > > > >\n> > > > > > Thanks for digging into this.\n> > > > > >\n> > > > > > > to account for partitions that were pruned by the planner for which we\n> > > > > > > decided to put 0 into relid_map, but it only considered the case where\n> > > > > > > the number of partitions doesn't change since the plan was created.\n> > > > > > > The crash reported here is in the other case where the concurrently\n> > > > > > > added partitions cause the execution-time PartitionDesc to have more\n> > > > > > > partitions than the one that PartitionedRelPruneInfo is based on.\n> > > > > >\n> > > > > > Is there anything else needed to check that my crash matches your analysis ?\n> > > > >\n> > > > > If you can spot a 0 in the output of the following, then yes.\n> > > > >\n> > > > > (gdb) p *pinfo->relid_map@pinfo->nparts\n> > > >\n> > > > I guess you knew that an earlier message has just that. Thanks.\n> > > > https://www.postgresql.org/message-id/20200803161133.GA21372@telsasoft.com\n> > >\n> > > Yeah, you showed:\n> > >\n> > > (gdb) p *pinfo->relid_map@414\n> > >\n> > > And there is indeed a 0 in there, but I wasn't sure if it was actually\n> > > in the array or a stray zero due to forcing gdb to show beyond the\n> > > array bound. Does pinfo->nparts match 414?\n>\n> Yes. I typed 414 manually since the the array lengths were suspect.\n>\n> (gdb) p pinfo->nparts\n> $1 = 414\n> (gdb) set print elements 0\n> (gdb) p *pinfo->relid_map@pinfo->nparts\n> $3 = {....\n> 21151836, 21151726, 21151608, 21151498, 21151388, 21151278, 21151168, 21151055, 2576248, 2576255, 2576262, 2576269, 2576276, 21456497, 22064128, 0}\n\nThanks. There is a 0 in there, which can only be there if planner was\nable to prune that last partition. 
So, the planner saw a table with\n414 partitions, was able to prune the last one and constructed an\nAppend plan with 413 subplans for unpruned partitions as you showed\nupthread:\n\n> (gdb) p *node->appendplans\n> $17 = {type = T_List, length = 413, max_length = 509, elements = 0x7037400, initial_elements = 0x7037400}\n\nThis suggests that the crash I was able produce is similar to what you saw.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Aug 2020 10:12:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> The crash reported here is in the other case where the concurrently\n> added partitions cause the execution-time PartitionDesc to have more\n> partitions than the one that PartitionedRelPruneInfo is based on.\n> I was able to reproduce such a crash as follows:\n\nYeah, I can repeat the case per these directions. I concur that the\nissue is that ExecCreatePartitionPruneState is failing to cope with\nzeroes in the relid_map.\n\n> The attached patch should fix that.\n\nI don't like this patch at all though; I do not think it is being nearly\ncareful enough to ensure that it's matched the surviving relation OIDs\ncorrectly. In particular it blithely assumes that a zero in relid_map\n*must* match the immediately next entry in partdesc->oids, which is easy\nto break if the new partition is adjacent to the one the planner managed\nto prune. So I think we should do it more like the attached.\n\nI'm strongly tempted to convert the trailing Assert to an actual\ntest-and-elog, too, but didn't do so here.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 05 Aug 2020 13:30:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\",\n File: \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Wed, Aug 5, 2020 at 1:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't like this patch at all though; I do not think it is being nearly\n> careful enough to ensure that it's matched the surviving relation OIDs\n> correctly. In particular it blithely assumes that a zero in relid_map\n> *must* match the immediately next entry in partdesc->oids, which is easy\n> to break if the new partition is adjacent to the one the planner managed\n> to prune. So I think we should do it more like the attached.\n\nOoh, nice catch.\n\n> I'm strongly tempted to convert the trailing Assert to an actual\n> test-and-elog, too, but didn't do so here.\n\nI was thinking about that, too. +1 for taking that step.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 5 Aug 2020 13:53:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Aug 5, 2020 at 1:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm strongly tempted to convert the trailing Assert to an actual\n>> test-and-elog, too, but didn't do so here.\n\n> I was thinking about that, too. +1 for taking that step.\n\nWill do.\n\nIn the longer term, it's annoying that we have no test methodology\nfor this other than \"manually set a breakpoint here\". 
If we're going\nto allow plan-relevant DDL changes to happen with less than full table\nlock, I think we need to improve that. I spent a little bit of time\njust now trying to build an isolationtester case for this, and failed\ncompletely. So I wonder if we can create some sort of test module that\nallows capture of a plan tree and then execution of that plan tree later\n(even after relcache inval would normally have forced replanning).\nObviously that could not be a normal SQL-accessible feature, because\nsome types of invals would make the plan completely wrong, but for\ntesting purposes it'd be mighty helpful to check that a stale plan\nstill works.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Aug 2020 14:21:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\",\n File: \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Wed, Aug 5, 2020 at 2:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In the longer term, it's annoying that we have no test methodology\n> for this other than \"manually set a breakpoint here\". If we're going\n> to allow plan-relevant DDL changes to happen with less than full table\n> lock, I think we need to improve that. I spent a little bit of time\n> just now trying to build an isolationtester case for this, and failed\n> completely. So I wonder if we can create some sort of test module that\n> allows capture of a plan tree and then execution of that plan tree later\n> (even after relcache inval would normally have forced replanning).\n> Obviously that could not be a normal SQL-accessible feature, because\n> some types of invals would make the plan completely wrong, but for\n> testing purposes it'd be mighty helpful to check that a stale plan\n> still works.\n\nThat's an interesting idea. I don't know exactly how it would work,\nbut I agree that it would allow useful testing that we can't do today.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 5 Aug 2020 15:59:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Aug 5, 2020 at 2:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In the longer term, it's annoying that we have no test methodology\n>> for this other than \"manually set a breakpoint here\". If we're going\n>> to allow plan-relevant DDL changes to happen with less than full table\n>> lock, I think we need to improve that. I spent a little bit of time\n>> just now trying to build an isolationtester case for this, and failed\n>> completely. So I wonder if we can create some sort of test module that\n>> allows capture of a plan tree and then execution of that plan tree later\n>> (even after relcache inval would normally have forced replanning).\n>> Obviously that could not be a normal SQL-accessible feature, because\n>> some types of invals would make the plan completely wrong, but for\n>> testing purposes it'd be mighty helpful to check that a stale plan\n>> still works.\n\n> That's an interesting idea. 
I don't know exactly how it would work,\n> but I agree that it would allow useful testing that we can't do today.\n\nAfter thinking about it for a little bit, I'm envisioning a test module\nthat can be loaded into a session, and then it gets into the planner_hook,\nand what it does is after each planner execution, take and release an\nadvisory lock with some selectable ID. Then we can construct\nisolationtester specs that do something like\n\n\tsession 1\t\t\t\tsession 2\n\n\tLOAD test-module;\n\tSET custom_guc_for_lock_id = n;\n\tprepare test tables;\n\n\t\t\t\t\t\tSELECT pg_advisory_lock(n);\n\t\t\n\tSELECT victim-query-here;\n\t... after planning, query blocks on lock\n\n\t\t\t\t\t\tperform DDL changes;\n\t\t\t\t\t\tSELECT pg_advisory_unlock(n);\n\n\t... query executes with now-stale plan\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Aug 2020 16:19:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\",\n File: \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Wed, Aug 5, 2020 at 4:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> After thinking about it for a little bit, I'm envisioning a test module\n> that can be loaded into a session, and then it gets into the planner_hook,\n> and what it does is after each planner execution, take and release an\n> advisory lock with some selectable ID. Then we can construct\n> isolationtester specs that do something like\n>\n> session 1 session 2\n>\n> LOAD test-module;\n> SET custom_guc_for_lock_id = n;\n> prepare test tables;\n>\n> SELECT pg_advisory_lock(n);\n>\n> SELECT victim-query-here;\n> ... after planning, query blocks on lock\n>\n> perform DDL changes;\n> SELECT pg_advisory_unlock(n);\n>\n> ... query executes with now-stale plan\n\nVery sneaky!\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 5 Aug 2020 16:20:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Thu, Aug 6, 2020 at 2:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > The attached patch should fix that.\n>\n> I don't like this patch at all though; I do not think it is being nearly\n> careful enough to ensure that it's matched the surviving relation OIDs\n> correctly. In particular it blithely assumes that a zero in relid_map\n> *must* match the immediately next entry in partdesc->oids, which is easy\n> to break if the new partition is adjacent to the one the planner managed\n> to prune.\n\nIndeed, you're right.\n\n> So I think we should do it more like the attached.\n\nThanks for pushing that.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Aug 2020 12:22:13 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Thu, Aug 6, 2020 at 2:22 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Aug 5, 2020 at 1:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I'm strongly tempted to convert the trailing Assert to an actual\n> >> test-and-elog, too, but didn't do so here.\n>\n> > I was thinking about that, too. 
+1 for taking that step.\n>\n> Will do.\n>\n> In the longer term, it's annoying that we have no test methodology\n> for this other than \"manually set a breakpoint here\".\n\n\nOne of the methods I see is we can just add some GUC variable for some\naction injection. basically it adds some code based on the GUC like this;\n\nif (shall_delay_planning)\n{\n sleep(10)\n};\n\nAFAIK, MongoDB uses much such technology in their test framework. First\nit\ndefines the fail point [1], and then does code injection if the fail point\nis set [2].\nAt last, during the test it can set a fail point like a GUC, but with more\nattributes [3].\nIf that is useful in PG as well and it is not an urgent task, I would like\nto help\nin this direction.\n\n[1]\nhttps://github.com/mongodb/mongo/search?q=MONGO_FAIL_POINT_DEFINE&unscoped_q=MONGO_FAIL_POINT_DEFINE\n\n[2]\nhttps://github.com/mongodb/mongo/blob/d4e7ea57599b44353b5393afedee8ae5670837b3/src/mongo/db/repl/repl_set_config.cpp#L475\n[3]\nhttps://github.com/mongodb/mongo/blob/e07c2d29aded5a30ff08b5ce6a436b6ef6f44014/src/mongo/shell/replsettest.js#L1427\n\n\n\nIf we're going\n> to allow plan-relevant DDL changes to happen with less than full table\n> lock, I think we need to improve that. I spent a little bit of time\n> just now trying to build an isolationtester case for this, and failed\n> completely. So I wonder if we can create some sort of test module that\n> allows capture of a plan tree and then execution of that plan tree later\n> (even after relcache inval would normally have forced replanning).\n> Obviously that could not be a normal SQL-accessible feature, because\n> some types of invals would make the plan completely wrong, but for\n> testing purposes it'd be mighty helpful to check that a stale plan\n> still works.\n>\n> regards, tom lane\n>\n>\n>\n\n-- \nBest Regards\nAndy Fan\n\nOn Thu, Aug 6, 2020 at 2:22 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Aug 5, 2020 at 1:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm strongly tempted to convert the trailing Assert to an actual\n>> test-and-elog, too, but didn't do so here.\n\n> I was thinking about that, too. +1 for taking that step.\n\nWill do.\n\nIn the longer term, it's annoying that we have no test methodology\nfor this other than \"manually set a breakpoint here\".  One of the methods I see is we can just add some GUC variable for someaction injection.   basically it adds some code based on the GUC like this;if (shall_delay_planning){  sleep(10)};AFAIK,  MongoDB uses much such technology  in their test framework. First it defines the fail point [1],  and then does code injection if the fail point is set [2].  At last, during the test it can set a fail point like a GUC, but with more attributes [3]. If that is useful in PG as well and it is not an urgent task,  I would like to helpin this direction. [1] https://github.com/mongodb/mongo/search?q=MONGO_FAIL_POINT_DEFINE&unscoped_q=MONGO_FAIL_POINT_DEFINE  [2] https://github.com/mongodb/mongo/blob/d4e7ea57599b44353b5393afedee8ae5670837b3/src/mongo/db/repl/repl_set_config.cpp#L475[3] https://github.com/mongodb/mongo/blob/e07c2d29aded5a30ff08b5ce6a436b6ef6f44014/src/mongo/shell/replsettest.js#L1427 If we're going\nto allow plan-relevant DDL changes to happen with less than full table\nlock, I think we need to improve that.  I spent a little bit of time\njust now trying to build an isolationtester case for this, and failed\ncompletely.  
So I wonder if we can create some sort of test module that\nallows capture of a plan tree and then execution of that plan tree later\n(even after relcache inval would normally have forced replanning).\nObviously that could not be a normal SQL-accessible feature, because\nsome types of invals would make the plan completely wrong, but for\ntesting purposes it'd be mighty helpful to check that a stale plan\nstill works.\n\n                        regards, tom lane\n\n\n-- Best RegardsAndy Fan", "msg_date": "Thu, 6 Aug 2020 11:49:36 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> On Thu, Aug 6, 2020 at 2:22 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In the longer term, it's annoying that we have no test methodology\n>> for this other than \"manually set a breakpoint here\".\n\n> One of the methods I see is we can just add some GUC variable for some\n> action injection. basically it adds some code based on the GUC like this;\n\nSee my straw-man proposal downthread. I'm not very excited about putting\nthings like this into the standard build, because it's really hard to be\nsure that there are no security-hazard-ish downsides of putting in ways to\nget at testing behaviors from standard SQL. And then there's the question\nof whether you're adding noticeable overhead to production builds. So a\nloadable module that can use some existing hook to provide the needed\nbehavior seems like a better plan to me, whenever we can do it that way.\n\nIn general, though, it seems like we've seldom regretted investments in\ntest tooling.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Aug 2020 00:02:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\",\n File: \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Thu, Aug 6, 2020 at 12:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > On Thu, Aug 6, 2020 at 2:22 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> In the longer term, it's annoying that we have no test methodology\n> >> for this other than \"manually set a breakpoint here\".\n>\n> > One of the methods I see is we can just add some GUC variable for some\n> > action injection. basically it adds some code based on the GUC like\n> this;\n>\n> See my straw-man proposal downthread. I'm not very excited about putting\n> things like this into the standard build, because it's really hard to be\n> sure that there are no security-hazard-ish downsides of putting in ways to\n> get at testing behaviors from standard SQL. And then there's the question\n> of whether you're adding noticeable overhead to production builds. So a\n> loadable module that can use some existing hook to provide the needed\n> behavior seems like a better plan to me, whenever we can do it that way.\n>\n> In general, though, it seems like we've seldom regretted investments in\n> test tooling.\n>\n> regards, tom lane\n>\n\n\nThanks for your explanation, I checked it again and it looks a very clean\nmethod. The attached is a draft patch based on my understanding. 
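In outline, the idea is a loadable module that installs a planner_hook and blocks on an advisory lock right after planning; a minimal sketch of that shape (illustrative only -- not the attached patch, and the GUC name and hook signature may differ between branches) could look like:

/* Illustrative sketch only: names and details are placeholders. */
#include "postgres.h"

#include <limits.h>

#include "fmgr.h"
#include "optimizer/planner.h"
#include "utils/fmgrprotos.h"
#include "utils/guc.h"
#include "utils/inval.h"

PG_MODULE_MAGIC;

static planner_hook_type prev_planner_hook = NULL;
static int	post_planning_lock_id = 0;	/* 0 means "don't delay" */

static PlannedStmt *
delay_execution_planner(Query *parse, const char *query_string,
						int cursorOptions, ParamListInfo boundParams)
{
	PlannedStmt *result;

	if (prev_planner_hook)
		result = prev_planner_hook(parse, query_string,
								   cursorOptions, boundParams);
	else
		result = standard_planner(parse, query_string,
								  cursorOptions, boundParams);

	if (post_planning_lock_id != 0)
	{
		int64		lockid = post_planning_lock_id;

		/* Block until another session releases the advisory lock ... */
		DirectFunctionCall1(pg_advisory_lock_int8, Int64GetDatum(lockid));
		DirectFunctionCall1(pg_advisory_unlock_int8, Int64GetDatum(lockid));
		/* ... and notice any invalidations that were queued meanwhile. */
		AcceptInvalidationMessages();
	}

	return result;
}

void
_PG_init(void)
{
	DefineCustomIntVariable("delay_execution.post_planning_lock_id",
							"Advisory lock ID to take after planning; 0 to skip.",
							NULL,
							&post_planning_lock_id,
							0, 0, INT_MAX,
							PGC_USERSET, 0,
							NULL, NULL, NULL);

	prev_planner_hook = planner_hook;
	planner_hook = delay_execution_planner;
}
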
Hope\nI didn't misunderstand you..\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 6 Aug 2020 21:52:23 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> On Thu, Aug 6, 2020 at 12:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> See my straw-man proposal downthread.\n\n> Thanks for your explanation, I checked it again and it looks a very clean\n> method. The attached is a draft patch based on my understanding. Hope\n> I didn't misunderstand you..\n\nAh, I was going to play arond with that today, but you beat me to it ;-)\n\nA few thoughts after a quick look at the patch:\n\n* I had envisioned that there's a custom GUC controlling the lock ID\nused; this would allow blocking different sessions at different points,\nif we ever need that. Also, I'd make the GUC start out as zero which\nmeans \"do nothing\", so that merely loading the module has no immediate\neffect.\n\n* Don't really see the point of the before-planning lock.\n\n* Rather than exposing internal declarations from lockfuncs.c, you\ncould just write calls to pg_advisory_lock_int8 etc. using\nDirectFunctionCall1.\n\n* We need some better name than \"test_module\". I had vaguely thought\nabout \"delay_execution\", but am surely open to better ideas.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Aug 2020 10:42:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\",\n File: \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Thu, Aug 6, 2020 at 10:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > On Thu, Aug 6, 2020 at 12:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> See my straw-man proposal downthread.\n>\n> > Thanks for your explanation, I checked it again and it looks a very clean\n> > method. The attached is a draft patch based on my understanding. Hope\n> > I didn't misunderstand you..\n>\n> Ah, I was going to play arond with that today, but you beat me to it ;-)\n>\n>\nVery glad to be helpful.\n\n\n> A few thoughts after a quick look at the patch:\n>\n> * I had envisioned that there's a custom GUC controlling the lock ID\n> used; this would allow blocking different sessions at different points,\n> if we ever need that. Also, I'd make the GUC start out as zero which\n> means \"do nothing\", so that merely loading the module has no immediate\n> effect.\n>\n>\nI forgot to say I didn't get the point of the guc variable in the last\nthread,\nnow I think it is a smart idea, so added it. In this way, one session\ncan only be blocked at one place, it may not be an issue in practise.\n\n* Don't really see the point of the before-planning lock.\n>\n>\nyes.. it was removed now.\n\n* Rather than exposing internal declarations from lockfuncs.c, you\n> could just write calls to pg_advisory_lock_int8 etc. using\n> DirectFunctionCall1.\n>\n>\nThanks for sharing it, this method looks pretty good.\n\n\n> * We need some better name than \"test_module\". 
I had vaguely thought\n> about \"delay_execution\", but am surely open to better ideas.\n>\n>\nBoth names look good to me, delay_execution looks better, it is used in\nv2.\n\nAttached is the v2 patch.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 6 Aug 2020 23:57:08 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> Attached is the v2 patch.\n\nForgot to mention that I'd envisioned adding this as a src/test/modules/\nmodule; contrib/ is for things that we intend to expose to users, which\nI think this isn't.\n\nI played around with this and got the isolation test I'd experimented\nwith yesterday to work with it. If you revert 7a980dfc6 then the\nattached patch will expose that bug. Interestingly, I had to add an\nexplicit AcceptInvalidationMessages() call to reproduce the bug; so\napparently we do none of those between planning and execution in the\nordinary query code path. Arguably, that means we're testing a scenario\nsomewhat different from what can happen in live databases, but I think\nit's OK. Amit's recipe for reproducing the bug works because there are\nother relation lock acquisitions (and hence AcceptInvalidationMessages\ncalls) later in planning than where he asked us to wait. So this\neffectively tests a scenario where a very late A.I.M. call within the\nplanner detects an inval event for some already-planned relation, and\nthat seems like a valid-enough scenario.\n\nAnyway, attached find a reviewed version of your patch plus a test\nscenario contributed by me (I was too lazy to split it into two\npatches). Barring objections, I'll push this tomorrow or so.\n\n(BTW, I checked and found that this test does *not* expose the problems\nwith Amit's original patch. Not sure if it's worth trying to finagle\nit so it does.)\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 06 Aug 2020 20:32:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\",\n File: \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Fri, Aug 7, 2020 at 8:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > Attached is the v2 patch.\n>\n> Forgot to mention that I'd envisioned adding this as a src/test/modules/\n> module; contrib/ is for things that we intend to expose to users, which\n> I think this isn't.\n>\n> I played around with this and got the isolation test I'd experimented\n> with yesterday to work with it. If you revert 7a980dfc6 then the\n> attached patch will expose that bug. Interestingly, I had to add an\n> explicit AcceptInvalidationMessages() call to reproduce the bug; so\n> apparently we do none of those between planning and execution in the\n> ordinary query code path. Arguably, that means we're testing a scenario\n> somewhat different from what can happen in live databases, but I think\n> it's OK. Amit's recipe for reproducing the bug works because there are\n> other relation lock acquisitions (and hence AcceptInvalidationMessages\n> calls) later in planning than where he asked us to wait. So this\n> effectively tests a scenario where a very late A.I.M. 
call within the\n> planner detects an inval event for some already-planned relation, and\n> that seems like a valid-enough scenario.\n>\n> Anyway, attached find a reviewed version of your patch plus a test\n> scenario contributed by me (I was too lazy to split it into two\n> patches). Barring objections, I'll push this tomorrow or so.\n>\n> (BTW, I checked and found that this test does *not* expose the problems\n> with Amit's original patch. Not sure if it's worth trying to finagle\n> it so it does.)\n>\n> regards, tom lane\n>\n>\n+ * delay_execution.c\n+ * Test module to allow delay between parsing and execution of a query.\n\nI am not sure if we need to limit the scope to \"between parsing and\nexecution\",\nIMO, it can be used at any place where we have a hook so that\ndelay_execution\ncan inject the lock_unlock logic with a predefined lock id. Probably the\ntest writer\nonly wants one place blocked, then delay_execution.xxx_lock_id can be set\nso\nthat only the given lock ID is considered. Just my 0.01 cents.\n\n-- \nBest Regards\nAndy Fan\n\nOn Fri, Aug 7, 2020 at 8:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Andy Fan <zhihui.fan1213@gmail.com> writes:\n> Attached is the v2 patch.\n\nForgot to mention that I'd envisioned adding this as a src/test/modules/\nmodule; contrib/ is for things that we intend to expose to users, which\nI think this isn't.\n\nI played around with this and got the isolation test I'd experimented\nwith yesterday to work with it.  If you revert 7a980dfc6 then the\nattached patch will expose that bug.  Interestingly, I had to add an\nexplicit AcceptInvalidationMessages() call to reproduce the bug; so\napparently we do none of those between planning and execution in the\nordinary query code path.  Arguably, that means we're testing a scenario\nsomewhat different from what can happen in live databases, but I think\nit's OK.  Amit's recipe for reproducing the bug works because there are\nother relation lock acquisitions (and hence AcceptInvalidationMessages\ncalls) later in planning than where he asked us to wait.  So this\neffectively tests a scenario where a very late A.I.M. call within the\nplanner detects an inval event for some already-planned relation, and\nthat seems like a valid-enough scenario.\n\nAnyway, attached find a reviewed version of your patch plus a test\nscenario contributed by me (I was too lazy to split it into two\npatches).  Barring objections, I'll push this tomorrow or so.\n\n(BTW, I checked and found that this test does *not* expose the problems\nwith Amit's original patch.  Not sure if it's worth trying to finagle\nit so it does.)\n\n                        regards, tom lane\n + * delay_execution.c+ *\t\tTest module to allow delay between parsing and execution of a query.I am not sure if we need to limit the scope to \"between parsing and execution\",IMO, it can be used at any place where we have a hook so that delay_executioncan inject the lock_unlock logic with a predefined lock id. Probably the test writeronly wants one place blocked, then delay_execution.xxx_lock_id can be set so that only the given lock ID  is considered.  Just my 0.01 cents. 
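To show how such a module gets exercised, an isolation spec along the lines described upthread might look roughly like this (a hypothetical sketch, not the committed test; table names, partition values, and the lock ID are illustrative, and the GUC name follows the sketch shown earlier):

# Sketch: wedge a concurrent ATTACH PARTITION between planning and
# execution of a query that prunes one partition at plan time (a <> 1)
# and needs run-time pruning for the sub-select parameter.
setup
{
  CREATE TABLE foo (a int) PARTITION BY LIST (a);
  CREATE TABLE foo1 PARTITION OF foo FOR VALUES IN (1);
  CREATE TABLE foo2 PARTITION OF foo FOR VALUES IN (2);
  CREATE TABLE foo3 PARTITION OF foo FOR VALUES IN (3);
}

teardown
{
  DROP TABLE foo;
}

session "s1"
setup         { LOAD 'delay_execution';
                SET delay_execution.post_planning_lock_id = 12345; }
step "s1exec" { SELECT * FROM foo WHERE a <> 1 AND a <> (SELECT 3); }

session "s2"
step "s2lock"   { SELECT pg_advisory_lock(12345); }
step "s2addp"   { CREATE TABLE foo4 (a int);
                  ALTER TABLE foo ATTACH PARTITION foo4 FOR VALUES IN (4); }
step "s2unlock" { SELECT pg_advisory_unlock(12345); }

# s1's query blocks right after planning; s2 then adds a partition
# (ATTACH PARTITION only needs SHARE UPDATE EXCLUSIVE, so it doesn't
# deadlock against s1's AccessShareLock) and releases the advisory lock,
# so s1 goes on to execute a plan that is one partition stale.
permutation "s2lock" "s1exec" "s2addp" "s2unlock"
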
-- Best RegardsAndy Fan", "msg_date": "Fri, 7 Aug 2020 10:26:21 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> I am not sure if we need to limit the scope to \"between parsing and\n> execution\",\n\nYeah, there might be reason to add similar functionality in other\nplaces later. I'm not sure where yet --- but that idea does make\nme slightly unhappy with the \"delay_execution\" moniker. I don't\nhave a better name though ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Aug 2020 22:44:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\",\n File: \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Fri, Aug 7, 2020 at 9:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > Attached is the v2 patch.\n\nThanks Andy and Tom for this.\n\n> Forgot to mention that I'd envisioned adding this as a src/test/modules/\n> module; contrib/ is for things that we intend to expose to users, which\n> I think this isn't.\n>\n> I played around with this and got the isolation test I'd experimented\n> with yesterday to work with it. If you revert 7a980dfc6 then the\n> attached patch will expose that bug. Interestingly, I had to add an\n> explicit AcceptInvalidationMessages() call to reproduce the bug; so\n> apparently we do none of those between planning and execution in the\n> ordinary query code path. Arguably, that means we're testing a scenario\n> somewhat different from what can happen in live databases, but I think\n> it's OK. Amit's recipe for reproducing the bug works because there are\n> other relation lock acquisitions (and hence AcceptInvalidationMessages\n> calls) later in planning than where he asked us to wait. So this\n> effectively tests a scenario where a very late A.I.M. call within the\n> planner detects an inval event for some already-planned relation, and\n> that seems like a valid-enough scenario.\n\nAgreed.\n\nCuriously, Justin mentioned upthread that the crash occurred during\nBIND of a prepared query, so it better had been that a custom plan was\nbeing executed, because a generic one based on fewer partitions would\nbe thrown away due to A.I.M. invoked during AcquireExecutorLocks().\n\n> Anyway, attached find a reviewed version of your patch plus a test\n> scenario contributed by me (I was too lazy to split it into two\n> patches). Barring objections, I'll push this tomorrow or so.\n>\n> (BTW, I checked and found that this test does *not* expose the problems\n> with Amit's original patch. Not sure if it's worth trying to finagle\n> it so it does.)\n\nI tried to figure out a scenario where my patch would fail but\ncouldn't come up with one either, but it's no proof that it isn't\nwrong. For example, I can see that pinfo->relid_map[pinfo->nparts]\ncan be accessed with my patch which is not correct.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Aug 2020 12:16:11 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Fri, Aug 7, 2020 at 9:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... 
Amit's recipe for reproducing the bug works because there are\n>> other relation lock acquisitions (and hence AcceptInvalidationMessages\n>> calls) later in planning than where he asked us to wait. So this\n>> effectively tests a scenario where a very late A.I.M. call within the\n>> planner detects an inval event for some already-planned relation, and\n>> that seems like a valid-enough scenario.\n\n> Agreed.\n\n> Curiously, Justin mentioned upthread that the crash occurred during\n> BIND of a prepared query, so it better had been that a custom plan was\n> being executed, because a generic one based on fewer partitions would\n> be thrown away due to A.I.M. invoked during AcquireExecutorLocks().\n\nBased on the above, it seems plausible that the plancache did throw away\nan old plan and try to replan, but the inval message announcing partition\naddition arrived too late during that planning cycle. Just like the\nnormal execution path, the plancache code path won't do more than one\niteration of planning on the way to a demanded query execution.\n\n>> (BTW, I checked and found that this test does *not* expose the problems\n>> with Amit's original patch. Not sure if it's worth trying to finagle\n>> it so it does.)\n\n> I tried to figure out a scenario where my patch would fail but\n> couldn't come up with one either, but it's no proof that it isn't\n> wrong. For example, I can see that pinfo->relid_map[pinfo->nparts]\n> can be accessed with my patch which is not correct.\n\nYeah, touching array entries off the end of the relid_map array definitely\nseems possible with that coding. But the scenario I was worried about\nwas that the loop actually attaches the wrong subplan (one for a different\npartition) to a partdesc entry. In an assert-enabled build, that would\nhave led to assertion failure just below, because then we could not match\nup all the remaining relid_map entries; but in a non-assert build, we'd\nplow through and bad things would likely happen during execution.\nYou might need further conditions, like the partitions not being all\nidentical, for that to actually cause any problem. I'd poked at this\nfor a little bit without causing an obvious crash, but I can't claim\nto have tried hard.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Aug 2020 23:31:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\",\n File: \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Fri, Aug 07, 2020 at 12:16:11PM +0900, Amit Langote wrote:\n> Curiously, Justin mentioned upthread that the crash occurred during\n> BIND of a prepared query, so it better had been that a custom plan was\n> being executed,\n\nI'm looking at how to check that ... can you give a hint ?\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 6 Aug 2020 22:33:37 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Fri, Aug 07, 2020 at 12:16:11PM +0900, Amit Langote wrote:\n> Curiously, Justin mentioned upthread that the crash occurred during\n> BIND of a prepared query, so it better had been that a custom plan was\n> being executed, because a generic one based on fewer partitions would\n> be thrown away due to A.I.M. 
invoked during AcquireExecutorLocks().\n\nWell this statement should only be executed once, and should be using\nPQexecParams and not PQexecPrepared (pygresql: pg.DB().query_prepared()).\n\n(gdb) p portal->name\n$30 = 0xf03238 \"\"\n\n(gdb) p portal->prepStmtName \n$31 = 0x0\n\n(gdb) p *portal->cplan\n$24 = {magic = 953717834, stmt_list = 0x682ec38, is_oneshot = false, is_saved = true, is_valid = true, planRoleId = 16554, dependsOnRole = false, saved_xmin = 0, generation = 1, refcount = 1, context = 0x682dfd0}\n\nI'm not sure why is_oneshot=false, though...\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 6 Aug 2020 23:05:55 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Fri, Aug 7, 2020 at 1:05 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Aug 07, 2020 at 12:16:11PM +0900, Amit Langote wrote:\n> > Curiously, Justin mentioned upthread that the crash occurred during\n> > BIND of a prepared query, so it better had been that a custom plan was\n> > being executed, because a generic one based on fewer partitions would\n> > be thrown away due to A.I.M. invoked during AcquireExecutorLocks().\n>\n> Well this statement should only be executed once, and should be using\n> PQexecParams and not PQexecPrepared (pygresql: pg.DB().query_prepared()).\n>\n> (gdb) p portal->name\n> $30 = 0xf03238 \"\"\n>\n> (gdb) p portal->prepStmtName\n> $31 = 0x0\n>\n> (gdb) p *portal->cplan\n> $24 = {magic = 953717834, stmt_list = 0x682ec38, is_oneshot = false, is_saved = true, is_valid = true, planRoleId = 16554, dependsOnRole = false, saved_xmin = 0, generation = 1, refcount = 1, context = 0x682dfd0}\n>\n> I'm not sure why is_oneshot=false, though...\n\nPerhaps printing *unnamed_stmt_psrc (CachedPlanSource for an unnamed\nstatement) would put out more information.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Aug 2020 13:13:51 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Fri, Aug 07, 2020 at 01:13:51PM +0900, Amit Langote wrote:\n> On Fri, Aug 7, 2020 at 1:05 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Fri, Aug 07, 2020 at 12:16:11PM +0900, Amit Langote wrote:\n> > > Curiously, Justin mentioned upthread that the crash occurred during\n> > > BIND of a prepared query, so it better had been that a custom plan was\n> > > being executed, because a generic one based on fewer partitions would\n> > > be thrown away due to A.I.M. 
invoked during AcquireExecutorLocks().\n> >\n> > Well this statement should only be executed once, and should be using\n> > PQexecParams and not PQexecPrepared (pygresql: pg.DB().query_prepared()).\n> >\n> > (gdb) p portal->name\n> > $30 = 0xf03238 \"\"\n> >\n> > (gdb) p portal->prepStmtName\n> > $31 = 0x0\n> >\n> > (gdb) p *portal->cplan\n> > $24 = {magic = 953717834, stmt_list = 0x682ec38, is_oneshot = false, is_saved = true, is_valid = true, planRoleId = 16554, dependsOnRole = false, saved_xmin = 0, generation = 1, refcount = 1, context = 0x682dfd0}\n> >\n> > I'm not sure why is_oneshot=false, though...\n> \n> Perhaps printing *unnamed_stmt_psrc (CachedPlanSource for an unnamed\n> statement) would put out more information.\n\n(gdb) p *unnamed_stmt_psrc\n$49 = {magic = 195726186, raw_parse_tree = 0xfae788, \n query_string = 0xfaddc0 \"\\n\", ' ' <repeats 20 times>, \"SELECT $3::timestamp as start_time, $2::int as interval_seconds,\\n\", ' ' <repeats 20 times>, \"first_cgi as cgi, gsm_carr_mcc||gsm_carr_mnc as home_plmn,\\n\", ' ' <repeats 20 times>, \"SUM(chargeable_\"..., commandTag = CMDTAG_SELECT, param_types = 0x1254400, num_params = 3, parserSetup = 0x0, parserSetupArg = 0x0, cursor_options = 256, fixed_result = true, \n resultDesc = 0x1376670, context = 0xfae550, query_list = 0x103c9a8, relationOids = 0x11aa580, invalItems = 0x11aa600, search_path = 0x11aa878, query_context = 0xf85790, rewriteRoleId = 16554, \n rewriteRowSecurity = true, dependsOnRLS = false, gplan = 0x0, is_oneshot = false, is_complete = true, is_saved = true, is_valid = false, generation = 1, node = {prev = 0x12fcf28, \n next = 0xdf2c80 <saved_plan_list>}, generic_cost = -1, total_custom_cost = 12187136.696805555, num_custom_plans = 1}\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 6 Aug 2020 23:21:24 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" }, { "msg_contents": "On Fri, Aug 7, 2020 at 1:21 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Fri, Aug 07, 2020 at 01:13:51PM +0900, Amit Langote wrote:\n> > On Fri, Aug 7, 2020 at 1:05 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > On Fri, Aug 07, 2020 at 12:16:11PM +0900, Amit Langote wrote:\n> > > > Curiously, Justin mentioned upthread that the crash occurred during\n> > > > BIND of a prepared query, so it better had been that a custom plan was\n> > > > being executed, because a generic one based on fewer partitions would\n> > > > be thrown away due to A.I.M. invoked during AcquireExecutorLocks().\n> > >\n> > > Well this statement should only be executed once, and should be using\n> > > PQexecParams and not PQexecPrepared (pygresql: pg.DB().query_prepared()).\n> > >\n> > > (gdb) p portal->name\n> > > $30 = 0xf03238 \"\"\n> > >\n> > > (gdb) p portal->prepStmtName\n> > > $31 = 0x0\n> > >\n> > > (gdb) p *portal->cplan\n> > > $24 = {magic = 953717834, stmt_list = 0x682ec38, is_oneshot = false, is_saved = true, is_valid = true, planRoleId = 16554, dependsOnRole = false, saved_xmin = 0, generation = 1, refcount = 1, context = 0x682dfd0}\n> > >\n> > > I'm not sure why is_oneshot=false, though...\n> >\n> > Perhaps printing *unnamed_stmt_psrc (CachedPlanSource for an unnamed\n> > statement) would put out more information.\n>\n> (gdb) p *unnamed_stmt_psrc\n> $49 = {... 
gplan = 0x0, is_oneshot = false, is_complete = true, is_saved = true, is_valid = false, generation = 1, node = {prev = 0x12fcf28,\n> next = 0xdf2c80 <saved_plan_list>}, generic_cost = -1, total_custom_cost = 12187136.696805555, num_custom_plans = 1}\n\n From this part, I think it's clear that a custom plan was used and\nthat's the only one that this portal seems to know about. Also, I can\nsee that only SPI ever builds \"oneshot\" plans, so is_oneshot would\nalways be false in your use case.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Aug 2020 13:42:42 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"pd_idx == pinfo->nparts\", File:\n \"execPartition.c\", Line: 1689)" } ]
[ { "msg_contents": "Hackers,\n\nI have a situation that I am observing where dblink_is_busy returns 1\neven though the connection is long gone. tcp keepalives are on and\nthe connection has been dead for several hours. Looking at the call\nfor dblink_is_busy, I see that it is a thin wrapper to PQusBusy().\nIf I attempt to call dblink_get_result(), the result comes back with\nan error mesage, 'invalid socket'. This however is not helpful since\nthere is no way to probe for dead connections in dblink that appears\nto be 100% reliable. My workaround that I had been relying on was to\ncall dblink_get_notify twice, which for some weird reason forced the\nconnection error to the surface. However for whatever reason, that is\nnot working here.\n\nIn cases the connection was cancelled via dblink_cancel_query(), so in\nsome scenarios connections cancelled that way seem to become 'stuck'.\nAny thoughts on this?\n\nmerlin\n\n\n", "msg_date": "Sun, 2 Aug 2020 19:18:08 -0500", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": true, "msg_subject": "dblnk_is_busy returns 1 for dead connecitons" }, { "msg_contents": "On Sun, Aug 2, 2020 at 7:18 PM Merlin Moncure <mmoncure@gmail.com> wrote:\n>\n> Hackers,\n>\n> I have a situation that I am observing where dblink_is_busy returns 1\n> even though the connection is long gone. tcp keepalives are on and\n> the connection has been dead for several hours. Looking at the call\n> for dblink_is_busy, I see that it is a thin wrapper to PQusBusy().\n> If I attempt to call dblink_get_result(), the result comes back with\n> an error mesage, 'invalid socket'. This however is not helpful since\n> there is no way to probe for dead connections in dblink that appears\n> to be 100% reliable. My workaround that I had been relying on was to\n> call dblink_get_notify twice, which for some weird reason forced the\n> connection error to the surface. However for whatever reason, that is\n> not working here.\n>\n> In cases the connection was cancelled via dblink_cancel_query(), so in\n> some scenarios connections cancelled that way seem to become 'stuck'.\n> Any thoughts on this?\n\nCorrection, keepalives are probably not on, because dblink does not\nhave an option to set them. Also, it looks like dblink_is_busy()\ncalls pqConsumeInput without checking the error code. Is that safe?\n\nmerlin\n\n\n", "msg_date": "Sun, 2 Aug 2020 21:55:41 -0500", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": true, "msg_subject": "Re: dblnk_is_busy returns 1 for dead connecitons" }, { "msg_contents": "On Sun, Aug 2, 2020 at 9:55 PM Merlin Moncure <mmoncure@gmail.com> wrote:\n>\n> On Sun, Aug 2, 2020 at 7:18 PM Merlin Moncure <mmoncure@gmail.com> wrote:\n> >\n> > Hackers,\n> >\n> > I have a situation that I am observing where dblink_is_busy returns 1\n> > even though the connection is long gone. tcp keepalives are on and\n> > the connection has been dead for several hours. Looking at the call\n> > for dblink_is_busy, I see that it is a thin wrapper to PQusBusy().\n> > If I attempt to call dblink_get_result(), the result comes back with\n> > an error mesage, 'invalid socket'. This however is not helpful since\n> > there is no way to probe for dead connections in dblink that appears\n> > to be 100% reliable. My workaround that I had been relying on was to\n> > call dblink_get_notify twice, which for some weird reason forced the\n> > connection error to the surface. 
However for whatever reason, that is\n> > not working here.\n> >\n> > In cases the connection was cancelled via dblink_cancel_query(), so in\n> > some scenarios connections cancelled that way seem to become 'stuck'.\n> > Any thoughts on this?\n>\n> Correction, keepalives are probably not on, because dblink does not\n> have an option to set them. Also, it looks like dblink_is_busy()\n> calls pqConsumeInput without checking the error code. Is that safe?\n\nI could not reproduce this with application external test script (see\nattached if curious). I alos noticed you can set keepalives in the\nlibpq connection string, so I'll do that and see if it helps, and\nreport back for posterity. Thanks, sorry for the noise.\n\nmerlin", "msg_date": "Mon, 3 Aug 2020 13:42:07 -0500", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": true, "msg_subject": "Re: dblnk_is_busy returns 1 for dead connecitons" } ]
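For reference, the keepalive settings mentioned above are ordinary libpq connection parameters, so they can be passed straight through dblink's connection string; an illustrative (untested) example, with placeholder host and timing values:

-- Illustrative only: host, database and timing values are placeholders.
SELECT dblink_connect('conn',
  'host=remotehost dbname=appdb user=app keepalives=1 keepalives_idle=30 keepalives_interval=10 keepalives_count=3');

-- Async calls then go through the same named connection as before:
SELECT dblink_send_query('conn', 'select pg_sleep(60)');
SELECT dblink_is_busy('conn');
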
[ { "msg_contents": "..which should no longer be needed since it was a performance hack for specific\nplatform snprintf, which are no longer used.", "msg_date": "Sun, 2 Aug 2020 23:59:48 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "[PATCH v1] elog.c: Remove special case which avoided %*s format\n strings.." }, { "msg_contents": "On Sun, Aug 02, 2020 at 11:59:48PM -0500, Justin Pryzby wrote:\n> ..which should no longer be needed since it was a performance hack for specific\n> platform snprintf, which are no longer used.\n\nDid you check if our implementation of src/port/snprintf.c makes %*s\nmuch slower than %s or not? FWIW, I have just run a small test on my\nlaptop, and running 100M calls of snprintf() with \"%s\" in a tight loop\ntakes 2.7s, with \"%*s\" and a padding of 0 it takes 4.2s. So this test\ntells that we are far from something that's substantially slower, and\nto simplify the code your change makes sense. Still, there could be a\npoint in keeping this optimization, but fix the comment to remove the\nplatform-dependent part of it. Any thoughts?\n--\nMichael", "msg_date": "Tue, 4 Aug 2020 16:35:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] elog.c: Remove special case which avoided %*s format\n strings.." }, { "msg_contents": "On Tue, 4 Aug 2020 at 19:36, Michael Paquier <michael@paquier.xyz> wrote:\n> Did you check if our implementation of src/port/snprintf.c makes %*s\n> much slower than %s or not? FWIW, I have just run a small test on my\n> laptop, and running 100M calls of snprintf() with \"%s\" in a tight loop\n> takes 2.7s, with \"%*s\" and a padding of 0 it takes 4.2s. So this test\n> tells that we are far from something that's substantially slower, and\n> to simplify the code your change makes sense. Still, there could be a\n> point in keeping this optimization, but fix the comment to remove the\n> platform-dependent part of it. Any thoughts?\n\nIt's not just converting \"%s\" to \"%*s\", it's sometimes changing a\nappendStringInfoString() call to appendStringInfo(). It's hard to\nimagine the formatting version could ever be as fast as\nappendStringInfo().\n\nFWIW, the tests I did to check this when initially working on it are\nin [1]. Justin, it would be good if you could verify you're making as\nbad as what's mentioned on that thread again.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/20130924165104.GQ4832%40eldon.alvh.no-ip.org#4e8a716ff0bde1e950fe7ddca1d75454\n\n\n", "msg_date": "Tue, 4 Aug 2020 21:06:16 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] elog.c: Remove special case which avoided %*s format\n strings.." }, { "msg_contents": "On Tue, Aug 04, 2020 at 09:06:16PM +1200, David Rowley wrote:\n> FWIW, the tests I did to check this when initially working on it are\n> in [1]. Justin, it would be good if you could verify you're making as\n> bad as what's mentioned on that thread again.\n\nOuch. Thanks for the reference. Indeed it looks that it would hurt\neven with just a simple PL function.\n--\nMichael", "msg_date": "Wed, 5 Aug 2020 17:22:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] elog.c: Remove special case which avoided %*s format\n strings.." } ]
[ { "msg_contents": "I thought that the biggest reason for the pgbench RW slowdown during a checkpoint was the flood of dirty page writes increasing the COMMIT latency. It turns out that the documentation which states that FPW's start \"after a checkpoint\" really means after a CKPT starts. And this is the really cause of the deep dip in performance. Maybe only I was fooled... :-)\n\nIf we can't eliminate FPW's can we at least solve the impact of it? Instead of writing the before images of pages inline into the WAL, which increases the COMMIT latency, write these same images to a separate physical log file. The key idea is that I don't believe that COMMIT's require these buffers to be immediately flushed to the physical log. We only need to flush these before the dirty pages are written. This delay allows the physical before image IO's to be decoupled and done in an efficient manner without an impact to COMMIT's.\n\n1. When we generate a physical image add it to an in memory buffer of before page images.\n2. Put the physical log offset of the before image into the WAL record. This is the current physical log file size plus the offset in the in-memory buffer of pages.\n3. Set a bit in the bufhdr indicating this was done.\n4. COMMIT's do not need to worry about those buffers.\n5. Periodically flush the in-memory buffer and clear the bit in the BufHdr.\n6. During any dirty page flushing if we see the bit set, which should be rare, then make sure we get our before image flushed. This would be similar to our LSN based XLogFlush().\nDo we need these before images for more than one CKPT? I don't think so. Do PITR's require before images since it is a continuous rollforward from a restore? Just some of considerations.\n\nDo I need to back this physical log up? I likely(?) need to deal with replication.\n\nTurning off FPW gives about a 20%, maybe more, boost on a pgbench TPC-B RW workload which fits in the buffer cache. Can I get this 20% improvement with a separate physical log of before page images?\n\nDoing IO's off on the side, but decoupled from the WAL stream, doesn't seem to impact COMMIT latency on modern SSD based storage systems. For instance, you can hammer a shared data and WAL SSD filesystem with dirty page writes from the CKPT, at near the MAX IOPS of the SSD, and not impact COMMIT latency. However, this presumes that the CKPT's natural spreading of dirty page writes across the CKPT target doesn't push too many outstanding IO's into the storage write Q on the OS/device.\nNOTE: I don't believe the CKPT's throttling is perfect and I think a burst of dirty pages into the cache just before a CKPT might cause the Q to be flooded and this would then also further slow TPS during the CKPT. But a fix to this is off topic from the FPW issue.\n\nThanks to Andres Freund for both making me aware of the Q depth impact on COMMIT latency and the hint that FPW might also be causing the CKPT slowdown. FYI, I always knew about FPW slowdown in general but I just didn't realize it was THE primary cause of CKPT TPS slowdown on pgbench. NOTE: I realize that spinning media might exhibit different behavior. And I didn't not say dirty page writing has NO impact on good SSD's. 
It depends, and this is a subject for a later date as I have a theory as to why I something see a sawtooth performance for pgbench TPC-B and sometimes a square wave but I want to prove if first.\n\n\n\n\n\n\n\n\nI thought that the biggest reason for the pgbench RW slowdown during a checkpoint was the flood of dirty page writes increasing the COMMIT latency.  It turns out that the documentation which states that FPW's start \"after a checkpoint\" really means after a CKPT starts.  And this is the really cause of the deep dip in performance.  Maybe only I was fooled... :-)\n\n\n\n\n\n\nIf we can't eliminate FPW's can we at least solve the impact of it?  Instead of writing the before images of pages inline into the WAL, which increases the COMMIT latency, write these same images to a separate physical log file.  The key idea is that I don't believe that COMMIT's require these buffers to be immediately flushed to the physical log.  We only need to flush these before the dirty pages are written.  This delay allows the physical before image IO's to be decoupled and done in an efficient manner without an impact to COMMIT's.\n\n\n\n\n\nWhen we generate a physical image add it to an in memory buffer of before page images.\nPut the physical log offset of the before image into the WAL record.  This is the current physical log file size plus the offset in the in-memory buffer of pages.\nSet a bit in the bufhdr indicating this was done.\nCOMMIT's do not need to worry about those buffers.\nPeriodically flush the in-memory buffer and clear the bit in the BufHdr.\nDuring any dirty page flushing if we see the bit set, which should be rare, then make sure we get our before image flushed.  This would be similar to our LSN based XLogFlush().\n\n\nDo we need these before images for more than one CKPT?  I don't think so.  Do PITR's require before images since it is a continuous rollforward from a restore?  Just some of considerations.\n\n\n\n\n\nDo I need to back this physical log up?  I likely(?) need to deal with replication.\n\n\n\n\n\nTurning off FPW gives about a 20%, maybe more, boost on a pgbench TPC-B RW workload which fits in the buffer cache.  Can I get this 20% improvement with a separate physical log of before page images?\n\n\n\n\n\nDoing IO's off on the side, but decoupled from the WAL stream, doesn't seem to impact COMMIT latency on modern SSD based storage systems.  For instance, you can hammer a shared data and WAL SSD filesystem with dirty page writes from the CKPT, at near the MAX IOPS of the SSD, and not impact COMMIT latency.  However, this presumes that the CKPT's natural spreading of dirty page writes across the CKPT target doesn't push too many outstanding IO's into the storage write Q on the OS/device.NOTE: I don't believe the CKPT's throttling is perfect and I think a burst of dirty pages into the cache just before a CKPT might cause the Q to be flooded and this would then also further slow TPS during the CKPT.  But a fix to this is off topic from the FPW issue.\n\n\n\n\n\nThanks to Andres Freund for both making me aware of the Q depth impact on COMMIT latency and the hint that FPW might also be causing the CKPT slowdown.  FYI, I always knew about FPW slowdown in general but I just didn't realize it was THE primary cause of CKPT TPS slowdown on pgbench.  NOTE: I realize that spinning media might exhibit different behavior.  And I didn't not say dirty page writing has NO impact on good SSD's.  
It depends, and this is a subject for a later date as I have a theory as to why I something see a sawtooth performance for pgbench TPC-B and sometimes a square wave but I want to prove if first.", "msg_date": "Sun, 2 Aug 2020 22:53:07 -0700 (PDT)", "msg_from": "Daniel Wood <hexexpert@comcast.net>", "msg_from_op": true, "msg_subject": "Reduce/eliminate the impact of FPW" }, { "msg_contents": "On Mon, Aug 3, 2020 at 5:26 AM Daniel Wood <hexexpert@comcast.net> wrote:\n> If we can't eliminate FPW's can we at least solve the impact of it? Instead of writing the before images of pages inline into the WAL, which increases the COMMIT latency, write these same images to a separate physical log file. The key idea is that I don't believe that COMMIT's require these buffers to be immediately flushed to the physical log. We only need to flush these before the dirty pages are written. This delay allows the physical before image IO's to be decoupled and done in an efficient manner without an impact to COMMIT's.\n\nI think this is what's called a double-write buffer, or what was tried\nsome years ago under that name. A significant problem is that you\nhave to fsync() the double-write buffer before you can write the WAL.\nSo instead of this:\n\n- write WAL to OS\n- fsync WAL\n\nYou have to do this:\n\n- write double-write buffer to OS\n- fsync double-write buffer\n- write WAL to OS\n- fsync WAL\n\nNote that you cannot overlap these steps -- the first fsync must be\ncompleted before the second write can begin, else you might try to\nreplay WAL for which the double-write buffer information is not\navailable.\n\nBecause of this, I think this is actually quite expensive. COMMIT\nrequires the WAL to be flushed, unless you configure\nsynchronous_commit=off. So this would double the number of fsyncs we\nhave to do. It's not as bad as all that, because the individual fsyncs\nwould be smaller, and that makes a significant difference. For a big\ntransaction that writes a lot of WAL, you'd probably not notice much\ndifference; instead of writing 1000 pages to WAL, you might write 770\npages to the double-write buffer and 270 to the double-write buffer,\nor something like that. But for short transactions, such as those\nperformed by pgbench, you'd probably end up with a lot of cases where\nyou had to write 3 pages instead of 2, and not only that, but the\nwrites have to be consecutive rather than simultaneous, and to\ndifferent parts of the disk rather than sequential. That would likely\nsuck a lot.\n\nIt's entirely possible that these kinds of problems could be mitigated\nthrough really good engineering, maybe to the point where this kind of\nsolution outperforms what we have now for some or even all workloads,\nbut it seems equally possible that it's just always a loser. I don't\nreally know. It seems like a very difficult project.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 3 Aug 2020 11:26:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reduce/eliminate the impact of FPW" }, { "msg_contents": "\n> On 08/03/2020 8:26 AM Robert Haas <robertmhaas@gmail.com> wrote:\n...\n> I think this is what's called a double-write buffer, or what was tried\n> some years ago under that name. A significant problem is that you\n> have to fsync() the double-write buffer before you can write the WAL.\n\nI don't think it does need to be fsync'ed before the WAL. 
If the\nlog record has a FPW reference beyond the physical log EOF then we\ndon't need to restore the before image because we haven't yet done\nthe dirty page write from the cache. The before image only needs\nto be flushed before the dirty page write. Usually this will have\nalready been done.\n\n> ... But for short transactions, such as those\n> performed by pgbench, you'd probably end up with a lot of cases where\n> you had to write 3 pages instead of 2, and not only that, but the\n> writes have to be consecutive rather than simultaneous, and to\n> different parts of the disk rather than sequential. That would likely\n> suck a lot.\n\nWherever you write the before images, in the WAL or into a separate\nfile you would write the same number of pages. I don't understand\nthe 3 pages vs 2 pages comment.\n\nAnd, \"different parts of the disk\"??? I wouldn't enable the feature\non spinning media unless I had a dedicated disk for it.\n\nNOTE:\nIn the 90's, Informix called this the physical log. Restoring at\ncrash time restored physical consistency after which redo/undo\nrecovery achieved logical consistency. From their docs:\n \"If the before-image of a modified page is stored in the physical-log buffer, it is eventually flushed from the physical-log buffer to the physical log on disk. The before-image of the page plays a critical role in restoring data and fast recovery. For more details, see Physical-Log Buffer.\"\n\n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 3 Aug 2020 11:06:17 -0700 (PDT)", "msg_from": "Daniel Wood <hexexpert@comcast.net>", "msg_from_op": true, "msg_subject": "Re: Reduce/eliminate the impact of FPW" }, { "msg_contents": "Increasing checkpoint_timeout helps reduce the amount of log written to the\ndisk. This has several benefits, like a reduced number of WAL IOs, lower archival\nload on the system, and less network traffic to the standby replicas. However,\nthis increases the crash recovery time and impacts server availability.\nInvesting in parallel recovery for Postgres helps reduce the crash recovery\ntime and allows us to change the checkpoint frequency to a much higher value.\nThis idea is orthogonal to the double write improvements mentioned in the\nthread. Thomas Munro has a patch for doing page prefetching during recovery,\nwhich speeds up recovery if the working set doesn't fit in memory; we\nalso need parallel recovery to replay huge amounts of WAL when the working\nset is in memory.\n\nThanks,\nSatya\n\nOn Mon, Aug 3, 2020 at 11:14 AM Daniel Wood <hexexpert@comcast.net> wrote:\n\n>\n> > On 08/03/2020 8:26 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> ...\n> > I think this is what's called a double-write buffer, or what was tried\n> > some years ago under that name. A significant problem is that you\n> > have to fsync() the double-write buffer before you can write the WAL.\n>\n> I don't think it does need to be fsync'ed before the WAL. If the\n> log record has a FPW reference beyond the physical log EOF then we\n> don't need to restore the before image because we haven't yet did\n> the dirty page write from the cache. The before image only needs\n> to be flushed before the dirty page write. Usually this will have\n> already done.\n>\n> > ... 
But for short transactions, such as those\n> > performed by pgbench, you'd probably end up with a lot of cases where\n> > you had to write 3 pages instead of 2, and not only that, but the\n> > writes have to be consecutive rather than simultaneous, and to\n> > different parts of the disk rather than sequential. That would likely\n> > suck a lot.\n>\n> Wherever you write the before images, in the WAL or into a separate\n> file you would write the same number of pages.  I don't understand\n> the 3 pages vs 2 pages comment.\n>\n> And, \"different parts of the disk\"???  I wouldn't enable the feature\n> on spinning media unless I had a dedicated disk for it.\n>\n> NOTE:\n> If the 90's Informix called this the physical log.  Restoring at\n> crash time restored physical consistency after which redo/undo\n> recovery achieved logical consistency.  From their doc's:\n> \"If the before-image of a modified page is stored in the physical-log\n> buffer, it is eventually flushed from the physical-log buffer to the\n> physical log on disk. The before-image of the page plays a critical role in\n> restoring data and fast recovery. For more details, see Physical-Log\n> Buffer.\"\n>\n> > --\n> > Robert Haas\n> > EnterpriseDB: http://www.enterprisedb.com\n> > The Enterprise PostgreSQL Company\n>\n>\n>\n", "msg_date": "Mon, 3 Aug 2020 13:07:32 -0700", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reduce/eliminate the impact of FPW" }, { "msg_contents": "Greetings,\n\nPlease don't top-post on these lists.\n\n* SATYANARAYANA NARLAPURAM (satyanarlapuram@gmail.com) wrote:\n> Increasing checkpoint_timeout helps reduce the amount of log written to the\n> disk. This has several benefits like, reduced number of WAL IO, archival\n> load on the system, less network traffic to the standby replicas. However,\n> this increases the crash recovery time and impact server availability.\n\nSure.\n\n> Investing in parallel recovery for Postgres helps reduce the crash recovery\n> time and allows us to change the checkpoint frequency to much higher value?\n\nParallel recovery is a nice idea but it's pretty far from trivial.. Did\nyou have thoughts about how that would be accomplished?\n\n> This idea is orthogonal to the double write improvements mentioned in the\n> thread. Thomas Munro has a patch of doing page prefetching during recovery\n> which speeds up recovery if the working set doesn't fit in the memory, we\n> also need parallel recovery to replay huge amounts of WAL, when the working\n> set is in memory.\n\nWhat OS, filesystem, etc, are you running where you're seeing that the\nWAL pre-fetch is helping to speed up recovery?  Based on prior\ndiscussion, that seemed to help primarily on ZFS due to the block size\nbeing larger than our block size, which, while somewhat interesting,\nisn't as exciting as finding a way to speed up recovery across the\nboard.\n\nThanks,\n\nStephen", "msg_date": "Tue, 4 Aug 2020 09:05:06 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Reduce/eliminate the impact of FPW" } ]
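As a small, hypothetical footnote to the thread above: the recovery-time half of the argument (a before image only has to be restored if it could already have been overwritten on disk) can be sketched as below. The record layout and names are invented here, not taken from PostgreSQL:

#include <stdbool.h>
#include <stdint.h>

/* Imagined WAL record field: the offset its before image was assigned in the
 * separate before-image ("physical") log. */
typedef struct
{
    uint64_t before_image_offset;
} WalRecordSketch;

/* If the recorded offset lies at or beyond the before-image log's end of
 * file, the image was never flushed, so the guarded dirty-page write cannot
 * have happened either and the on-disk page is still the old version. */
bool
need_restore_before_image(const WalRecordSketch *rec, uint64_t phys_log_eof)
{
    return rec->before_image_offset < phys_log_eof;
}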
[ { "msg_contents": "I propose to replace the remaining uses of StrNCpy() with strlcpy() and \nremove the former. It's clear that strlcpy() has won the popularity \ncontest, and the presence of the former is just confusing now.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 3 Aug 2020 08:59:37 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Replace remaining StrNCpy() by strlcpy()" }, { "msg_contents": "On Mon, 3 Aug 2020 at 18:59, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> I propose to replace the remaining uses of StrNCpy() with strlcpy() and\n> remove the former. It's clear that strlcpy() has won the popularity\n> contest, and the presence of the former is just confusing now.\n\nIt certainly would be good to get rid of some of these, but are some\nof the changes not a bit questionable?\n\ne.g:\n\n@@ -4367,7 +4367,7 @@ pgstat_send_archiver(const char *xlog, bool failed)\n */\n pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_ARCHIVER);\n msg.m_failed = failed;\n- StrNCpy(msg.m_xlog, xlog, sizeof(msg.m_xlog));\n+ strlcpy(msg.m_xlog, xlog, sizeof(msg.m_xlog));\n msg.m_timestamp = GetCurrentTimestamp();\n pgstat_send(&msg, sizeof(msg));\n\nWill mean that we'll now no longer zero the full length of the m_xlog\nfield after the end of the string. Won't that mean we'll start writing\njunk bytes to the stats collector?\n\nDavid\n\n\n", "msg_date": "Mon, 3 Aug 2020 21:01:35 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replace remaining StrNCpy() by strlcpy()" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> - StrNCpy(msg.m_xlog, xlog, sizeof(msg.m_xlog));\n> + strlcpy(msg.m_xlog, xlog, sizeof(msg.m_xlog));\n\n> Will mean that we'll now no longer zero the full length of the m_xlog\n> field after the end of the string. Won't that mean we'll start writing\n> junk bytes to the stats collector?\n\nStrNCpy doesn't zero-fill the destination today either (except for\nthe very last byte). If you need that, you need to memset the\ndest buffer ahead of time.\n\nI didn't review the patch in complete detail, but the principle\nseems sound to me, and strlcpy is surely more standard than StrNCpy.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Aug 2020 07:38:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace remaining StrNCpy() by strlcpy()" }, { "msg_contents": "I wrote:\n> David Rowley <dgrowleyml@gmail.com> writes:\n>> Will mean that we'll now no longer zero the full length of the m_xlog\n>> field after the end of the string. Won't that mean we'll start writing\n>> junk bytes to the stats collector?\n\n> StrNCpy doesn't zero-fill the destination today either (except for\n> the very last byte).\n\nOh, no, I take that back --- didn't read all of the strncpy man\npage :-(. Yeah, this is a point. We'd need to check each call\nsite to see whether the zero-padding matters.\n\nIn the specific case of the stats collector, if you don't want\nto be sending junk bytes then you'd better be memset'ing the\nwhole message buffer not just this string field. So I'm not\nsure that the argument has any force there. 
But in places\nlike namecpy() and namestrcpy() we absolutely do mean to be\nzeroing the whole destination buffer.\n\nmemset plus strlcpy might still be preferable to StrNCpy for\nreadability by people new to Postgres; but it's less of a\nslam dunk than I thought.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Aug 2020 08:12:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace remaining StrNCpy() by strlcpy()" }, { "msg_contents": "On 2020-08-03 14:12, Tom Lane wrote:\n> I wrote:\n>> David Rowley <dgrowleyml@gmail.com> writes:\n>>> Will mean that we'll now no longer zero the full length of the m_xlog\n>>> field after the end of the string. Won't that mean we'll start writing\n>>> junk bytes to the stats collector?\n> \n>> StrNCpy doesn't zero-fill the destination today either (except for\n>> the very last byte).\n> \n> Oh, no, I take that back --- didn't read all of the strncpy man\n> page :-(. Yeah, this is a point. We'd need to check each call\n> site to see whether the zero-padding matters.\n\nOh, that's easy to miss.\n\n> In the specific case of the stats collector, if you don't want\n> to be sending junk bytes then you'd better be memset'ing the\n> whole message buffer not just this string field. So I'm not\n> sure that the argument has any force there. But in places\n> like namecpy() and namestrcpy() we absolutely do mean to be\n> zeroing the whole destination buffer.\n\nThat's easy to fix, but it's perhaps wondering briefly why it needs to \nbe zero-padded. hashname() doesn't care, heap_form_tuple() doesn't \ncare. Does anything care?\n\nWhile we're here, shouldn't namestrcpy() do some pg_mbcliplen() stuff \nlike namein()?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 3 Aug 2020 19:27:24 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Replace remaining StrNCpy() by strlcpy()" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-08-03 14:12, Tom Lane wrote:\n>> In the specific case of the stats collector, if you don't want\n>> to be sending junk bytes then you'd better be memset'ing the\n>> whole message buffer not just this string field. So I'm not\n>> sure that the argument has any force there. But in places\n>> like namecpy() and namestrcpy() we absolutely do mean to be\n>> zeroing the whole destination buffer.\n\n> That's easy to fix, but it's perhaps wondering briefly why it needs to \n> be zero-padded. hashname() doesn't care, heap_form_tuple() doesn't \n> care. Does anything care?\n\nWe do have an expectation that there are no undefined bytes in values to\nbe stored on-disk. There's even some code in coerce_type() that will\ncomplain about this:\n\n * For pass-by-reference data types, repeat the conversion to see if\n * the input function leaves any uninitialized bytes in the result. We\n * can only detect that reliably if RANDOMIZE_ALLOCATED_MEMORY is\n * enabled, so we don't bother testing otherwise. The reason we don't\n * want any instability in the input function is that comparison of\n * Const nodes relies on bytewise comparison of the datums, so if the\n * input function leaves garbage then subexpressions that should be\n * identical may not get recognized as such. 
See pgsql-hackers\n * discussion of 2008-04-04.\n\n> While we're here, shouldn't namestrcpy() do some pg_mbcliplen() stuff \n> like namein()?\n\nExcellent point --- probably so, unless the callers are all truncating\nin advance, which I doubt.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Aug 2020 13:39:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace remaining StrNCpy() by strlcpy()" }, { "msg_contents": "On 2020-08-03 19:39, Tom Lane wrote:\n>> That's easy to fix, but it's perhaps wondering briefly why it needs to\n>> be zero-padded. hashname() doesn't care, heap_form_tuple() doesn't\n>> care. Does anything care?\n> \n> We do have an expectation that there are no undefined bytes in values to\n> be stored on-disk. There's even some code in coerce_type() that will\n> complain about this:\n\nOkay, here is a new patch with improved implementations of namecpy() and \nnamestrcpy(). I didn't see any other places that relied on the \nzero-filling behavior of strncpy().\n\n>> While we're here, shouldn't namestrcpy() do some pg_mbcliplen() stuff\n>> like namein()?\n> \n> Excellent point --- probably so, unless the callers are all truncating\n> in advance, which I doubt.\n\nI will look into that separately.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 4 Aug 2020 15:31:03 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Replace remaining StrNCpy() by strlcpy()" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Okay, here is a new patch with improved implementations of namecpy() and \n> namestrcpy(). I didn't see any other places that relied on the \n> zero-filling behavior of strncpy().\n\nI've looked through this patch, and I concur with your conclusion that\nnoplace else is depending on zero-fill, with the exception of the one\nplace in pgstat.c that David already noted. But the issue there is only\nthat valgrind might bitch about send()'ing undefined bytes, and ISTM\nthat the existing suppressions in our valgrind.supp should already\nhandle it, since we already have other pgstat messages with pad bytes.\n\nHowever I do see one remaining nit to pick, in CreateInitDecodingContext:\n \n \t/* register output plugin name with slot */\n \tSpinLockAcquire(&slot->mutex);\n-\tStrNCpy(NameStr(slot->data.plugin), plugin, NAMEDATALEN);\n+\tnamestrcpy(&slot->data.plugin, plugin);\n \tSpinLockRelease(&slot->mutex);\n\nThis is already a pro-forma violation of our rule about \"only\nstraight-line code inside a spinlock\". Now I'm not terribly concerned\nabout that right now, and the patch as it stands is only changing things\ncosmetically. But if you modify namestrcpy to do pg_mbcliplen then all\nof a sudden there is a whole lot of code that could get reached within\nthe spinlock, and I'm not a bit happy about that prospect.\n\nThe least-risk fix would be to namestrcpy() into a local variable\nand then just use a plain memcpy() inside the spinlock. There might\nbe better ways if we're willing to make assumptions about what the\nplugin name can be. For that matter, do we really need a spinlock\nhere at all? Why is the plugin name critical but the rest of the\nslot not?\n\nBTW, while we're here I think we ought to change namecpy and namestrcpy\nto return void (no caller checks their results) and drop their checks\nfor null-pointer inputs. 
AFAICS a null pointer would be a caller bug in\nevery case, and if it isn't, why is failing to initialize the\ndestination an OK outcome? I find the provisions for nulls in namestrcmp\npretty useless too, although it looks like at least some thought has\nbeen spent there.\n\nI think this is committable once these points are addressed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Aug 2020 11:49:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace remaining StrNCpy() by strlcpy()" }, { "msg_contents": "On Tue, 4 Aug 2020 at 00:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > David Rowley <dgrowleyml@gmail.com> writes:\n> >> Will mean that we'll now no longer zero the full length of the m_xlog\n> >> field after the end of the string. Won't that mean we'll start writing\n> >> junk bytes to the stats collector?\n>\n> > StrNCpy doesn't zero-fill the destination today either (except for\n> > the very last byte).\n>\n> Oh, no, I take that back --- didn't read all of the strncpy man\n> page :-(. Yeah, this is a point. We'd need to check each call\n> site to see whether the zero-padding matters.\n\nI just had a thought that even strlcpy() is not really ideal for many\nof our purposes for it.\n\nCurrently we still have cruddy code like:\n\nstrlcpy(fullname, pg_TZDIR(), sizeof(fullname));\nif (strlen(fullname) + 1 + strlen(name) >= MAXPGPATH)\nreturn -1; /* not gonna fit */\nstrcat(fullname, \"/\");\nstrcat(fullname, name);\n\nIf strlcpy() had been designed differently to take a signed size and\ndo nothing when the size is <= 0 then we could have had much cleaner\nimplementations of the above:\n\nsize_t len;\nlen = strlcpy(fullname, pg_TZDIR(), sizeof(fullname));\nlen += strlcpy(fullname + len, \"/\", sizeof(fullname) - len);\nlen += strlcpy(fullname + len, name, sizeof(fullname) - len);\nif (len >= sizeof(fullname))\nreturn -1; /* didn't fit */\n\nThis should be much more efficient, in general, due to the lack of\nstrlen() calls and the concatenation not having to refind the end of\nthe string again each time.\n\nNow, I'm not saying we should change strlcpy() to take a signed type\ninstead of size_t to allow this to work. Reusing that name for another\npurpose is likely a bad idea that will lead to misuse and confusion.\nWhat I am saying is that perhaps strlcpy() is not all that it's\ncracked up to be and it could have been done better. Perhaps we can\ninvent our own version that fixes the shortcomings.\n\nJust a thought.\n\nOn the other hand, perhaps we're not using the return value of\nstrlcpy() enough for such a change to make sense. However, a quick\nglance shows we certainly could use it more often, e.g:\n\nif (parsed->xinfo & XACT_XINFO_HAS_GID)\n{\nstrlcpy(parsed->twophase_gid, data, sizeof(parsed->twophase_gid));\ndata += strlen(data) + 1;\n}\n\nDavid\n\n\n", "msg_date": "Thu, 6 Aug 2020 10:59:56 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replace remaining StrNCpy() by strlcpy()" }, { "msg_contents": "On 2020-08-05 17:49, Tom Lane wrote:\n> However I do see one remaining nit to pick, in CreateInitDecodingContext:\n> \n> \t/* register output plugin name with slot */\n> \tSpinLockAcquire(&slot->mutex);\n> -\tStrNCpy(NameStr(slot->data.plugin), plugin, NAMEDATALEN);\n> +\tnamestrcpy(&slot->data.plugin, plugin);\n> \tSpinLockRelease(&slot->mutex);\n> \n> This is already a pro-forma violation of our rule about \"only\n> straight-line code inside a spinlock\". 
Now I'm not terribly concerned\n> about that right now, and the patch as it stands is only changing things\n> cosmetically. But if you modify namestrcpy to do pg_mbcliplen then all\n> of a sudden there is a whole lot of code that could get reached within\n> the spinlock, and I'm not a bit happy about that prospect.\n\nfixed\n\n> BTW, while we're here I think we ought to change namecpy and namestrcpy\n> to return void (no caller checks their results) and drop their checks\n> for null-pointer inputs. AFAICS a null pointer would be a caller bug in\n> every case, and if it isn't, why is failing to initialize the\n> destination an OK outcome? I find the provisions for nulls in namestrcmp\n> pretty useless too, although it looks like at least some thought has\n> been spent there.\n\nfixed\n\nI removed namecpy() altogether because you can just use struct assignment.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 8 Aug 2020 07:57:53 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Replace remaining StrNCpy() by strlcpy()" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I removed namecpy() altogether because you can just use struct assignment.\n\nMakes sense, and I notice it was unused anyway.\n\nv3 passes eyeball examination (I didn't bother running tests), with\nonly one remaining nit: the proposed commit message says\n\n\tThey are equivalent,\n\nwhich per this thread is incorrect. Somebody might possibly refer to this\ncommit for guidance in updating third-party code, so I don't think we want\nto leave a misleading claim here. Perhaps something like\n\n\tThey are equivalent, except that StrNCpy zero-fills the entire\n\tdestination buffer instead of providing just one trailing zero.\n\tFor all but a tiny number of callers, that's just overhead rather\n\tthan being desirable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 08 Aug 2020 12:09:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace remaining StrNCpy() by strlcpy()" }, { "msg_contents": "On 2020-08-08 18:09, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> I removed namecpy() altogether because you can just use struct assignment.\n> \n> Makes sense, and I notice it was unused anyway.\n> \n> v3 passes eyeball examination (I didn't bother running tests), with\n> only one remaining nit: the proposed commit message says\n> \n> \tThey are equivalent,\n> \n> which per this thread is incorrect. Somebody might possibly refer to this\n> commit for guidance in updating third-party code, so I don't think we want\n> to leave a misleading claim here. Perhaps something like\n> \n> \tThey are equivalent, except that StrNCpy zero-fills the entire\n> \tdestination buffer instead of providing just one trailing zero.\n> \tFor all but a tiny number of callers, that's just overhead rather\n> \tthan being desirable.\n\nCommitted with that change.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 11 Aug 2020 01:23:15 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Replace remaining StrNCpy() by strlcpy()" } ]
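A standalone illustration of the zero-padding point discussed in the thread above (this is only a sketch of the idea, not the code that was committed; NAMEDATALEN and NameSketch stand in for the real definitions):

#include <stdio.h>
#include <string.h>

#define NAMEDATALEN 64                      /* stand-in for the real constant */

typedef struct { char data[NAMEDATALEN]; } NameSketch;

/* strlcpy() always NUL-terminates but, unlike strncpy(), does not zero-fill
 * the rest of the destination, so a fixed-width Name-style field that must
 * contain no undefined bytes needs an explicit memset() first.  strlcpy() is
 * native on the BSDs, macOS and recent glibc; PostgreSQL ships a fallback in
 * src/port/strlcpy.c for platforms that lack it. */
void
name_set(NameSketch *name, const char *src)
{
    memset(name->data, 0, NAMEDATALEN);     /* no undefined trailing bytes */
    strlcpy(name->data, src, NAMEDATALEN);  /* truncate, always NUL-terminate */
}

int
main(void)
{
    NameSketch n;

    name_set(&n, "some_relation_name");
    printf("%s\n", n.data);
    return 0;
}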
[ { "msg_contents": " Hi,\n\nAs a follow-up to bug #16570 [1] and other previous discussions\non the mailing-lists, I'm checking out PG13 beta for Windows\nfrom:\n https://www.enterprisedb.com/postgresql-early-experience\nand it ships with the same obsolete ICU 53 that was used\nfor PG 10,11,12.\nBesides not having the latest Unicode features and fixes, ICU 53\nignores the BCP 47 tags syntax in collations used as examples\nin Postgres documentation, which leads to confusion and\nfalse bug reports.\nThe current version is ICU 67.\n\nI don't see where the suggestion to upgrade it before the\nnext PG release should be addressed but maybe some people on\nthis list do know or have the leverage to make it happen?\n\n[1]\nhttps://www.postgresql.org/message-id/16570-58cc04e1a6ef3c3f%40postgresql.org\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: https://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", "msg_date": "Mon, 03 Aug 2020 20:56:06 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": true, "msg_subject": "EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Mon, Aug 3, 2020 at 08:56:06PM +0200, Daniel Verite wrote:\n> Hi,\n> \n> As a follow-up to bug #16570 [1] and other previous discussions\n> on the mailing-lists, I'm checking out PG13 beta for Windows\n> from:\n> https://www.enterprisedb.com/postgresql-early-experience\n> and it ships with the same obsolete ICU 53 that was used\n> for PG 10,11,12.\n> Besides not having the latest Unicode features and fixes, ICU 53\n> ignores the BCP 47 tags syntax in collations used as examples\n> in Postgres documentation, which leads to confusion and\n> false bug reports.\n> The current version is ICU 67.\n> \n> I don't see where the suggestion to upgrade it before the\n> next PG release should be addressed but maybe some people on\n> this list do know or have the leverage to make it happen?\n\nWell, you can ask EDB about this, but perhaps the have kept the same ICU\nversion so indexes will not need to be reindexed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 3 Aug 2020 20:04:28 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Tue, Aug 4, 2020 at 1:04 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Mon, Aug 3, 2020 at 08:56:06PM +0200, Daniel Verite wrote:\n> > Hi,\n> >\n> > As a follow-up to bug #16570 [1] and other previous discussions\n> > on the mailing-lists, I'm checking out PG13 beta for Windows\n> > from:\n> > https://www.enterprisedb.com/postgresql-early-experience\n> > and it ships with the same obsolete ICU 53 that was used\n> > for PG 10,11,12.\n> > Besides not having the latest Unicode features and fixes, ICU 53\n> > ignores the BCP 47 tags syntax in collations used as examples\n> > in Postgres documentation, which leads to confusion and\n> > false bug reports.\n> > The current version is ICU 67.\n> >\n> > I don't see where the suggestion to upgrade it before the\n> > next PG release should be addressed but maybe some people on\n> > this list do know or have the leverage to make it happen?\n>\n> Well, you can ask EDB about this, but perhaps the have kept the same ICU\n> version so indexes will not need to be reindexed.\n>\n\nCorrect - updating ICU would mean a reindex is required following 
any\nupgrade, major or minor.\n\nI would really like to find an acceptable solution to this however as it\nreally would be good to be able to update ICU.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com\n\nOn Tue, Aug 4, 2020 at 1:04 AM Bruce Momjian <bruce@momjian.us> wrote:On Mon, Aug  3, 2020 at 08:56:06PM +0200, Daniel Verite wrote:\n>  Hi,\n> \n> As a follow-up to bug #16570 [1] and other previous discussions\n> on the mailing-lists, I'm checking out PG13 beta for Windows\n> from:\n>  https://www.enterprisedb.com/postgresql-early-experience\n> and it ships with the same obsolete ICU 53 that was used\n> for PG 10,11,12.\n> Besides not having the latest Unicode features and fixes, ICU 53\n> ignores the BCP 47 tags syntax in collations used as examples\n> in Postgres documentation, which leads to confusion and\n> false bug reports.\n> The current version is ICU 67.\n> \n> I don't see where the suggestion to upgrade it before the\n> next PG release should be addressed but maybe some people on\n> this list do know or have the leverage to make it happen?\n\nWell, you can ask EDB about this, but perhaps the have kept the same ICU\nversion so indexes will not need to be reindexed.Correct - updating ICU would mean a reindex is required following any upgrade, major or minor.I would really like to find an acceptable solution to this however as it really would be good to be able to update ICU.-- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEDB: http://www.enterprisedb.com", "msg_date": "Tue, 4 Aug 2020 09:06:40 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "Dave Page schrieb am 04.08.2020 um 10:06:\n> Correct - updating ICU would mean a reindex is required following any\n> upgrade, major or minor.\n>\n> I would really like to find an acceptable solution to this however as\n> it really would be good to be able to update ICU.\n>\n\nWhat about providing a newer ICU version as kind of an \"add-on\" download containing only the needed DLLs (assuming it's as easy as only replacing the DLLs)?\n\nThen everyone who wishes to use a newer ICU version can manually install them.\nIf that download carries a big \"ATTENTION: reindex required\" I don't think this would be a big risk.\n\nThomas\n\n\n\n\n", "msg_date": "Tue, 4 Aug 2020 11:24:11 +0200", "msg_from": "Thomas Kellerer <shammat@gmx.net>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Tue, Aug 4, 2020 at 10:07 AM Dave Page <dpage@pgadmin.org> wrote:\n\n>\n>\n> On Tue, Aug 4, 2020 at 1:04 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n>> On Mon, Aug 3, 2020 at 08:56:06PM +0200, Daniel Verite wrote:\n>> > Hi,\n>> >\n>> > As a follow-up to bug #16570 [1] and other previous discussions\n>> > on the mailing-lists, I'm checking out PG13 beta for Windows\n>> > from:\n>> > https://www.enterprisedb.com/postgresql-early-experience\n>> > and it ships with the same obsolete ICU 53 that was used\n>> > for PG 10,11,12.\n>> > Besides not having the latest Unicode features and fixes, ICU 53\n>> > ignores the BCP 47 tags syntax in collations used as examples\n>> > in Postgres documentation, which leads to confusion and\n>> > false bug reports.\n>> > The current version is ICU 67.\n>> >\n>> > I don't see where the suggestion to upgrade it before the\n>> > next PG release should be addressed but maybe some 
people on\n>> > this list do know or have the leverage to make it happen?\n>>\n>> Well, you can ask EDB about this, but perhaps the have kept the same ICU\n>> version so indexes will not need to be reindexed.\n>>\n>\n> Correct - updating ICU would mean a reindex is required following any\n> upgrade, major or minor.\n>\n> I would really like to find an acceptable solution to this however as it\n> really would be good to be able to update ICU.\n>\n\nIt certainly couldn't and shouldn't be done in a minor.\n\nBut doing so in v13 doesn't seem entirely unreasonable, especially given\nthat I believe we will detect the requirement to reindex thanks to the\nversioning, and not just start returning invalid results (like, say, with\nthose glibc updates).\n\nWould it be possible to have the installer even check if there are any icu\nindexes in the database. If there aren't, just put in the new version of\nicu. If there are, give the user a choice of the old version or new version\nand reindex?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Aug 4, 2020 at 10:07 AM Dave Page <dpage@pgadmin.org> wrote:On Tue, Aug 4, 2020 at 1:04 AM Bruce Momjian <bruce@momjian.us> wrote:On Mon, Aug  3, 2020 at 08:56:06PM +0200, Daniel Verite wrote:\n>  Hi,\n> \n> As a follow-up to bug #16570 [1] and other previous discussions\n> on the mailing-lists, I'm checking out PG13 beta for Windows\n> from:\n>  https://www.enterprisedb.com/postgresql-early-experience\n> and it ships with the same obsolete ICU 53 that was used\n> for PG 10,11,12.\n> Besides not having the latest Unicode features and fixes, ICU 53\n> ignores the BCP 47 tags syntax in collations used as examples\n> in Postgres documentation, which leads to confusion and\n> false bug reports.\n> The current version is ICU 67.\n> \n> I don't see where the suggestion to upgrade it before the\n> next PG release should be addressed but maybe some people on\n> this list do know or have the leverage to make it happen?\n\nWell, you can ask EDB about this, but perhaps the have kept the same ICU\nversion so indexes will not need to be reindexed.Correct - updating ICU would mean a reindex is required following any upgrade, major or minor.I would really like to find an acceptable solution to this however as it really would be good to be able to update ICU.It certainly couldn't and shouldn't be done in a minor.But doing so in v13 doesn't seem entirely unreasonable, especially given that I believe we will detect the requirement to reindex thanks to the versioning, and not just start returning invalid results (like, say, with those glibc updates). Would it be possible to have the installer even check if there are any icu indexes in the database. If there aren't, just put in the new version of icu. 
If there are, give the user a choice of the old version or new version and reindex?--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Tue, 4 Aug 2020 11:28:58 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Tue, Aug 4, 2020 at 10:29 AM Magnus Hagander <magnus@hagander.net> wrote:\n\n> On Tue, Aug 4, 2020 at 10:07 AM Dave Page <dpage@pgadmin.org> wrote:\n>\n>>\n>>\n>> On Tue, Aug 4, 2020 at 1:04 AM Bruce Momjian <bruce@momjian.us> wrote:\n>>\n>>> On Mon, Aug 3, 2020 at 08:56:06PM +0200, Daniel Verite wrote:\n>>> > Hi,\n>>> >\n>>> > As a follow-up to bug #16570 [1] and other previous discussions\n>>> > on the mailing-lists, I'm checking out PG13 beta for Windows\n>>> > from:\n>>> > https://www.enterprisedb.com/postgresql-early-experience\n>>> > and it ships with the same obsolete ICU 53 that was used\n>>> > for PG 10,11,12.\n>>> > Besides not having the latest Unicode features and fixes, ICU 53\n>>> > ignores the BCP 47 tags syntax in collations used as examples\n>>> > in Postgres documentation, which leads to confusion and\n>>> > false bug reports.\n>>> > The current version is ICU 67.\n>>> >\n>>> > I don't see where the suggestion to upgrade it before the\n>>> > next PG release should be addressed but maybe some people on\n>>> > this list do know or have the leverage to make it happen?\n>>>\n>>> Well, you can ask EDB about this, but perhaps the have kept the same ICU\n>>> version so indexes will not need to be reindexed.\n>>>\n>>\n>> Correct - updating ICU would mean a reindex is required following any\n>> upgrade, major or minor.\n>>\n>> I would really like to find an acceptable solution to this however as it\n>> really would be good to be able to update ICU.\n>>\n>\n> It certainly couldn't and shouldn't be done in a minor.\n>\n> But doing so in v13 doesn't seem entirely unreasonable, especially given\n> that I believe we will detect the requirement to reindex thanks to the\n> versioning, and not just start returning invalid results (like, say, with\n> those glibc updates).\n>\n> Would it be possible to have the installer even check if there are any icu\n> indexes in the database. If there aren't, just put in the new version of\n> icu. If there are, give the user a choice of the old version or new version\n> and reindex?\n>\n\nThat would require fairly large changes to the installer to allow it to\nlogin to the database server (whether that would work would be dependent on\nhow pg_hba.conf is configured), and also assumes that the ICU ABI hasn't\nchanged between releases. 
It would also require some hacky renaming of\nDLLs, as they have the version number in them.\n\nThe chances of designing, building and testing that thoroughly before v13\nis released is about zero I'd say.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com\n\nOn Tue, Aug 4, 2020 at 10:29 AM Magnus Hagander <magnus@hagander.net> wrote:On Tue, Aug 4, 2020 at 10:07 AM Dave Page <dpage@pgadmin.org> wrote:On Tue, Aug 4, 2020 at 1:04 AM Bruce Momjian <bruce@momjian.us> wrote:On Mon, Aug  3, 2020 at 08:56:06PM +0200, Daniel Verite wrote:\n>  Hi,\n> \n> As a follow-up to bug #16570 [1] and other previous discussions\n> on the mailing-lists, I'm checking out PG13 beta for Windows\n> from:\n>  https://www.enterprisedb.com/postgresql-early-experience\n> and it ships with the same obsolete ICU 53 that was used\n> for PG 10,11,12.\n> Besides not having the latest Unicode features and fixes, ICU 53\n> ignores the BCP 47 tags syntax in collations used as examples\n> in Postgres documentation, which leads to confusion and\n> false bug reports.\n> The current version is ICU 67.\n> \n> I don't see where the suggestion to upgrade it before the\n> next PG release should be addressed but maybe some people on\n> this list do know or have the leverage to make it happen?\n\nWell, you can ask EDB about this, but perhaps the have kept the same ICU\nversion so indexes will not need to be reindexed.Correct - updating ICU would mean a reindex is required following any upgrade, major or minor.I would really like to find an acceptable solution to this however as it really would be good to be able to update ICU.It certainly couldn't and shouldn't be done in a minor.But doing so in v13 doesn't seem entirely unreasonable, especially given that I believe we will detect the requirement to reindex thanks to the versioning, and not just start returning invalid results (like, say, with those glibc updates). Would it be possible to have the installer even check if there are any icu indexes in the database. If there aren't, just put in the new version of icu. If there are, give the user a choice of the old version or new version and reindex?That would require fairly large changes to the installer to allow it to login to the database server (whether that would work would be dependent on how pg_hba.conf is configured), and also assumes that the ICU ABI hasn't changed between releases. It would also require some hacky renaming of DLLs, as they have the version number in them.The chances of designing, building and testing that thoroughly before v13 is released is about zero I'd say. 
-- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEDB: http://www.enterprisedb.com", "msg_date": "Tue, 4 Aug 2020 10:41:54 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Tue, Aug 4, 2020 at 11:42 AM Dave Page <dpage@pgadmin.org> wrote:\n\n>\n>\n> On Tue, Aug 4, 2020 at 10:29 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n>\n>> On Tue, Aug 4, 2020 at 10:07 AM Dave Page <dpage@pgadmin.org> wrote:\n>>\n>>>\n>>>\n>>> On Tue, Aug 4, 2020 at 1:04 AM Bruce Momjian <bruce@momjian.us> wrote:\n>>>\n>>>> On Mon, Aug 3, 2020 at 08:56:06PM +0200, Daniel Verite wrote:\n>>>> > Hi,\n>>>> >\n>>>> > As a follow-up to bug #16570 [1] and other previous discussions\n>>>> > on the mailing-lists, I'm checking out PG13 beta for Windows\n>>>> > from:\n>>>> > https://www.enterprisedb.com/postgresql-early-experience\n>>>> > and it ships with the same obsolete ICU 53 that was used\n>>>> > for PG 10,11,12.\n>>>> > Besides not having the latest Unicode features and fixes, ICU 53\n>>>> > ignores the BCP 47 tags syntax in collations used as examples\n>>>> > in Postgres documentation, which leads to confusion and\n>>>> > false bug reports.\n>>>> > The current version is ICU 67.\n>>>> >\n>>>> > I don't see where the suggestion to upgrade it before the\n>>>> > next PG release should be addressed but maybe some people on\n>>>> > this list do know or have the leverage to make it happen?\n>>>>\n>>>> Well, you can ask EDB about this, but perhaps the have kept the same ICU\n>>>> version so indexes will not need to be reindexed.\n>>>>\n>>>\n>>> Correct - updating ICU would mean a reindex is required following any\n>>> upgrade, major or minor.\n>>>\n>>> I would really like to find an acceptable solution to this however as it\n>>> really would be good to be able to update ICU.\n>>>\n>>\n>> It certainly couldn't and shouldn't be done in a minor.\n>>\n>> But doing so in v13 doesn't seem entirely unreasonable, especially given\n>> that I believe we will detect the requirement to reindex thanks to the\n>> versioning, and not just start returning invalid results (like, say, with\n>> those glibc updates).\n>>\n>> Would it be possible to have the installer even check if there are any\n>> icu indexes in the database. If there aren't, just put in the new version\n>> of icu. If there are, give the user a choice of the old version or new\n>> version and reindex?\n>>\n>\n> That would require fairly large changes to the installer to allow it to\n> login to the database server (whether that would work would be dependent on\n> how pg_hba.conf is configured), and also assumes that the ICU ABI hasn't\n> changed between releases. It would also require some hacky renaming of\n> DLLs, as they have the version number in them.\n>\n\nI assumed it had code for that stuff already. Mainly because I assumed it\nsupported doing pg_upgrade, which requires similar things no?\n\n\n\n>\n> The chances of designing, building and testing that thoroughly before v13\n> is released is about zero I'd say.\n>\n\nYeah, I can see how it would be for 13 -- unfortunately. 
But I definitely\nthink it's something that should go high on the list of things to get fixed\nfor 14.\n\n//Magnus\n\nOn Tue, Aug 4, 2020 at 11:42 AM Dave Page <dpage@pgadmin.org> wrote:On Tue, Aug 4, 2020 at 10:29 AM Magnus Hagander <magnus@hagander.net> wrote:On Tue, Aug 4, 2020 at 10:07 AM Dave Page <dpage@pgadmin.org> wrote:On Tue, Aug 4, 2020 at 1:04 AM Bruce Momjian <bruce@momjian.us> wrote:On Mon, Aug  3, 2020 at 08:56:06PM +0200, Daniel Verite wrote:\n>  Hi,\n> \n> As a follow-up to bug #16570 [1] and other previous discussions\n> on the mailing-lists, I'm checking out PG13 beta for Windows\n> from:\n>  https://www.enterprisedb.com/postgresql-early-experience\n> and it ships with the same obsolete ICU 53 that was used\n> for PG 10,11,12.\n> Besides not having the latest Unicode features and fixes, ICU 53\n> ignores the BCP 47 tags syntax in collations used as examples\n> in Postgres documentation, which leads to confusion and\n> false bug reports.\n> The current version is ICU 67.\n> \n> I don't see where the suggestion to upgrade it before the\n> next PG release should be addressed but maybe some people on\n> this list do know or have the leverage to make it happen?\n\nWell, you can ask EDB about this, but perhaps the have kept the same ICU\nversion so indexes will not need to be reindexed.Correct - updating ICU would mean a reindex is required following any upgrade, major or minor.I would really like to find an acceptable solution to this however as it really would be good to be able to update ICU.It certainly couldn't and shouldn't be done in a minor.But doing so in v13 doesn't seem entirely unreasonable, especially given that I believe we will detect the requirement to reindex thanks to the versioning, and not just start returning invalid results (like, say, with those glibc updates). Would it be possible to have the installer even check if there are any icu indexes in the database. If there aren't, just put in the new version of icu. If there are, give the user a choice of the old version or new version and reindex?That would require fairly large changes to the installer to allow it to login to the database server (whether that would work would be dependent on how pg_hba.conf is configured), and also assumes that the ICU ABI hasn't changed between releases. It would also require some hacky renaming of DLLs, as they have the version number in them.I assumed it had code for that stuff already. Mainly because I assumed it supported doing pg_upgrade, which requires similar things no? The chances of designing, building and testing that thoroughly before v13 is released is about zero I'd say.Yeah, I can see how it would be for 13 -- unfortunately. 
But I definitely think it's something that should go high on the list of things to get fixed for 14.//Magnus", "msg_date": "Tue, 11 Aug 2020 14:58:30 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Mon, 3 Aug 2020 at 13:56, Daniel Verite <daniel@manitou-mail.org> wrote:\n>\n> Hi,\n>\n> As a follow-up to bug #16570 [1] and other previous discussions\n> on the mailing-lists, I'm checking out PG13 beta for Windows\n> from:\n> https://www.enterprisedb.com/postgresql-early-experience\n> and it ships with the same obsolete ICU 53 that was used\n> for PG 10,11,12.\n> Besides not having the latest Unicode features and fixes, ICU 53\n> ignores the BCP 47 tags syntax in collations used as examples\n> in Postgres documentation, which leads to confusion and\n> false bug reports.\n> The current version is ICU 67.\n>\n\nHi,\n\nSadly, that is managed by EDB and not by the community.\n\nYou can try https://www.2ndquadrant.com/en/resources/postgresql-installer-2ndquadrant/\nwhich uses ICU-62.2, is not the latest but should allow you to follow\nthe examples in the documentation.\n\n-- \nJaime Casanova www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 11 Aug 2020 13:39:02 -0500", "msg_from": "Jaime Casanova <jaime.casanova@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "Jaime Casanova schrieb am 11.08.2020 um 20:39:\n>> As a follow-up to bug #16570 [1] and other previous discussions\n>> on the mailing-lists, I'm checking out PG13 beta for Windows\n>> from:\n>> https://www.enterprisedb.com/postgresql-early-experience\n>> and it ships with the same obsolete ICU 53 that was used\n>> for PG 10,11,12.\n>> Besides not having the latest Unicode features and fixes, ICU 53\n>> ignores the BCP 47 tags syntax in collations used as examples\n>> in Postgres documentation, which leads to confusion and\n>> false bug reports.\n>> The current version is ICU 67.\n>>\n>\n> Sadly, that is managed by EDB and not by the community.\n>\n> You can try https://www.2ndquadrant.com/en/resources/postgresql-installer-2ndquadrant/\n> which uses ICU-62.2, is not the latest but should allow you to follow\n> the examples in the documentation.\n\n\nOne of the reasons I prefer the EDB builds is, that they provide a ZIP file without the installer overhead.\nAny chance 2ndQuadrant can supply something like that as well?\n\nThomas\n\n\n", "msg_date": "Tue, 11 Aug 2020 20:45:20 +0200", "msg_from": "Thomas Kellerer <shammat@gmx.net>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Tue, 11 Aug 2020 at 13:45, Thomas Kellerer <shammat@gmx.net> wrote:\n>\n> Jaime Casanova schrieb am 11.08.2020 um 20:39:\n> >> As a follow-up to bug #16570 [1] and other previous discussions\n> >> on the mailing-lists, I'm checking out PG13 beta for Windows\n> >> from:\n> >> https://www.enterprisedb.com/postgresql-early-experience\n> >> and it ships with the same obsolete ICU 53 that was used\n> >> for PG 10,11,12.\n> >> Besides not having the latest Unicode features and fixes, ICU 53\n> >> ignores the BCP 47 tags syntax in collations used as examples\n> >> in Postgres documentation, which leads to confusion and\n> >> false bug reports.\n> >> The current version is ICU 67.\n> >>\n> >\n> > Sadly, that is managed by EDB 
and not by the community.\n> >\n> > You can try https://www.2ndquadrant.com/en/resources/postgresql-installer-2ndquadrant/\n> > which uses ICU-62.2, is not the latest but should allow you to follow\n> > the examples in the documentation.\n>\n>\n> One of the reasons I prefer the EDB builds is, that they provide a ZIP file without the installer overhead.\n> Any chance 2ndQuadrant can supply something like that as well?\n>\n\ni don't think so, an unattended install mode is the closest\n\n-- \nJaime Casanova www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 12 Aug 2020 12:54:45 -0500", "msg_from": "Jaime Casanova <jaime.casanova@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Tue, Aug 11, 2020 at 02:58:30PM +0200, Magnus Hagander wrote:\n> On Tue, Aug 4, 2020 at 11:42 AM Dave Page <dpage@pgadmin.org> wrote:\n> That would require fairly large changes to the installer to allow it to\n> login to the database server (whether that would work would be�dependent on\n> how pg_hba.conf is configured), and also assumes that the ICU ABI hasn't\n> changed between releases. It would also require some hacky renaming of\n> DLLs, as they have the version number in them.\n> \n> I assumed it had code for that stuff already. Mainly because I assumed it\n> supported doing pg_upgrade, which requires similar things no?\n\nWhile pg_upgrade requires having the old and new cluster software in\nplace, I don't think it helps allowing different ICU versions for each\ncluster. I guess you can argue that if you know the user is _not_ going\nto be using pg_upgrade, then a new ICU version should be used for the\nnew cluster.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 14 Aug 2020 09:00:06 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Fri, Aug 14, 2020 at 09:00:06AM -0400, Bruce Momjian wrote:\n> On Tue, Aug 11, 2020 at 02:58:30PM +0200, Magnus Hagander wrote:\n>> I assumed it had code for that stuff already. Mainly because I assumed it\n>> supported doing pg_upgrade, which requires similar things no?\n> \n> While pg_upgrade requires having the old and new cluster software in\n> place, I don't think it helps allowing different ICU versions for each\n> cluster. I guess you can argue that if you know the user is _not_ going\n> to be using pg_upgrade, then a new ICU version should be used for the\n> new cluster.\n\nWe have nothing in core, yet, that helps with this kind of problem\nwith binary upgrades. In the last year, Julien and I worked on an\nupgrade case where a glibc upgrade was involved with pg_upgrade used\nfor PG, and it could not afford the use of a new host to allow a\nlogical dump/restore to rebuild the indexes from scratch. You can\nalways run a \"reindex -a\" after the upgrade to be sure that no indexes\nare broken because of the changes with collation versions, but once\nyou have to give the guarantee that an upgrade does not take longer\nthan a certain amount of time, the reindex easily becomes the\nbottleneck. 
That's one motivation behind the recent work to add\ncollation versions to pg_depend entries, which would lead to more\nfiltering facilities for REINDEX on the backend to get for example the\noption to only reindex collation-sensitive indexes (imagine just a\nreindexdb --jobs with the collation filtering done at table-level,\nthat would be fast, or a script doing this work generated by\npg_upgrade).\n--\nMichael", "msg_date": "Fri, 14 Aug 2020 22:23:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Fri, Aug 14, 2020 at 10:23:27PM +0900, Michael Paquier wrote:\n> We have nothing in core, yet, that helps with this kind of problem\n> with binary upgrades. In the last year, Julien and I worked on an\n> upgrade case where a glibc upgrade was involved with pg_upgrade used\n> for PG, and it could not afford the use of a new host to allow a\n> logical dump/restore to rebuild the indexes from scratch. You can\n> always run a \"reindex -a\" after the upgrade to be sure that no indexes\n> are broken because of the changes with collation versions, but once\n> you have to give the guarantee that an upgrade does not take longer\n> than a certain amount of time, the reindex easily becomes the\n> bottleneck. That's one motivation behind the recent work to add\n> collation versions to pg_depend entries, which would lead to more\n> filtering facilities for REINDEX on the backend to get for example the\n> option to only reindex collation-sensitive indexes (imagine just a\n> reindexdb --jobs with the collation filtering done at table-level,\n> that would be fast, or a script doing this work generated by\n> pg_upgrade).\n\nAgreed --- only a small percentage of indexes are affected by\ncollations, and it would be great if we could tell users how to easily\nidentify them.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 14 Aug 2020 12:42:35 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Fri, Aug 14, 2020 at 3:00 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Aug 11, 2020 at 02:58:30PM +0200, Magnus Hagander wrote:\n> > On Tue, Aug 4, 2020 at 11:42 AM Dave Page <dpage@pgadmin.org> wrote:\n> > That would require fairly large changes to the installer to allow it\n> to\n> > login to the database server (whether that would work would\n> be dependent on\n> > how pg_hba.conf is configured), and also assumes that the ICU ABI\n> hasn't\n> > changed between releases. It would also require some hacky renaming\n> of\n> > DLLs, as they have the version number in them.\n> >\n> > I assumed it had code for that stuff already. Mainly because I assumed it\n> > supported doing pg_upgrade, which requires similar things no?\n>\n> While pg_upgrade requires having the old and new cluster software in\n> place, I don't think it helps allowing different ICU versions for each\n> cluster.\n\n\nDepends on where they are installed (and disclaimer, I don't know how the\nwindows installers do that). 
But as long as the ICU libraries are installed\nin separate locations for the two versions, which I *think* they are or at\nleast used to be, it shouldn't have an effect on this in either direction.\n\nThat argument really only holds for different versions, not for different\nclusters of the same version. But I don't think the installers (natively)\nsupports multiple clusters of the same version anyway.\n\nThe tricky thing is if you want to allow the user to *choose* which ICU\nversion should be used with postgres version <x>. Because then the user\nmight also expect an upgrade-path wherein they only upgrade the icu library\non an existing install...\n\n\n> I guess you can argue that if you know the user is _not_ going\n> to be using pg_upgrade, then a new ICU version should be used for the\n> new cluster.\n>\n\nYes, that's exactly the argument I meant :) If the user got the option to\n\"pick version of ICU: <old>, <new>\", with a comment saying \"pick old only\nif you plan to do a pg_upgrade based upgrade of a different cluster, or if\nthis instance should participate in replication with a node using <old>\",\nthat would probably help for the vast majority of cases. And of course, if\nthe installer through other options can determine with certainty that it's\ngoing to be running pg_upgrade for the user, then it can reword the dialog\nbased on that (that is, it should still allow the user to pick the new\nversion, as long as they know that their indexes are going to need\nreindexing)\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Aug 14, 2020 at 3:00 PM Bruce Momjian <bruce@momjian.us> wrote:On Tue, Aug 11, 2020 at 02:58:30PM +0200, Magnus Hagander wrote:\n> On Tue, Aug 4, 2020 at 11:42 AM Dave Page <dpage@pgadmin.org> wrote:\n>     That would require fairly large changes to the installer to allow it to\n>     login to the database server (whether that would work would be dependent on\n>     how pg_hba.conf is configured), and also assumes that the ICU ABI hasn't\n>     changed between releases. It would also require some hacky renaming of\n>     DLLs, as they have the version number in them.\n> \n> I assumed it had code for that stuff already. Mainly because I assumed it\n> supported doing pg_upgrade, which requires similar things no?\n\nWhile pg_upgrade requires having the old and new cluster software in\nplace, I don't think it helps allowing different ICU versions for each\ncluster. Depends on where they are installed (and disclaimer, I don't know how the windows installers do that). But as long as the ICU libraries are installed in separate locations for the two versions, which I *think* they are or at least used to be, it shouldn't have an effect on this in either direction.That argument really only holds for different versions, not for different clusters of the same version. But I don't think the installers (natively) supports multiple clusters of the same version anyway.The tricky thing is if you want to allow the user to *choose* which ICU version should be used with postgres version <x>.  Because then the user might also expect an upgrade-path wherein they only upgrade the icu library on an existing install...  
I guess you can argue that if you know the user is _not_ going\nto be using pg_upgrade, then a new ICU version should be used for the\nnew cluster.Yes, that's exactly the argument I meant :) If the user got the option to \"pick version of ICU: <old>, <new>\", with a comment saying \"pick old only if you plan to do a pg_upgrade based upgrade of a different cluster, or if this instance should participate in replication with a node using <old>\", that would probably help for the vast majority of cases. And of course, if the installer through other options can determine with certainty that it's going to be running pg_upgrade for the user, then it can reword the dialog based on that (that is, it should still allow the user to pick the new version, as long as they know that their indexes are going to need reindexing)--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Mon, 17 Aug 2020 12:19:13 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Mon, Aug 17, 2020 at 11:19 AM Magnus Hagander <magnus@hagander.net>\nwrote:\n\n>\n>\n> On Fri, Aug 14, 2020 at 3:00 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n>> On Tue, Aug 11, 2020 at 02:58:30PM +0200, Magnus Hagander wrote:\n>> > On Tue, Aug 4, 2020 at 11:42 AM Dave Page <dpage@pgadmin.org> wrote:\n>> > That would require fairly large changes to the installer to allow\n>> it to\n>> > login to the database server (whether that would work would\n>> be dependent on\n>> > how pg_hba.conf is configured), and also assumes that the ICU ABI\n>> hasn't\n>> > changed between releases. It would also require some hacky renaming\n>> of\n>> > DLLs, as they have the version number in them.\n>> >\n>> > I assumed it had code for that stuff already. Mainly because I assumed\n>> it\n>> > supported doing pg_upgrade, which requires similar things no?\n>>\n>\nNo, the installers don't support pg_upgrade directly. They ship it of\ncourse, and the user can manually run it, but the installers won't do that,\nand have no ability to login to a cluster except during the post-initdb\nphase.\n\n\n>\n>> While pg_upgrade requires having the old and new cluster software in\n>> place, I don't think it helps allowing different ICU versions for each\n>> cluster.\n>\n>\n> Depends on where they are installed (and disclaimer, I don't know how the\n> windows installers do that). But as long as the ICU libraries are installed\n> in separate locations for the two versions, which I *think* they are or at\n> least used to be, it shouldn't have an effect on this in either direction.\n>\n\nThey are.\n\n\n>\n> That argument really only holds for different versions, not for different\n> clusters of the same version. But I don't think the installers (natively)\n> supports multiple clusters of the same version anyway.\n>\n\nThey don't. You'd need to manually init a new cluster and register a new\nserver instance. The installer only has any knowledge of the cluster it\nsets up.\n\n\n>\n> The tricky thing is if you want to allow the user to *choose* which ICU\n> version should be used with postgres version <x>. 
Because then the user\n> might also expect an upgrade-path wherein they only upgrade the icu library\n> on an existing install...\n>\n>\n>> I guess you can argue that if you know the user is _not_ going\n>> to be using pg_upgrade, then a new ICU version should be used for the\n>> new cluster.\n>>\n>\n> Yes, that's exactly the argument I meant :) If the user got the option to\n> \"pick version of ICU: <old>, <new>\", with a comment saying \"pick old only\n> if you plan to do a pg_upgrade based upgrade of a different cluster, or if\n> this instance should participate in replication with a node using <old>\",\n> that would probably help for the vast majority of cases. And of course, if\n> the installer through other options can determine with certainty that it's\n> going to be running pg_upgrade for the user, then it can reword the dialog\n> based on that (that is, it should still allow the user to pick the new\n> version, as long as they know that their indexes are going to need\n> reindexing)\n>\n\nThat seems like a very hacky and extremely user-unfriendly approach. How\nmany users are going to understand options in the installer to deal with\nthat, or want to go decode the ICU filenames on their existing\ninstallations (which our installers may not actually know about) to figure\nout what their current version is?\n\nI would suggest that the better way to handle this would be for pg_upgrade\nto (somehow) check the ICU version on the old and new clusters and if\nthere's a mismatch perform a reindex of any ICU based indexes. I suspect\nthat may require that the server exposes the ICU version though. That way,\nthe installers could freely upgrade the ICU version with a new major\nrelease.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com\n\nOn Mon, Aug 17, 2020 at 11:19 AM Magnus Hagander <magnus@hagander.net> wrote:On Fri, Aug 14, 2020 at 3:00 PM Bruce Momjian <bruce@momjian.us> wrote:On Tue, Aug 11, 2020 at 02:58:30PM +0200, Magnus Hagander wrote:\n> On Tue, Aug 4, 2020 at 11:42 AM Dave Page <dpage@pgadmin.org> wrote:\n>     That would require fairly large changes to the installer to allow it to\n>     login to the database server (whether that would work would be dependent on\n>     how pg_hba.conf is configured), and also assumes that the ICU ABI hasn't\n>     changed between releases. It would also require some hacky renaming of\n>     DLLs, as they have the version number in them.\n> \n> I assumed it had code for that stuff already. Mainly because I assumed it\n> supported doing pg_upgrade, which requires similar things no?No, the installers don't support pg_upgrade directly. They ship it of course, and the user can manually run it, but the installers won't do that, and have no ability to login to a cluster except during the post-initdb phase. \n\nWhile pg_upgrade requires having the old and new cluster software in\nplace, I don't think it helps allowing different ICU versions for each\ncluster. Depends on where they are installed (and disclaimer, I don't know how the windows installers do that). But as long as the ICU libraries are installed in separate locations for the two versions, which I *think* they are or at least used to be, it shouldn't have an effect on this in either direction.They are. That argument really only holds for different versions, not for different clusters of the same version. But I don't think the installers (natively) supports multiple clusters of the same version anyway.They don't. 
You'd need to manually init a new cluster and register a new server instance. The installer only has any knowledge of the cluster it sets up. The tricky thing is if you want to allow the user to *choose* which ICU version should be used with postgres version <x>.  Because then the user might also expect an upgrade-path wherein they only upgrade the icu library on an existing install...  I guess you can argue that if you know the user is _not_ going\nto be using pg_upgrade, then a new ICU version should be used for the\nnew cluster.Yes, that's exactly the argument I meant :) If the user got the option to \"pick version of ICU: <old>, <new>\", with a comment saying \"pick old only if you plan to do a pg_upgrade based upgrade of a different cluster, or if this instance should participate in replication with a node using <old>\", that would probably help for the vast majority of cases. And of course, if the installer through other options can determine with certainty that it's going to be running pg_upgrade for the user, then it can reword the dialog based on that (that is, it should still allow the user to pick the new version, as long as they know that their indexes are going to need reindexing)That seems like a very hacky and extremely user-unfriendly approach. How many users are going to understand options in the installer to deal with that, or want to go decode the ICU filenames on their existing installations (which our installers may not actually know about) to figure out what their current version is?I would suggest that the better way to handle this would be for pg_upgrade to (somehow) check the ICU version on the old and new clusters and if there's a mismatch perform a reindex of any ICU based indexes. I suspect that may require that the server exposes the ICU version though. That way, the installers could freely upgrade the ICU version with a new major release.-- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEDB: http://www.enterprisedb.com", "msg_date": "Mon, 17 Aug 2020 12:44:43 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Mon, Aug 17, 2020 at 1:44 PM Dave Page <dpage@pgadmin.org> wrote:\n\n>\n>\n> On Mon, Aug 17, 2020 at 11:19 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n>\n>>\n>>\n>> On Fri, Aug 14, 2020 at 3:00 PM Bruce Momjian <bruce@momjian.us> wrote:\n>>\n>>> On Tue, Aug 11, 2020 at 02:58:30PM +0200, Magnus Hagander wrote:\n>>> > On Tue, Aug 4, 2020 at 11:42 AM Dave Page <dpage@pgadmin.org> wrote:\n>>> > That would require fairly large changes to the installer to allow\n>>> it to\n>>> > login to the database server (whether that would work would\n>>> be dependent on\n>>> > how pg_hba.conf is configured), and also assumes that the ICU ABI\n>>> hasn't\n>>> > changed between releases. It would also require some hacky\n>>> renaming of\n>>> > DLLs, as they have the version number in them.\n>>> >\n>>> > I assumed it had code for that stuff already. Mainly because I assumed\n>>> it\n>>> > supported doing pg_upgrade, which requires similar things no?\n>>>\n>>\n> No, the installers don't support pg_upgrade directly. 
They ship it of\n> course, and the user can manually run it, but the installers won't do that,\n> and have no ability to login to a cluster except during the post-initdb\n> phase.\n>\n\nOh, I just assumed it did :)\n\nIf it doesn't, I think shipping with a modern ICU is a much smaller problem\nreally...\n\n\nWhile pg_upgrade requires having the old and new cluster software in\n>>> place, I don't think it helps allowing different ICU versions for each\n>>> cluster.\n>>\n>>\n>> Depends on where they are installed (and disclaimer, I don't know how the\n>> windows installers do that). But as long as the ICU libraries are installed\n>> in separate locations for the two versions, which I *think* they are or at\n>> least used to be, it shouldn't have an effect on this in either direction.\n>>\n>\n> They are.\n>\n\nGood. So putting both in wouldn't break things.\n\n\n\nThat argument really only holds for different versions, not for different\n>> clusters of the same version. But I don't think the installers (natively)\n>> supports multiple clusters of the same version anyway.\n>>\n>\n> They don't. You'd need to manually init a new cluster and register a new\n> server instance. The installer only has any knowledge of the cluster it\n> sets up.\n>\n\nI'd say that's \"unsupported enough\" to not be a scenario one has to\nconsider.\n\n\n\n>> The tricky thing is if you want to allow the user to *choose* which ICU\n>> version should be used with postgres version <x>. Because then the user\n>> might also expect an upgrade-path wherein they only upgrade the icu library\n>> on an existing install...\n>>\n>>\n>>> I guess you can argue that if you know the user is _not_ going\n>>> to be using pg_upgrade, then a new ICU version should be used for the\n>>> new cluster.\n>>>\n>>\n>> Yes, that's exactly the argument I meant :) If the user got the option to\n>> \"pick version of ICU: <old>, <new>\", with a comment saying \"pick old only\n>> if you plan to do a pg_upgrade based upgrade of a different cluster, or if\n>> this instance should participate in replication with a node using <old>\",\n>> that would probably help for the vast majority of cases. And of course, if\n>> the installer through other options can determine with certainty that it's\n>> going to be running pg_upgrade for the user, then it can reword the dialog\n>> based on that (that is, it should still allow the user to pick the new\n>> version, as long as they know that their indexes are going to need\n>> reindexing)\n>>\n>\n> That seems like a very hacky and extremely user-unfriendly approach. How\n> many users are going to understand options in the installer to deal with\n> that, or want to go decode the ICU filenames on their existing\n> installations (which our installers may not actually know about) to figure\n> out what their current version is?\n>\n\n\nThat was more if the installer actually handles the whole chain. It clearly\ndoesn't today (since it doesn't support upgrades), I agree this might\ndefinitely be overkill. But then also I don't really see the problem with\njust putting a new version of ICU in with the newer versions of PostgreSQL.\nThat's just puts the user in the same position as they are with any other\nplatform wrt manual pg_upgrade runs.\n\n\n\n>\n> I would suggest that the better way to handle this would be for pg_upgrade\n> to (somehow) check the ICU version on the old and new clusters and if\n> there's a mismatch perform a reindex of any ICU based indexes. 
I suspect\n> that may require that the server exposes the ICU version though. That way,\n> the installers could freely upgrade the ICU version with a new major\n> release.\n>\n\nHaving pg_upgrade spit out a script that does reindex specifically on the\nindexes required would certainly be useful in the generic case, and help\nothers as well.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, Aug 17, 2020 at 1:44 PM Dave Page <dpage@pgadmin.org> wrote:On Mon, Aug 17, 2020 at 11:19 AM Magnus Hagander <magnus@hagander.net> wrote:On Fri, Aug 14, 2020 at 3:00 PM Bruce Momjian <bruce@momjian.us> wrote:On Tue, Aug 11, 2020 at 02:58:30PM +0200, Magnus Hagander wrote:\n> On Tue, Aug 4, 2020 at 11:42 AM Dave Page <dpage@pgadmin.org> wrote:\n>     That would require fairly large changes to the installer to allow it to\n>     login to the database server (whether that would work would be dependent on\n>     how pg_hba.conf is configured), and also assumes that the ICU ABI hasn't\n>     changed between releases. It would also require some hacky renaming of\n>     DLLs, as they have the version number in them.\n> \n> I assumed it had code for that stuff already. Mainly because I assumed it\n> supported doing pg_upgrade, which requires similar things no?No, the installers don't support pg_upgrade directly. They ship it of course, and the user can manually run it, but the installers won't do that, and have no ability to login to a cluster except during the post-initdb phase.Oh, I just assumed it did :)If it doesn't, I think shipping with a modern ICU is a much smaller problem really...\nWhile pg_upgrade requires having the old and new cluster software in\nplace, I don't think it helps allowing different ICU versions for each\ncluster. Depends on where they are installed (and disclaimer, I don't know how the windows installers do that). But as long as the ICU libraries are installed in separate locations for the two versions, which I *think* they are or at least used to be, it shouldn't have an effect on this in either direction.They are.Good. So putting both in wouldn't break things.That argument really only holds for different versions, not for different clusters of the same version. But I don't think the installers (natively) supports multiple clusters of the same version anyway.They don't. You'd need to manually init a new cluster and register a new server instance. The installer only has any knowledge of the cluster it sets up.I'd say that's \"unsupported enough\" to not be a scenario one has to consider.The tricky thing is if you want to allow the user to *choose* which ICU version should be used with postgres version <x>.  Because then the user might also expect an upgrade-path wherein they only upgrade the icu library on an existing install...  I guess you can argue that if you know the user is _not_ going\nto be using pg_upgrade, then a new ICU version should be used for the\nnew cluster.Yes, that's exactly the argument I meant :) If the user got the option to \"pick version of ICU: <old>, <new>\", with a comment saying \"pick old only if you plan to do a pg_upgrade based upgrade of a different cluster, or if this instance should participate in replication with a node using <old>\", that would probably help for the vast majority of cases. 
And of course, if the installer through other options can determine with certainty that it's going to be running pg_upgrade for the user, then it can reword the dialog based on that (that is, it should still allow the user to pick the new version, as long as they know that their indexes are going to need reindexing)That seems like a very hacky and extremely user-unfriendly approach. How many users are going to understand options in the installer to deal with that, or want to go decode the ICU filenames on their existing installations (which our installers may not actually know about) to figure out what their current version is?That was more if the installer actually handles the whole chain. It clearly doesn't today (since it doesn't support upgrades), I agree this might definitely be overkill. But then also I don't really see the problem with just putting a new version of ICU in with the newer versions of PostgreSQL. That's just puts the user in the same position as they are with any other platform wrt manual pg_upgrade runs. I would suggest that the better way to handle this would be for pg_upgrade to (somehow) check the ICU version on the old and new clusters and if there's a mismatch perform a reindex of any ICU based indexes. I suspect that may require that the server exposes the ICU version though. That way, the installers could freely upgrade the ICU version with a new major release.Having pg_upgrade spit out a script that does reindex specifically on the indexes required would certainly be useful in the generic case, and help others as well.--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Mon, 17 Aug 2020 17:14:46 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Mon, Aug 17, 2020 at 4:14 PM Magnus Hagander <magnus@hagander.net> wrote:\n\n>\n>\n> On Mon, Aug 17, 2020 at 1:44 PM Dave Page <dpage@pgadmin.org> wrote:\n>\n>>\n>>\n>> On Mon, Aug 17, 2020 at 11:19 AM Magnus Hagander <magnus@hagander.net>\n>> wrote:\n>>\n>>>\n>>>\n>>> On Fri, Aug 14, 2020 at 3:00 PM Bruce Momjian <bruce@momjian.us> wrote:\n>>>\n>>>> On Tue, Aug 11, 2020 at 02:58:30PM +0200, Magnus Hagander wrote:\n>>>> > On Tue, Aug 4, 2020 at 11:42 AM Dave Page <dpage@pgadmin.org> wrote:\n>>>> > That would require fairly large changes to the installer to allow\n>>>> it to\n>>>> > login to the database server (whether that would work would\n>>>> be dependent on\n>>>> > how pg_hba.conf is configured), and also assumes that the ICU ABI\n>>>> hasn't\n>>>> > changed between releases. It would also require some hacky\n>>>> renaming of\n>>>> > DLLs, as they have the version number in them.\n>>>> >\n>>>> > I assumed it had code for that stuff already. Mainly because I\n>>>> assumed it\n>>>> > supported doing pg_upgrade, which requires similar things no?\n>>>>\n>>>\n>> No, the installers don't support pg_upgrade directly. 
They ship it of\n>> course, and the user can manually run it, but the installers won't do that,\n>> and have no ability to login to a cluster except during the post-initdb\n>> phase.\n>>\n>\n> Oh, I just assumed it did :)\n>\n> If it doesn't, I think shipping with a modern ICU is a much smaller\n> problem really...\n>\n>\n> While pg_upgrade requires having the old and new cluster software in\n>>>> place, I don't think it helps allowing different ICU versions for each\n>>>> cluster.\n>>>\n>>>\n>>> Depends on where they are installed (and disclaimer, I don't know how\n>>> the windows installers do that). But as long as the ICU libraries are\n>>> installed in separate locations for the two versions, which I *think* they\n>>> are or at least used to be, it shouldn't have an effect on this in either\n>>> direction.\n>>>\n>>\n>> They are.\n>>\n>\n> Good. So putting both in wouldn't break things.\n>\n>\n>\n> That argument really only holds for different versions, not for different\n>>> clusters of the same version. But I don't think the installers (natively)\n>>> supports multiple clusters of the same version anyway.\n>>>\n>>\n>> They don't. You'd need to manually init a new cluster and register a new\n>> server instance. The installer only has any knowledge of the cluster it\n>> sets up.\n>>\n>\n> I'd say that's \"unsupported enough\" to not be a scenario one has to\n> consider.\n>\n\nAgreed. Plus it's not really any different from running multiple clusters\non other OSs where we're likely to be using a vendor supplied ICU that the\nuser also couldn't change easily.\n\n\n>\n>\n>\n>>> The tricky thing is if you want to allow the user to *choose* which ICU\n>>> version should be used with postgres version <x>. Because then the user\n>>> might also expect an upgrade-path wherein they only upgrade the icu library\n>>> on an existing install...\n>>>\n>>>\n>>>> I guess you can argue that if you know the user is _not_ going\n>>>> to be using pg_upgrade, then a new ICU version should be used for the\n>>>> new cluster.\n>>>>\n>>>\n>>> Yes, that's exactly the argument I meant :) If the user got the option\n>>> to \"pick version of ICU: <old>, <new>\", with a comment saying \"pick old\n>>> only if you plan to do a pg_upgrade based upgrade of a different cluster,\n>>> or if this instance should participate in replication with a node using\n>>> <old>\", that would probably help for the vast majority of cases. And of\n>>> course, if the installer through other options can determine with certainty\n>>> that it's going to be running pg_upgrade for the user, then it can reword\n>>> the dialog based on that (that is, it should still allow the user to pick\n>>> the new version, as long as they know that their indexes are going to need\n>>> reindexing)\n>>>\n>>\n>> That seems like a very hacky and extremely user-unfriendly approach. How\n>> many users are going to understand options in the installer to deal with\n>> that, or want to go decode the ICU filenames on their existing\n>> installations (which our installers may not actually know about) to figure\n>> out what their current version is?\n>>\n>\n>\n> That was more if the installer actually handles the whole chain. It\n> clearly doesn't today (since it doesn't support upgrades), I agree this\n> might definitely be overkill. But then also I don't really see the problem\n> with just putting a new version of ICU in with the newer versions of\n> PostgreSQL. 
That's just puts the user in the same position as they are with\n> any other platform wrt manual pg_upgrade runs.\n>\n\nWell we can certainly do that if everyone is happy in the knowledge that\nit'll mean pg_upgrade users will need to reindex if they've used ICU\ncollations.\n\nSandeep; can you have someone do a test build with the latest ICU please\n(for background, this would be with the Windows and Mac installers)? If\nnoone objects, we can push that into the v13 builds before GA. We'd also\nneed to update the README if we do so.\n\n\n>\n>\n>\n>>\n>> I would suggest that the better way to handle this would be for\n>> pg_upgrade to (somehow) check the ICU version on the old and new clusters\n>> and if there's a mismatch perform a reindex of any ICU based indexes. I\n>> suspect that may require that the server exposes the ICU version though.\n>> That way, the installers could freely upgrade the ICU version with a new\n>> major release.\n>>\n>\n> Having pg_upgrade spit out a script that does reindex specifically on the\n> indexes required would certainly be useful in the generic case, and help\n> others as well.\n>\n\n+1\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com\n\nOn Mon, Aug 17, 2020 at 4:14 PM Magnus Hagander <magnus@hagander.net> wrote:On Mon, Aug 17, 2020 at 1:44 PM Dave Page <dpage@pgadmin.org> wrote:On Mon, Aug 17, 2020 at 11:19 AM Magnus Hagander <magnus@hagander.net> wrote:On Fri, Aug 14, 2020 at 3:00 PM Bruce Momjian <bruce@momjian.us> wrote:On Tue, Aug 11, 2020 at 02:58:30PM +0200, Magnus Hagander wrote:\n> On Tue, Aug 4, 2020 at 11:42 AM Dave Page <dpage@pgadmin.org> wrote:\n>     That would require fairly large changes to the installer to allow it to\n>     login to the database server (whether that would work would be dependent on\n>     how pg_hba.conf is configured), and also assumes that the ICU ABI hasn't\n>     changed between releases. It would also require some hacky renaming of\n>     DLLs, as they have the version number in them.\n> \n> I assumed it had code for that stuff already. Mainly because I assumed it\n> supported doing pg_upgrade, which requires similar things no?No, the installers don't support pg_upgrade directly. They ship it of course, and the user can manually run it, but the installers won't do that, and have no ability to login to a cluster except during the post-initdb phase.Oh, I just assumed it did :)If it doesn't, I think shipping with a modern ICU is a much smaller problem really...\nWhile pg_upgrade requires having the old and new cluster software in\nplace, I don't think it helps allowing different ICU versions for each\ncluster. Depends on where they are installed (and disclaimer, I don't know how the windows installers do that). But as long as the ICU libraries are installed in separate locations for the two versions, which I *think* they are or at least used to be, it shouldn't have an effect on this in either direction.They are.Good. So putting both in wouldn't break things.That argument really only holds for different versions, not for different clusters of the same version. But I don't think the installers (natively) supports multiple clusters of the same version anyway.They don't. You'd need to manually init a new cluster and register a new server instance. The installer only has any knowledge of the cluster it sets up.I'd say that's \"unsupported enough\" to not be a scenario one has to consider.Agreed. 
Plus it's not really any different from running multiple clusters on other OSs where we're likely to be using a vendor supplied ICU that the user also couldn't change easily. The tricky thing is if you want to allow the user to *choose* which ICU version should be used with postgres version <x>.  Because then the user might also expect an upgrade-path wherein they only upgrade the icu library on an existing install...  I guess you can argue that if you know the user is _not_ going\nto be using pg_upgrade, then a new ICU version should be used for the\nnew cluster.Yes, that's exactly the argument I meant :) If the user got the option to \"pick version of ICU: <old>, <new>\", with a comment saying \"pick old only if you plan to do a pg_upgrade based upgrade of a different cluster, or if this instance should participate in replication with a node using <old>\", that would probably help for the vast majority of cases. And of course, if the installer through other options can determine with certainty that it's going to be running pg_upgrade for the user, then it can reword the dialog based on that (that is, it should still allow the user to pick the new version, as long as they know that their indexes are going to need reindexing)That seems like a very hacky and extremely user-unfriendly approach. How many users are going to understand options in the installer to deal with that, or want to go decode the ICU filenames on their existing installations (which our installers may not actually know about) to figure out what their current version is?That was more if the installer actually handles the whole chain. It clearly doesn't today (since it doesn't support upgrades), I agree this might definitely be overkill. But then also I don't really see the problem with just putting a new version of ICU in with the newer versions of PostgreSQL. That's just puts the user in the same position as they are with any other platform wrt manual pg_upgrade runs.Well we can certainly do that if everyone is happy in the knowledge that it'll mean pg_upgrade users will need to reindex if they've used ICU collations.Sandeep; can you have someone do a test build with the latest ICU please (for background, this would be with the Windows and Mac installers)? If noone objects, we can push that into the v13 builds before GA. We'd also need to update the README if we do so.  I would suggest that the better way to handle this would be for pg_upgrade to (somehow) check the ICU version on the old and new clusters and if there's a mismatch perform a reindex of any ICU based indexes. I suspect that may require that the server exposes the ICU version though. That way, the installers could freely upgrade the ICU version with a new major release.Having pg_upgrade spit out a script that does reindex specifically on the indexes required would certainly be useful in the generic case, and help others as well.+1 -- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEDB: http://www.enterprisedb.com", "msg_date": "Mon, 17 Aug 2020 16:55:13 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Mon, Aug 17, 2020 at 04:55:13PM +0100, Dave Page wrote:\n> That was more if the installer actually handles the whole chain. It clearly\n> doesn't today (since it doesn't support upgrades), I agree this might\n> definitely be overkill. 
But then also I don't really see the problem with\n> just putting a new version of ICU in with the newer versions of PostgreSQL.\n> That's just puts the user in the same position as they are with any other\n> platform wrt manual pg_upgrade runs.\n> \n> Well we can certainly do that if everyone is happy in the knowledge that it'll\n> mean pg_upgrade users will need to reindex if they've used ICU collations.\n> \n> Sandeep; can you have someone do a test build with the latest ICU please (for\n> background, this would be with the Windows and Mac installers)? If noone\n> objects, we can push that into the v13 builds before GA. We'd also need to\n> update the README if we do so.\n\nWoh, we don't have any support in pg_upgrade to reindex just indexes\nthat use ICU collations, and frankly, if they have to reindex, they\nmight decide that they should just do pg_dump/reload of their cluster at\nthat point because pg_upgrade is going to be very slow, and they will be\nsurprised. I can see a lot more people being disappointed by this than\nwill be happy to have Postgres using a newer ICU library.\n\nAlso, is it the ICU library version we should be tracking for reindex,\nor each _collation_ version? If the later, do we store the collation\nversion for each index?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 17 Aug 2020 14:23:57 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Mon, Aug 17, 2020 at 02:23:57PM -0400, Bruce Momjian wrote:\n> Also, is it the ICU library version we should be tracking for reindex,\n> or each _collation_ version? If the later, do we store the collation\n> version for each index?\n\nYou need to store the collation version(s) for each index. This\nthread deals with the problem:\nhttps://commitfest.postgresql.org/29/2367/\nhttps://www.postgresql.org/message-id/CAEepm%3D0uEQCpfq_%2BLYFBdArCe4Ot98t1aR4eYiYTe%3DyavQygiQ%40mail.gmail.com\n\nThat's not all of it as you would still need some filtering\ncapabilities in the backend to reindex only the collation-sensitive\nindexes with a reindex, but that's one step forward into being able to\ndo that.\n--\nMichael", "msg_date": "Tue, 18 Aug 2020 09:44:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Tue, Aug 18, 2020 at 09:44:35AM +0900, Michael Paquier wrote:\n> On Mon, Aug 17, 2020 at 02:23:57PM -0400, Bruce Momjian wrote:\n> > Also, is it the ICU library version we should be tracking for reindex,\n> > or each _collation_ version? If the later, do we store the collation\n> > version for each index?\n> \n> You need to store the collation version(s) for each index. This\n> thread deals with the problem:\n> https://commitfest.postgresql.org/29/2367/\n> https://www.postgresql.org/message-id/CAEepm%3D0uEQCpfq_%2BLYFBdArCe4Ot98t1aR4eYiYTe%3DyavQygiQ%40mail.gmail.com\n> \n> That's not all of it as you would still need some filtering\n> capabilities in the backend to reindex only the collation-sensitive\n> indexes with a reindex, but that's one step forward into being able to\n> do that.\n\nOh, we don't even have the version in the system catalogs yet? 
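We do track a version per collation in pg_collation.collversion, which\ncan be compared with what the library currently reports --- a quick\nsketch, assuming an ICU-enabled build:\n\n    -- ICU collations whose recorded version no longer matches the library\n    SELECT collname, collversion,\n           pg_collation_actual_version(oid) AS library_version\n    FROM pg_collation\n    WHERE collprovider = 'i'\n      AND collversion IS DISTINCT FROM pg_collation_actual_version(oid);\n\nNothing ties a version to each index though, which seems to be the\nmissing piece. 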
I guess\nwhen pg_upgrade runs create_index we could grab it then, and for the\npg_upgrade _after_ that, do the checks. This seems like it is years\naway from being useful.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 17 Aug 2020 20:47:28 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Mon, Aug 17, 2020 at 7:23 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Mon, Aug 17, 2020 at 04:55:13PM +0100, Dave Page wrote:\n> > That was more if the installer actually handles the whole chain. It\n> clearly\n> > doesn't today (since it doesn't support upgrades), I agree this might\n> > definitely be overkill. But then also I don't really see the problem\n> with\n> > just putting a new version of ICU in with the newer versions of\n> PostgreSQL.\n> > That's just puts the user in the same position as they are with any\n> other\n> > platform wrt manual pg_upgrade runs.\n> >\n> > Well we can certainly do that if everyone is happy in the knowledge that\n> it'll\n> > mean pg_upgrade users will need to reindex if they've used ICU\n> collations.\n> >\n> > Sandeep; can you have someone do a test build with the latest ICU please\n> (for\n> > background, this would be with the Windows and Mac installers)? If noone\n> > objects, we can push that into the v13 builds before GA. We'd also need\n> to\n> > update the README if we do so.\n>\n> Woh, we don't have any support in pg_upgrade to reindex just indexes\n> that use ICU collations, and frankly, if they have to reindex, they\n> might decide that they should just do pg_dump/reload of their cluster at\n> that point because pg_upgrade is going to be very slow, and they will be\n> surprised.\n\n\nNot necessarily. It's likely that not all indexes use ICU collations, and\nyou still save time loading what may be large amounts of data.\n\nI agree though, that it *could* be slow.\n\n\n> I can see a lot more people being disappointed by this than\n> will be happy to have Postgres using a newer ICU library.\n>\n\nQuite possibly, hence my hesitation to push ahead with anything more than a\nsimple test build at this time.\n\n\n>\n> Also, is it the ICU library version we should be tracking for reindex,\n> or each _collation_ version? If the later, do we store the collation\n> version for each index?\n>\n\nI wasn't aware that ICU had the concept of collation versions internally\n(which Michael seems to have confirmed downthread). That would potentially\nmake the number of users needing a reindex even smaller, but as you point\nout won't help us for years as we don't store it anyway.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com\n\nOn Mon, Aug 17, 2020 at 7:23 PM Bruce Momjian <bruce@momjian.us> wrote:On Mon, Aug 17, 2020 at 04:55:13PM +0100, Dave Page wrote:\n>     That was more if the installer actually handles the whole chain. It clearly\n>     doesn't today (since it doesn't support upgrades), I agree this might\n>     definitely be overkill. 
But then also I don't really see the problem with\n>     just putting a new version of ICU in with the newer versions of PostgreSQL.\n>     That's just puts the user in the same position as they are with any other\n>     platform wrt manual pg_upgrade runs.\n> \n> Well we can certainly do that if everyone is happy in the knowledge that it'll\n> mean pg_upgrade users will need to reindex if they've used ICU collations.\n> \n> Sandeep; can you have someone do a test build with the latest ICU please (for\n> background, this would be with the Windows and Mac installers)? If noone\n> objects, we can push that into the v13 builds before GA. We'd also need to\n> update the README if we do so.\n\nWoh, we don't have any support in pg_upgrade to reindex just indexes\nthat use ICU collations, and frankly, if they have to reindex, they\nmight decide that they should just do pg_dump/reload of their cluster at\nthat point because pg_upgrade is going to be very slow, and they will be\nsurprised.  Not necessarily. It's likely that not all indexes use ICU collations, and you still save time loading what may be large amounts of data.I agree though, that it *could* be slow. I can see a lot more people being disappointed by this than\nwill be happy to have Postgres using a newer ICU library.Quite possibly, hence my hesitation to push ahead with anything more than a simple test build at this time. \n\nAlso, is it the ICU library version we should be tracking for reindex,\nor each _collation_ version?  If the later, do we store the collation\nversion for each index?I wasn't aware that ICU had the concept of collation versions internally (which Michael seems to have confirmed downthread). That would potentially make the number of users needing a reindex even smaller, but as you point out won't help us for years as we don't store it anyway. -- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEDB: http://www.enterprisedb.com", "msg_date": "Tue, 18 Aug 2020 10:24:42 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Tue, Aug 18, 2020 at 11:24 AM Dave Page <dpage@pgadmin.org> wrote:\n\n>\n>\n> On Mon, Aug 17, 2020 at 7:23 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n>> On Mon, Aug 17, 2020 at 04:55:13PM +0100, Dave Page wrote:\n>> > That was more if the installer actually handles the whole chain. It\n>> clearly\n>> > doesn't today (since it doesn't support upgrades), I agree this\n>> might\n>> > definitely be overkill. But then also I don't really see the\n>> problem with\n>> > just putting a new version of ICU in with the newer versions of\n>> PostgreSQL.\n>> > That's just puts the user in the same position as they are with any\n>> other\n>> > platform wrt manual pg_upgrade runs.\n>> >\n>> > Well we can certainly do that if everyone is happy in the knowledge\n>> that it'll\n>> > mean pg_upgrade users will need to reindex if they've used ICU\n>> collations.\n>> >\n>> > Sandeep; can you have someone do a test build with the latest ICU\n>> please (for\n>> > background, this would be with the Windows and Mac installers)? If noone\n>> > objects, we can push that into the v13 builds before GA. 
We'd also need\n>> to\n>> > update the README if we do so.\n>>\n>> Woh, we don't have any support in pg_upgrade to reindex just indexes\n>> that use ICU collations, and frankly, if they have to reindex, they\n>> might decide that they should just do pg_dump/reload of their cluster at\n>> that point because pg_upgrade is going to be very slow, and they will be\n>> surprised.\n>\n>\n> Not necessarily. It's likely that not all indexes use ICU collations, and\n> you still save time loading what may be large amounts of data.\n>\n> I agree though, that it *could* be slow.\n>\n\nI agree it definitely could, but I'm not sure I see any case where it would\nactually be slower than the alternative (which would be dump/reload).\n\nI'm also willing to say that given that (1) the windows installers don't\nprovide a way to do it automatically, and (2) the \"commandline challenge\"\nof running pg_upgrade on WIndows in general, I bet there's a larger\npercentage of users who are using dump/reload rather than pg_upgrade on\nWindows than on other platforms already in the first place.\n\n\n\n> I can see a lot more people being disappointed by this than\n>> will be happy to have Postgres using a newer ICU library.\n>>\n>\n> Quite possibly, hence my hesitation to push ahead with anything more than\n> a simple test build at this time.\n>\n\nMy guess would be in the other direction :) But in particular, the vast\nmajority probably don't care at all, because they're not using ICU\ncollations.\n\nIt might be a slightly larger percentage on Windows who use it, but I'm\nwilling to bet it's still quite low.\n\n\nAlso, is it the ICU library version we should be tracking for reindex,\n>> or each _collation_ version? If the later, do we store the collation\n>> version for each index?\n>>\n>\n> I wasn't aware that ICU had the concept of collation versions internally\n> (which Michael seems to have confirmed downthread). That would potentially\n> make the number of users needing a reindex even smaller, but as you point\n> out won't help us for years as we don't store it anyway.\n>\n\nIt does -- and we track it in pg_collation at this point.\n\nI think the part that Michael is referring to is we don't track enough\ndetails on a per-index basis. The suggested changes (in the separate\nthread) are to get rid of it from pg_collation and move it to a per-object\ndependency.\n\n(And fwiw contains a patch to pg_upgrade to at least give it the ability to\nfor all old indexes say \"i know that my icu is compatible\". But yeah, the\nfull functionality won't be available until upgrading *from* 14)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Aug 18, 2020 at 11:24 AM Dave Page <dpage@pgadmin.org> wrote:On Mon, Aug 17, 2020 at 7:23 PM Bruce Momjian <bruce@momjian.us> wrote:On Mon, Aug 17, 2020 at 04:55:13PM +0100, Dave Page wrote:\n>     That was more if the installer actually handles the whole chain. It clearly\n>     doesn't today (since it doesn't support upgrades), I agree this might\n>     definitely be overkill. 
But then also I don't really see the problem with\n>     just putting a new version of ICU in with the newer versions of PostgreSQL.\n>     That's just puts the user in the same position as they are with any other\n>     platform wrt manual pg_upgrade runs.\n> \n> Well we can certainly do that if everyone is happy in the knowledge that it'll\n> mean pg_upgrade users will need to reindex if they've used ICU collations.\n> \n> Sandeep; can you have someone do a test build with the latest ICU please (for\n> background, this would be with the Windows and Mac installers)? If noone\n> objects, we can push that into the v13 builds before GA. We'd also need to\n> update the README if we do so.\n\nWoh, we don't have any support in pg_upgrade to reindex just indexes\nthat use ICU collations, and frankly, if they have to reindex, they\nmight decide that they should just do pg_dump/reload of their cluster at\nthat point because pg_upgrade is going to be very slow, and they will be\nsurprised.  Not necessarily. It's likely that not all indexes use ICU collations, and you still save time loading what may be large amounts of data.I agree though, that it *could* be slow.I agree it definitely could, but I'm not sure I see any case where it would actually be slower than the alternative (which would be dump/reload).I'm also willing to say that given that (1) the windows installers don't provide a way to do it automatically, and (2) the \"commandline challenge\" of running pg_upgrade on WIndows in general, I bet there's a larger percentage of users who are using dump/reload rather than pg_upgrade on Windows than on other platforms already in the first place. I can see a lot more people being disappointed by this than\nwill be happy to have Postgres using a newer ICU library.Quite possibly, hence my hesitation to push ahead with anything more than a simple test build at this time.My guess would be in the other direction :) But in particular, the vast majority probably don't care at all, because they're not using ICU collations.It might be a slightly larger percentage on Windows who use it, but I'm willing to bet it's still quite low.\nAlso, is it the ICU library version we should be tracking for reindex,\nor each _collation_ version?  If the later, do we store the collation\nversion for each index?I wasn't aware that ICU had the concept of collation versions internally (which Michael seems to have confirmed downthread). That would potentially make the number of users needing a reindex even smaller, but as you point out won't help us for years as we don't store it anyway. It does -- and we track it in pg_collation at this point.I think the part that Michael is referring to is we don't track enough details on a per-index basis. The suggested changes (in the separate thread) are to get rid of it from pg_collation and move it to a per-object dependency.(And fwiw contains a patch to pg_upgrade to at least give it the ability to for all old indexes say \"i know that my icu is compatible\". 
But yeah, the full functionality won't be available until upgrading *from* 14)--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Tue, 18 Aug 2020 11:38:38 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "Magnus Hagander schrieb am 18.08.2020 um 11:38:\n> It might be a slightly larger percentage on Windows who use it, but\n> I'm willing to bet it's still quite low.\n\nI have seen increasingly more questions around ICU collations on Windows due to the fact that people that migrate from SQL Server to Postgres very often keep Windows as the operating system and they want to get SQL Server's case-insensitivity (at least partially) using ICU collations.\n\nThomas\n\n\n\n\n", "msg_date": "Tue, 18 Aug 2020 11:54:05 +0200", "msg_from": "Thomas Kellerer <shammat@gmx.net>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Tue, Aug 18, 2020 at 11:39 AM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Tue, Aug 18, 2020 at 11:24 AM Dave Page <dpage@pgadmin.org> wrote:\n>>\n>> On Mon, Aug 17, 2020 at 7:23 PM Bruce Momjian <bruce@momjian.us> wrote:\n>>>\n>>> On Mon, Aug 17, 2020 at 04:55:13PM +0100, Dave Page wrote:\n>> I wasn't aware that ICU had the concept of collation versions internally (which Michael seems to have confirmed downthread). That would potentially make the number of users needing a reindex even smaller, but as you point out won't help us for years as we don't store it anyway.\n>\n> It does -- and we track it in pg_collation at this point.\n>\n> I think the part that Michael is referring to is we don't track enough details on a per-index basis. The suggested changes (in the separate thread) are to get rid of it from pg_collation and move it to a per-object dependency.\n>\n> (And fwiw contains a patch to pg_upgrade to at least give it the ability to for all old indexes say \"i know that my icu is compatible\". But yeah, the full functionality won't be available until upgrading *from* 14)\n\nIndeed, when upgrading from something older than 14, all indexes would\nbe marked as depending on an unknown collation version as in possibly\ncorrupted.\n\n\n", "msg_date": "Tue, 18 Aug 2020 11:58:31 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" }, { "msg_contents": "On Tue, Aug 18, 2020 at 11:38:38AM +0200, Magnus Hagander wrote:\n> On Tue, Aug 18, 2020 at 11:24 AM Dave Page <dpage@pgadmin.org> wrote:\n> Not necessarily. It's likely that not all indexes use ICU collations, and\n> you still save time loading what may be large amounts of data.\n> \n> I agree though, that it *could* be slow.\n> \n> I agree it definitely could, but I'm not sure I see any case where it would\n> actually be slower than the alternative (which would be dump/reload).\n\nWell, given that pg_upgrade is more complex to run than pg_dump/reload,\nyou then have to weigh the complexity of using pg_upgrade with index\nrebuild vs. the simpler pg_dump. 
Right now, you know pg_upgrade in link\nmode is going to be fast, but with the reindex, you have a much more\ncomplex decision to make.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 18 Aug 2020 11:13:27 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: EDB builds Postgres 13 with an obsolete ICU version" } ]
[ { "msg_contents": "I've been working on the ability to detach a partition from a\npartitioned table, without causing blockages to concurrent activity.\nI think this operation is critical for some use cases.\n\nThere was a lot of great discussion which ended up in Robert completing\na much sought implementation of non-blocking ATTACH. DETACH was\ndiscussed too because it was a goal initially, but eventually dropped\nfrom that patch altogether. Nonetheless, that thread provided a lot of\nuseful input to this implementation. Important ones:\n\n[1] https://postgr.es/m/CA+TgmoYg4x7AH=_QSptvuBKf+3hUdiCa4frPkt+RvXZyjX1n=w@mail.gmail.com\n[2] https://postgr.es/m/CA+TgmoaAjkTibkEr=xJg3ndbRsHHSiYi2SJgX69MVosj=LJmug@mail.gmail.com\n[3] https://postgr.es/m/CA+TgmoY13KQZF-=HNTrt9UYWYx3_oYOQpu9ioNT49jGgiDpUEA@mail.gmail.com\n\nAttached is a patch that implements\nALTER TABLE ... DETACH PARTITION .. CONCURRENTLY.\n\nIn the previous thread we were able to implement the concurrent model\nwithout the extra keyword. For this one I think that won't work; my\nimplementation works in two transactions so there's a restriction that\nyou can't run it in a transaction block. Also, there's a wait phase\nthat makes it slower than the non-concurrent one. Those two drawbacks\nmake me think that it's better to keep both modes available, just like\nwe offer both CREATE INDEX and CREATE INDEX CONCURRENTLY.\n\nWhy two transactions? The reason is that in order for this to work, we\nmake a catalog change (mark it detached), and commit so that all\nconcurrent transactions can see the change. A second transaction waits\nfor anybody who holds any lock on the partitioned table and grabs Access\nExclusive on the partition (which now no one cares about, if they're\nlooking at the partitioned table), where the DDL action on the partition\ncan be completed.\n\nALTER TABLE is normally unable to run in two transactions. I hacked it\n(0001) so that the relation can be closed and reopened in the Exec phase\n(by having the rel as part of AlteredTableInfo: when ATRewriteCatalogs\nreturns, it uses that pointer to close the rel). It turns out that this\nis sufficient to make that work. This means that ALTER TABLE DETACH\nCONCURRENTLY cannot work as part of a multi-command ALTER TABLE, but\nthat's already enforced by the grammar anyway.\n\nDETACH CONCURRENTLY doesn't work if a default partition exists. It's\njust too problematic a case; you would still need to have AEL on the\ndefault partition.\n\n\nI haven't yet experimented with queries running in a standby in tandem\nwith a detach.\n\n-- \nÁlvaro Herrera", "msg_date": "Mon, 3 Aug 2020 19:48:54 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2020-Aug-03, Alvaro Herrera wrote:\n\n> There was a lot of great discussion which ended up in Robert completing\n> a much sought implementation of non-blocking ATTACH. DETACH was\n> discussed too because it was a goal initially, but eventually dropped\n> from that patch altogether. Nonetheless, that thread provided a lot of\n> useful input to this implementation. 
Important ones:\n> \n> [1] https://postgr.es/m/CA+TgmoYg4x7AH=_QSptvuBKf+3hUdiCa4frPkt+RvXZyjX1n=w@mail.gmail.com\n> [2] https://postgr.es/m/CA+TgmoaAjkTibkEr=xJg3ndbRsHHSiYi2SJgX69MVosj=LJmug@mail.gmail.com\n> [3] https://postgr.es/m/CA+TgmoY13KQZF-=HNTrt9UYWYx3_oYOQpu9ioNT49jGgiDpUEA@mail.gmail.com\n\nThere was some discussion about having a version number in the partition\ndescriptor somewhere as a means to implement this. I couldn't figure\nout how that would work, or what the version number would be attached\nto. Surely the idea wasn't to increment the version number to every\npartition other than the one being detached?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 3 Aug 2020 19:51:44 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2020-Aug-03, Alvaro Herrera wrote:\n\n> Why two transactions? The reason is that in order for this to work, we\n> make a catalog change (mark it detached), and commit so that all\n> concurrent transactions can see the change. A second transaction waits\n> for anybody who holds any lock on the partitioned table and grabs Access\n> Exclusive on the partition (which now no one cares about, if they're\n> looking at the partitioned table), where the DDL action on the partition\n> can be completed.\n\nI forgot to mention. If for whatever reason the second transaction\nfails (say the user aborts it or there is a crash), then the partition\nis still marked as detached, so no queries would see it; but all the\nancillary catalog data remains. Just like when CREATE INDEX\nCONCURRENTLY fails, you keep an invalid index that must be dropped; in\nthis case, the changes to do are much more extensive, so manual action\nis out of the question. So there's another DDL command to be invoked,\n\nALTER TABLE parent DETACH PARTITION part FINALIZE;\n\nwhich will complete the detach action.\n\nIf we had UNDO then perhaps it would be possible to register an action\nso that the detach is completed automatically. But for now this seems\nsufficient.\n\n\nAnother aspect worth mentioning is constraints. In the patch, I create\na CHECK constraint to stand for the partition constraint that's going to\nlogically disappear. This was mentioned as a potential problem in one\nof Robert's emails (I didn't actually verify that this is a problem).\nHowever, a funny thing is that if a constraint already exists, you get a\ndupe, so after a few rounds of attach/detach you can see them pile up.\nI'll have to fix this at some point. But also, I need to think about\nwhether foreign keys have similar problems, since they are also used by\nthe optimizer.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 4 Aug 2020 12:56:25 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Mon, Aug 3, 2020 at 7:49 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Why two transactions? The reason is that in order for this to work, we\n> make a catalog change (mark it detached), and commit so that all\n> concurrent transactions can see the change. 
A second transaction waits\n> for anybody who holds any lock on the partitioned table and grabs Access\n> Exclusive on the partition (which now no one cares about, if they're\n> looking at the partitioned table), where the DDL action on the partition\n> can be completed.\n\nIs there a more detailed theory of operation of this patch somewhere?\nWhat exactly do you mean by marking it detached? Committing the change\nmakes it possible for all concurrent transactions to see the change,\nbut does not guarantee that they will. If you can't guarantee that,\nthen I'm not sure how you can guarantee that they will behave sanely.\nEven if you can, it's not clear what the sane behavior is: what\nhappens when a tuple gets routed to an ex-partition? What happens when\nan ex-partition needs to be scanned? What prevents problems if a\npartition is detached, possibly modified, and then reattached,\npossibly with different partition bounds?\n\nI think the two-transaction approach is interesting and I can imagine\nthat it possibly solves some problems, but it's not clear to me\nexactly which problems it solves or how it does so.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 4 Aug 2020 13:53:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2020-Aug-04, Robert Haas wrote:\n\n> On Mon, Aug 3, 2020 at 7:49 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > Why two transactions? The reason is that in order for this to work, we\n> > make a catalog change (mark it detached), and commit so that all\n> > concurrent transactions can see the change. A second transaction waits\n> > for anybody who holds any lock on the partitioned table and grabs Access\n> > Exclusive on the partition (which now no one cares about, if they're\n> > looking at the partitioned table), where the DDL action on the partition\n> > can be completed.\n> \n> Is there a more detailed theory of operation of this patch somewhere?\n> What exactly do you mean by marking it detached? Committing the change\n> makes it possible for all concurrent transactions to see the change,\n> but does not guarantee that they will. If you can't guarantee that,\n> then I'm not sure how you can guarantee that they will behave sanely.\n\nSorry for the long delay. I haven't written up the theory of operation.\nI suppose it is complicated enough that it should be documented\nsomewhere.\n\nTo mark it detached means to set pg_inherits.inhdetached=true. That\ncolumn name is a bit of a misnomer, since that column really means \"is\nin the process of being detached\"; the pg_inherits row goes away\nentirely once the detach process completes. This mark guarantees that\neveryone will see that row because the detaching session waits long\nenough after committing the first transaction and before deleting the\npg_inherits row, until everyone else has moved on.\n\nThe other point is that the partition directory code can be asked to\ninclude detached partitions, or not to. The executor does the former,\nand the planner does the latter. 
This allows a transient period during\nwhich the partition descriptor returned to planner and executor is\ndifferent; this makes the situation equivalent to what would have\nhappened if the partition was attached during the operation: in executor\nwe would detect that there is an additional partition that was not seen\nby the planner, and we already know how to deal with that situation by\nyour handling of the ATTACH code.\n\n> Even if you can, it's not clear what the sane behavior is: what\n> happens when a tuple gets routed to an ex-partition? What happens when\n> an ex-partition needs to be scanned? \n\nDuring the transient period, any transaction that obtained a partition\ndescriptor before the inhdetached mark is committed should be able to\nget the tuple routing done successfully, but transactions using later\nsnapshots should get their insertions rejected. Reads should behave in\nthe same way.\n\n> What prevents problems if a partition is detached, possibly modified,\n> and then reattached, possibly with different partition bounds?\n\nThis should not be a problem, because the partition needs to be fully\ndetached before it can be attached again. And if the partition bounds\nare different, that won't be a problem, because the previous partition\nbounds won't be present in the pg_class row. Of course, the new\npartition bounds will be checked to the existing contents.\n\nThere is one fly in the ointment though, which is that if you cancel the\nwait and then immediately do the ALTER TABLE DETACH FINALIZE without\nwaiting for as long as the original execution would have waited, you\nmight end up killing the partition ahead of time. One solution to this\nwould be to cause the FINALIZE action to wait again at start. This\nwould cause it to take even longer, but it would be safer. (If you\ndon't want it to take longer, you can just not cancel it in the first\nplace.) This is not a problem if the server crashes in between (which\nis the scenario I had in mind when doing the FINALIZE thing), because of\ncourse no transaction can continue to run across a crash.\n\n\nI'm going to see if I can get the new delay_execution module to help\ntest this, to verify whether my claims are true.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 26 Aug 2020 19:40:07 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Wed, Aug 26, 2020 at 7:40 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> To mark it detached means to set pg_inherits.inhdetached=true. That\n> column name is a bit of a misnomer, since that column really means \"is\n> in the process of being detached\"; the pg_inherits row goes away\n> entirely once the detach process completes. This mark guarantees that\n> everyone will see that row because the detaching session waits long\n> enough after committing the first transaction and before deleting the\n> pg_inherits row, until everyone else has moved on.\n\nOK. Do you just wait until the XID of the transaction that set\ninhdetached is all-visible, or how do you do it?\n\n> The other point is that the partition directory code can be asked to\n> include detached partitions, or not to. The executor does the former,\n> and the planner does the latter. 
This allows a transient period during\n> which the partition descriptor returned to planner and executor is\n> different; this makes the situation equivalent to what would have\n> happened if the partition was attached during the operation: in executor\n> we would detect that there is an additional partition that was not seen\n> by the planner, and we already know how to deal with that situation by\n> your handling of the ATTACH code.\n\nAh ha! That is quite clever and I don't think that I would have\nthought of it. So all the plans that were created before you set\ninhdetached=true have to be guaranteed to be invaliated or gone\naltogether before you can actually delete the pg_inherits row. It\nseems like it ought to be possible to ensure that, though I am not\nsurely of the details exactly. Is it sufficient to wait for all\ntransactions that have locked the table to go away? I'm not sure\nexactly how this stuff interacts with the plan cache.\n\n> There is one fly in the ointment though, which is that if you cancel the\n> wait and then immediately do the ALTER TABLE DETACH FINALIZE without\n> waiting for as long as the original execution would have waited, you\n> might end up killing the partition ahead of time. One solution to this\n> would be to cause the FINALIZE action to wait again at start. This\n> would cause it to take even longer, but it would be safer. (If you\n> don't want it to take longer, you can just not cancel it in the first\n> place.) This is not a problem if the server crashes in between (which\n> is the scenario I had in mind when doing the FINALIZE thing), because of\n> course no transaction can continue to run across a crash.\n\nYeah, it sounds like this will require some solution, but I agree that\njust waiting \"longer\" seems acceptable.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 27 Aug 2020 11:46:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Hi hacker,\n\nI tested the patch provided by Alvaro. 
It seems not well in REPEATABLE READ.\n\ngpadmin=*# \\d+ tpart\n Partitioned table \"public.tpart\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n--------+---------+-----------+----------+---------+---------+--------------+-------------\n i | integer | | | | plain | |\n j | integer | | | | plain | |\nPartition key: RANGE (i)\nPartitions: tpart_1 FOR VALUES FROM (0) TO (100),\n tpart_2 FOR VALUES FROM (100) TO (200)\n\nbegin isolation level repeatable read;\nBEGIN\ngpadmin=*# select * from tpart;\n i | j\n-----+-----\n 10 | 10\n 50 | 50\n 110 | 110\n 120 | 120\n 150 | 150\n(5 rows)\n-- Here, run `ALTER TABLE tpart DROP PARTITION tpart_2 CONCURRENTLY`\n-- but only complete the first transaction.\n\n-- the tuples from tpart_2 are gone.\ngpadmin=*# select * from tpart;\n i | j\n----+----\n 10 | 10\n 50 | 50\n(2 rows)\n\ngpadmin=*# \\d+ tpart_2\n Table \"public.tpart_2\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n--------+---------+-----------+----------+---------+---------+--------------+-------------\n i | integer | | | | plain | |\n j | integer | | | | plain | |\nPartition of: tpart FOR VALUES FROM (100) TO (200)\nPartition constraint: ((i IS NOT NULL) AND (i >= 100) AND (i < 200))\nAccess method: heap\n\n-- the part tpart_2 is not showed as DETACHED\ngpadmin=*# \\d+ tpart\n Partitioned table \"public.tpart\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n--------+---------+-----------+----------+---------+---------+--------------+-------------\n i | integer | | | | plain | |\n j | integer | | | | plain | |\nPartition key: RANGE (i)\nPartitions: tpart_1 FOR VALUES FROM (0) TO (100),\n tpart_2 FOR VALUES FROM (100) TO (200)\n\n-- next, the insert failed. It's OK.\ngpadmin=*# insert into tpart values(130,130);\nERROR: no partition of relation \"tpart\" found for row\nDETAIL: Partition key of the failing row contains (i) = (130).\n\n\nIs this an expected behavior?\n\nRegards,\nHao Wu\n\n________________________________\nFrom: Robert Haas <robertmhaas@gmail.com>\nSent: Thursday, August 27, 2020 11:46 PM\nTo: Alvaro Herrera <alvherre@2ndquadrant.com>\nCc: Pg Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY\n\nOn Wed, Aug 26, 2020 at 7:40 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> To mark it detached means to set pg_inherits.inhdetached=true. That\n> column name is a bit of a misnomer, since that column really means \"is\n> in the process of being detached\"; the pg_inherits row goes away\n> entirely once the detach process completes. This mark guarantees that\n> everyone will see that row because the detaching session waits long\n> enough after committing the first transaction and before deleting the\n> pg_inherits row, until everyone else has moved on.\n\nOK. Do you just wait until the XID of the transaction that set\ninhdetached is all-visible, or how do you do it?\n\n> The other point is that the partition directory code can be asked to\n> include detached partitions, or not to. The executor does the former,\n> and the planner does the latter. 
This allows a transient period during\n> which the partition descriptor returned to planner and executor is\n> different; this makes the situation equivalent to what would have\n> happened if the partition was attached during the operation: in executor\n> we would detect that there is an additional partition that was not seen\n> by the planner, and we already know how to deal with that situation by\n> your handling of the ATTACH code.\n\nAh ha! That is quite clever and I don't think that I would have\nthought of it. So all the plans that were created before you set\ninhdetached=true have to be guaranteed to be invaliated or gone\naltogether before you can actually delete the pg_inherits row. It\nseems like it ought to be possible to ensure that, though I am not\nsurely of the details exactly. Is it sufficient to wait for all\ntransactions that have locked the table to go away? I'm not sure\nexactly how this stuff interacts with the plan cache.\n\n> There is one fly in the ointment though, which is that if you cancel the\n> wait and then immediately do the ALTER TABLE DETACH FINALIZE without\n> waiting for as long as the original execution would have waited, you\n> might end up killing the partition ahead of time. One solution to this\n> would be to cause the FINALIZE action to wait again at start. This\n> would cause it to take even longer, but it would be safer. (If you\n> don't want it to take longer, you can just not cancel it in the first\n> place.) This is not a problem if the server crashes in between (which\n> is the scenario I had in mind when doing the FINALIZE thing), because of\n> course no transaction can continue to run across a crash.\n\nYeah, it sounds like this will require some solution, but I agree that\njust waiting \"longer\" seems acceptable.\n\n--\nRobert Haas\nEnterpriseDB: https://urldefense.proofpoint.com/v2/url?u=http-3A__www.enterprisedb.com&d=DwIBaQ&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=tqYUKh-fXcYPWSaF4E-D6A&m=SEDl-6dEISo7BA0qWuv1-idQUVtO0M6qz7hcfwlrF3I&s=pZ7Dx6xrJOYkKKMlXR4wpJNZv-W10wQkMfXdEjtIXJY&e=\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n\n\n\n\n\nHi hacker,\n\n\n\n\nI tested the patch provided by Alvaro. 
It seems not well in REPEATABLE READ.\n\n\n\n\n\n\ngpadmin=*# \\d+ tpart\n                             Partitioned table \"public.tpart\"\n Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description\n--------+---------+-----------+----------+---------+---------+--------------+-------------\n i      | integer |           |          |         | plain   |              |\n j      | integer |           |          |         | plain   |              |\nPartition key: RANGE (i)\nPartitions: tpart_1 FOR VALUES FROM (0) TO (100),\n\n\n            tpart_2 FOR VALUES FROM (100) TO (200)\n\n\n\n\n\n\n\nbegin isolation level repeatable read;\n\nBEGIN\n\n\ngpadmin=*# select * from tpart;\n\n\n  i  |  j\n\n\n-----+-----\n\n\n  10 |  10\n\n\n  50 |  50\n\n\n 110 | 110\n\n\n 120 | 120\n\n\n 150 | 150\n\n\n(5 rows)\n\n\n-- Here, run `ALTER TABLE tpart DROP PARTITION tpart_2 CONCURRENTLY`\n-- but only complete the first transaction.\n\n\n\n\n\n\n\n-- the tuples from tpart_2 are gone.\n\n\n\n\n\ngpadmin=*# select * from tpart;\n\n\n i  | j\n\n\n----+----\n\n\n 10 | 10\n\n\n 50 | 50\n\n\n(2 rows)\n\n\n\n\n\n\ngpadmin=*# \\d+ tpart_2\n\n\n                                  Table \"public.tpart_2\"\n\n\n Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description\n\n\n--------+---------+-----------+----------+---------+---------+--------------+-------------\n\n\n i      | integer |           |          |         | plain   |              |\n\n\n j      | integer |           |          |         | plain   |              |\n\n\nPartition of: tpart FOR VALUES FROM (100) TO (200)\n\n\nPartition constraint: ((i IS NOT NULL) AND (i >= 100) AND (i < 200))\n\n\nAccess method: heap\n\n\n\n\n\n\n\n-- the part tpart_2 is not showed as DETACHED\n\n\n\n\n\ngpadmin=*# \\d+ tpart\n\n\n                             Partitioned table \"public.tpart\"\n\n\n Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description\n\n\n--------+---------+-----------+----------+---------+---------+--------------+-------------\n\n\n i      | integer |           |          |         | plain   |              |\n\n\n j      | integer |           |          |         | plain   |              |\n\n\nPartition key: RANGE (i)\n\n\nPartitions: tpart_1 FOR VALUES FROM (0) TO (100),\n\n\n            tpart_2 FOR VALUES FROM (100) TO (200)\n\n\n\n\n\n\n-- next, the insert failed. It's OK.\n\n\ngpadmin=*# insert into tpart values(130,130);\n\n\nERROR:  no partition of relation \"tpart\" found for row\n\n\nDETAIL:  Partition key of the failing row contains (i) = (130).\n\n\n\n\n\n\n\n\nIs this an expected behavior?\n\n\n\n\nRegards,\n\nHao Wu\n\n\n\n\n\n\nFrom: Robert Haas <robertmhaas@gmail.com>\nSent: Thursday, August 27, 2020 11:46 PM\nTo: Alvaro Herrera <alvherre@2ndquadrant.com>\nCc: Pg Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY\n \n\n\nOn Wed, Aug 26, 2020 at 7:40 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> To mark it detached means to set pg_inherits.inhdetached=true.  That\n> column name is a bit of a misnomer, since that column really means \"is\n> in the process of being detached\"; the pg_inherits row goes away\n> entirely once the detach process completes.  This mark guarantees that\n> everyone will see that row because the detaching session waits long\n> enough after committing the first transaction and before deleting the\n> pg_inherits row, until everyone else has moved on.\n\nOK. 
Do you just wait until the XID of the transaction that set\ninhdetached is all-visible, or how do you do it?\n\n> The other point is that the partition directory code can be asked to\n> include detached partitions, or not to.  The executor does the former,\n> and the planner does the latter.  This allows a transient period during\n> which the partition descriptor returned to planner and executor is\n> different; this makes the situation equivalent to what would have\n> happened if the partition was attached during the operation: in executor\n> we would detect that there is an additional partition that was not seen\n> by the planner, and we already know how to deal with that situation by\n> your handling of the ATTACH code.\n\nAh ha! That is quite clever and I don't think that I would have\nthought of it. So all the plans that were created before you set\ninhdetached=true have to be guaranteed to be invaliated or gone\naltogether before you can actually delete the pg_inherits row. It\nseems like it ought to be possible to ensure that, though I am not\nsurely of the details exactly. Is it sufficient to wait for all\ntransactions that have locked the table to go away? I'm not sure\nexactly how this stuff interacts with the plan cache.\n\n> There is one fly in the ointment though, which is that if you cancel the\n> wait and then immediately do the ALTER TABLE DETACH FINALIZE without\n> waiting for as long as the original execution would have waited, you\n> might end up killing the partition ahead of time.  One solution to this\n> would be to cause the FINALIZE action to wait again at start.  This\n> would cause it to take even longer, but it would be safer.  (If you\n> don't want it to take longer, you can just not cancel it in the first\n> place.)  This is not a problem if the server crashes in between (which\n> is the scenario I had in mind when doing the FINALIZE thing), because of\n> course no transaction can continue to run across a crash.\n\nYeah, it sounds like this will require some solution, but I agree that\njust waiting \"longer\" seems acceptable.\n\n-- \nRobert Haas\nEnterpriseDB: \nhttps://urldefense.proofpoint.com/v2/url?u=http-3A__www.enterprisedb.com&d=DwIBaQ&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=tqYUKh-fXcYPWSaF4E-D6A&m=SEDl-6dEISo7BA0qWuv1-idQUVtO0M6qz7hcfwlrF3I&s=pZ7Dx6xrJOYkKKMlXR4wpJNZv-W10wQkMfXdEjtIXJY&e=\n\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 31 Aug 2020 07:00:19 +0000", "msg_from": "Hao Wu <hawu@vmware.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2020-Aug-27, Robert Haas wrote:\n\n> On Wed, Aug 26, 2020 at 7:40 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > To mark it detached means to set pg_inherits.inhdetached=true. That\n> > column name is a bit of a misnomer, since that column really means \"is\n> > in the process of being detached\"; the pg_inherits row goes away\n> > entirely once the detach process completes. This mark guarantees that\n> > everyone will see that row because the detaching session waits long\n> > enough after committing the first transaction and before deleting the\n> > pg_inherits row, until everyone else has moved on.\n> \n> OK. Do you just wait until the XID of the transaction that set\n> inhdetached is all-visible, or how do you do it?\n\nI'm just doing WaitForLockers( ... AccessExclusiveLock ...) on the\npartitioned table at the start of the second transaction. 
That will\nwait until all lockers that have obtained a partition descriptor with\nthe old definition are gone. Note we don't actually lock the\npartitioned table with that lock level.\n\nIn the second transaction we additionally obtain AccessExclusiveLock on\nthe partition itself, but that's after nobody sees it as a partition\nanymore. That lock level is needed for some of the internal DDL\nchanges, and should not cause problems.\n\nI thought about using WaitForOlderSnapshots() instead of waiting for\nlockers, but it didn't seem to solve any actual problem.\n\nNote that on closing the first transaction, the locks on both tables are\nreleased. This avoids the deadlock hazards because of the lock upgrades\nthat would otherwise occur. This means that the tables could be dropped\nor changed in the meantime. The case where either relation is dropped\nis handled by using try_relation_open() in the second transaction; if\neither table is gone, then we can just mark the operation as completed.\nThis part is a bit fuzzy. One thing that should probably be done is\nhave a few operations (such as other ALTER TABLE) raise an error when\nrun on a table with inhdetached=true, because that might get things out\nof step and potentially cause other problems. I've not done that yet. \n\n> So all the plans that were created before you set\n> inhdetached=true have to be guaranteed to be invaliated or gone\n> altogether before you can actually delete the pg_inherits row. It\n> seems like it ought to be possible to ensure that, though I am not\n> surely of the details exactly. Is it sufficient to wait for all\n> transactions that have locked the table to go away? I'm not sure\n> exactly how this stuff interacts with the plan cache.\n\nHmm, any cached plan should be released with relcache inval events, per\nPlanCacheRelCallback(). There are some comments in plancache.h about\n\"unsaved\" cached plans that I don't really understand :-(\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 1 Sep 2020 14:15:27 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ALTER TABLE .. 
DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Not related to DETACH PARTITION, but I found a bug in ATTACH PARTITION.\nHere are the steps to reproduce the issue:\n\n\ncreate table tpart(i int, j int) partition by range(i);\ncreate table tpart_1(like tpart);\ncreate table tpart_2(like tpart);\ncreate table tpart_default(like tpart);alter table tpart attach partition tpart_1 for values from(0) to (100);\nalter table tpart attach partition tpart_default default;insert into tpart_2 values(110,110),(120,120),(150,150);1: begin;\n1: alter table tpart attach partition tpart_2 for values from(100) to (200);\n2: begin;\n-- insert will be blocked by ALTER TABLE ATTACH PARTITION\n2: insert into tpart values (110,110),(120,120),(150,150);\n1: end;\n2: select * from tpart_default;\n2: end;\n\nAfter that the partition tpart_default contains (110,110),(120,120),(150,150)\ninserted in session 2, which obviously violates the partition constraint.\n\nRegards,\nHao Wu\n\n\n\n\n\n\n\nNot related to DETACH PARTITION, but I found a bug in ATTACH PARTITION.\nHere are the steps to reproduce the issue:\n\n\n\ncreate table tpart(i int, j int) partition by range(i);create table tpart_1(like tpart);create table tpart_2(like tpart);create table tpart_default(like tpart);alter table tpart attach partition tpart_1 for values from(0) to (100);alter table tpart attach partition tpart_default default;insert into tpart_2 values(110,110),(120,120),(150,150);1: begin;1: alter table tpart attach partition tpart_2 for values from(100) to (200);2: begin;-- insert will be blocked by ALTER TABLE ATTACH PARTITION2: insert into tpart values (110,110),(120,120),(150,150);1: end;2: select * from tpart_default;2: end;\n\n\nAfter that the partition tpart_default contains (110,110),(120,120),(150,150)\ninserted in session 2, which obviously violates the partition constraint.\n\n\nRegards,\nHao Wu", "msg_date": "Wed, 2 Sep 2020 04:25:16 +0000", "msg_from": "Hao Wu <hawu@vmware.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Hi Hao,\n\nOn Wed, Sep 2, 2020 at 5:25 PM Hao Wu <hawu@vmware.com> wrote:\n>\n> Not related to DETACH PARTITION, but I found a bug in ATTACH PARTITION.\n> Here are the steps to reproduce the issue:\n>\n> create table tpart(i int, j int) partition by range(i);\n> create table tpart_1(like tpart);\n> create table tpart_2(like tpart);\n> create table tpart_default(like tpart);alter table tpart attach partition tpart_1 for values from(0) to (100);\n> alter table tpart attach partition tpart_default default;insert into tpart_2 values(110,110),(120,120),(150,150);1: begin;\n> 1: alter table tpart attach partition tpart_2 for values from(100) to (200);\n> 2: begin;\n> -- insert will be blocked by ALTER TABLE ATTACH PARTITION\n> 2: insert into tpart values (110,110),(120,120),(150,150);\n> 1: end;\n> 2: select * from tpart_default;\n> 2: end;\n>\n>\n> After that the partition tpart_default contains (110,110),(120,120),(150,150)\n> inserted in session 2, which obviously violates the partition constraint.\n\nThanks for the report. 
That looks like a bug.\n\nI have started another thread to discuss this bug and a patch to fix\nit to keep the discussion here focused on the new feature.\n\nSubject: default partition and concurrent attach partition\nhttps://www.postgresql.org/message-id/CA%2BHiwqFqBmcSSap4sFnCBUEL_VfOMmEKaQ3gwUhyfa4c7J_-nA%40mail.gmail.com\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Sep 2020 18:53:38 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2020-Aug-31, Hao Wu wrote:\n\n> I tested the patch provided by Alvaro. It seems not well in REPEATABLE READ.\n\n> -- the tuples from tpart_2 are gone.\n> gpadmin=*# select * from tpart;\n> i | j\n> ----+----\n> 10 | 10\n> 50 | 50\n> (2 rows)\n\nInteresting example, thanks. It seems this can be fixed without\nbreaking anything else by changing the planner so that it includes\ndetached partitions when we are in a snapshot-isolation transaction.\nIndeed, the results from the detach-partition-concurrently-1.spec\nisolation test are more satisfying with this change.\n\nThe attached v2, on current master, includes that change, as well as\nfixes a couple of silly bugs in the previous one.\n\n(Because of experimenting with git workflow I did not keep the 0001\npart split in this one, but that part is unchanged from v1.)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 10 Sep 2020 17:54:24 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Thu, Sep 10, 2020 at 4:54 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Interesting example, thanks. It seems this can be fixed without\n> breaking anything else by changing the planner so that it includes\n> detached partitions when we are in a snapshot-isolation transaction.\n> Indeed, the results from the detach-partition-concurrently-1.spec\n> isolation test are more satisfying with this change.\n\nHmm, so I think the idea here is that since we're out-waiting plans\nwith the old partition descriptor by waiting for lock release, it's OK\nfor anyone who has a lock to keep using the old partition descriptor\nas long as they continuously hold the lock. Is that right? I can't\nthink of a hole in that logic, but it's probably worth noting in the\ncomments, in case someone is tempted to change the way that we\nout-wait plans with the old partition descriptor to some other\nmechanism.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 11 Sep 2020 16:28:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Hi Alvaro,\n\nStudying the patch to understand how it works.\n\nOn Tue, Aug 4, 2020 at 8:49 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Why two transactions? The reason is that in order for this to work, we\n> make a catalog change (mark it detached), and commit so that all\n> concurrent transactions can see the change. 
A second transaction waits\n> for anybody who holds any lock on the partitioned table and grabs Access\n> Exclusive on the partition (which now no one cares about, if they're\n> looking at the partitioned table), where the DDL action on the partition\n> can be completed.\n\n+ /*\n+ * Concurrent mode has to work harder; first we add a new constraint to the\n+ * partition that matches the partition constraint. The reason for this is\n+ * that the planner may have made optimizations that depend on the\n+ * constraint. XXX Isn't it sufficient to invalidate the partition's\n+ * relcache entry?\n...\n+ /* Add constraint on partition, equivalent to the partition\nconstraint */\n+ n = makeNode(Constraint);\n+ n->contype = CONSTR_CHECK;\n+ n->conname = NULL;\n+ n->location = -1;\n+ n->is_no_inherit = false;\n+ n->raw_expr = NULL;\n+ n->cooked_expr =\nnodeToString(make_ands_explicit(RelationGetPartitionQual(partRel)));\n+ n->initially_valid = true;\n+ n->skip_validation = true;\n+ /* It's a re-add, since it nominally already exists */\n+ ATAddCheckConstraint(wqueue, tab, partRel, n,\n+ true, false, true, ShareUpdateExclusiveLock);\n\nI suspect that we don't really need this defensive constraint. I mean\neven after committing the 1st transaction, the partition being\ndetached still has relispartition set to true and\nRelationGetPartitionQual() still returns the partition constraint, so\nit seems the constraint being added above is duplicative. Moreover,\nthe constraint is not removed as part of any cleaning up after the\nDETACH process, so repeated attach/detach of the same partition\nresults in the constraints piling up:\n\ncreate table foo (a int, b int) partition by range (a);\ncreate table foo1 partition of foo for values from (1) to (2);\ncreate table foo2 partition of foo for values from (2) to (3);\nalter table foo detach partition foo2 concurrently;\nalter table foo attach partition foo2 for values from (2) to (3);\nalter table foo detach partition foo2 concurrently;\nalter table foo attach partition foo2 for values from (2) to (3);\n\\d foo2\n Table \"public.foo2\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\n b | integer | | |\nPartition of: foo FOR VALUES FROM (2) TO (3)\nCheck constraints:\n \"foo2_a_check\" CHECK (a IS NOT NULL AND a >= 2 AND a < 3)\n \"foo2_a_check1\" CHECK (a IS NOT NULL AND a >= 2 AND a < 3)\n\nNoticed a bug/typo in the patched RelationBuildPartitionDesc():\n\n- inhoids = find_inheritance_children(RelationGetRelid(rel), NoLock);\n+ inhoids = find_inheritance_children(RelationGetRelid(rel), NoLock,\n+ include_detached);\n\nYou're passing NoLock for include_detached which means you never\nactually end up including detached partitions from here. I think it\nis due to this bug that partition-concurrent-attach test fails in my\nrun. Also, the error seen in the following hunk of\ndetach-partition-concurrently-1 test is not intentional:\n\n+starting permutation: s1brr s1prep s1s s2d s1s s1exec2 s1c\n+step s1brr: BEGIN ISOLATION LEVEL REPEATABLE READ;\n+step s1prep: PREPARE f(int) AS INSERT INTO d_listp VALUES ($1);\n+step s1s: SELECT * FROM d_listp;\n+a\n+\n+1\n+2\n+step s2d: ALTER TABLE d_listp DETACH PARTITION d_listp2 CONCURRENTLY;\n<waiting ...>\n+step s1s: SELECT * FROM d_listp;\n+a\n+\n+1\n+step s1exec2: EXECUTE f(2); DEALLOCATE f;\n+step s2d: <... 
completed>\n+error in steps s1exec2 s2d: ERROR: no partition of relation\n\"d_listp\" found for row\n+step s1c: COMMIT;\n\nAs you're intending to make the executor always *include* detached\npartitions, the insert should be able route (2) to d_listp2, the\ndetached partition. It's the bug mentioned above that causes the\nexecutor's partition descriptor build to falsely fail to include the\ndetached partition.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Sep 2020 14:39:55 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2020-Sep-23, Amit Langote wrote:\n\nHi Amit, thanks for reviewing this patch!\n\n> On Tue, Aug 4, 2020 at 8:49 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> I suspect that we don't really need this defensive constraint. I mean\n> even after committing the 1st transaction, the partition being\n> detached still has relispartition set to true and\n> RelationGetPartitionQual() still returns the partition constraint, so\n> it seems the constraint being added above is duplicative.\n\nAh, thanks for thinking through that. I had vague thoughts about this\nbeing unnecessary in the current mechanics, but hadn't fully\nmaterialized the thought. (The patch was completely different a few\nunposted iterations ago).\n\n> Moreover, the constraint is not removed as part of any cleaning up\n> after the DETACH process, so repeated attach/detach of the same\n> partition results in the constraints piling up:\n\nYeah, I knew about this issue (mentioned in my self-reply on Aug 4) and\ndidn't worry too much about it because I was thinking I'd rather get rid\nof the constraint addition in the first place.\n\n> Noticed a bug/typo in the patched RelationBuildPartitionDesc():\n> \n> - inhoids = find_inheritance_children(RelationGetRelid(rel), NoLock);\n> + inhoids = find_inheritance_children(RelationGetRelid(rel), NoLock,\n> + include_detached);\n> \n> You're passing NoLock for include_detached which means you never\n> actually end up including detached partitions from here.\n\nI fixed this in the version I posted on Sept 10. I think you were\nreading the version posted at the start of this thread.\n\nThanks,\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 23 Sep 2020 12:23:21 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Hi Alvaro,\n\nSorry I totally failed to see the v2 you had posted and a couple of\nother emails where you mentioned the issues I brought up.\n\nOn Thu, Sep 24, 2020 at 12:23 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> On 2020-Sep-23, Amit Langote wrote:\n > I suspect that we don't really need this defensive constraint. I mean\n> > even after committing the 1st transaction, the partition being\n> > detached still has relispartition set to true and\n> > RelationGetPartitionQual() still returns the partition constraint, so\n> > it seems the constraint being added above is duplicative.\n>\n> Ah, thanks for thinking through that. I had vague thoughts about this\n> being unnecessary in the current mechanics, but hadn't fully\n> materialized the thought. 
(The patch was completely different a few\n> unposted iterations ago).\n>\n> > Moreover, the constraint is not removed as part of any cleaning up\n> > after the DETACH process, so repeated attach/detach of the same\n> > partition results in the constraints piling up:\n>\n> Yeah, I knew about this issue (mentioned in my self-reply on Aug 4) and\n> didn't worry too much about it because I was thinking I'd rather get rid\n> of the constraint addition in the first place.\n\nOkay, gotcha.\n\n> > Noticed a bug/typo in the patched RelationBuildPartitionDesc():\n> >\n> > - inhoids = find_inheritance_children(RelationGetRelid(rel), NoLock);\n> > + inhoids = find_inheritance_children(RelationGetRelid(rel), NoLock,\n> > + include_detached);\n> >\n> > You're passing NoLock for include_detached which means you never\n> > actually end up including detached partitions from here.\n>\n> I fixed this in the version I posted on Sept 10. I think you were\n> reading the version posted at the start of this thread.\n\nI am trying the v2 now and I can confirm that those problems are now fixed.\n\nHowever, I am a bit curious about including detached partitions in\nsome cases while not in other, which can result in a (to me)\nsurprising behavior as follows:\n\nSession 1:\n\ncreate table foo (a int, b int) partition by range (a);\ncreate table foo1 partition of foo for values from (1) to (2);\ncreate table foo2 partition of foo for values from (2) to (3);\n\n...attach GDB and set breakpoint so as to block right after finishing\nthe 1st transaction of DETACH PARTITION CONCURRENTLY...\nalter table foo detach partition foo2 concurrently;\n<hits breakpoint, wait...>\n\nSession 2:\n\nbegin;\ninsert into foo values (2); -- ok\nselect * from foo;\nselect * from foo; -- ?!\n a | b\n---+---\n(0 rows)\n\nMaybe, it's fine to just always exclude detached partitions, although\nperhaps I am missing some corner cases that you have thought of?\n\nAlso, I noticed that looking up a parent's partitions via\nRelationBuildPartitionDesc or directly will inspect inhdetached to\ninclude or exclude partitions, but checking if a child table is a\npartition of a given parent table via get_partition_parent doesn't.\nNow if you fix get_partition_parent() to also take into account\ninhdetached, for example, to return InvalidOid if true, then the\ncallers would need to not consider the child table a valid partition.\nSo, RelationGetPartitionQual() on a detached partition should actually\nreturn NIL, making my earlier claim about not needing the defensive\nCHECK constraint invalid. But maybe that's fine if all places agree\non a detached partition not being a valid partition anymore?\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Sep 2020 12:51:52 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. 
DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Thu, Sep 24, 2020 at 12:51:52PM +0900, Amit Langote wrote:\n> Also, I noticed that looking up a parent's partitions via\n> RelationBuildPartitionDesc or directly will inspect inhdetached to\n> include or exclude partitions, but checking if a child table is a\n> partition of a given parent table via get_partition_parent doesn't.\n> Now if you fix get_partition_parent() to also take into account\n> inhdetached, for example, to return InvalidOid if true, then the\n> callers would need to not consider the child table a valid partition.\n> So, RelationGetPartitionQual() on a detached partition should actually\n> return NIL, making my earlier claim about not needing the defensive\n> CHECK constraint invalid. But maybe that's fine if all places agree\n> on a detached partition not being a valid partition anymore?\n\nIt would be good to get that answered, and while on it please note\nthat the patch needs a rebase.\n--\nMichael", "msg_date": "Thu, 1 Oct 2020 12:50:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Hi Alvaro:\n\nOn Tue, Aug 4, 2020 at 7:49 AM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> I've been working on the ability to detach a partition from a\n> partitioned table, without causing blockages to concurrent activity.\n> I think this operation is critical for some use cases.\n>\n\nI think if it is possible to implement the detech with a NoWait option .\n\nALTER TABLE ... DETACH PARTITION .. [NoWait].\n\nif it can't get the lock, raise \"Resource is Busy\" immediately, without\nblocking others.\nthis should be a default behavior. If people do want to keep trying, it\ncan set\na ddl_lock_timeout to 'some-interval', in this case, it will still block\nothers(so it\ncan't be as good as what you are doing, but very simple), however the user\nwould know what would happen exactly and can coordinate with their\napplication accordingly. I'm sorry about this since it is a bit of\noff-topics\nor it has been discussed already.\n\n-- \nBest Regards\nAndy Fan\n\nHi Alvaro:On Tue, Aug 4, 2020 at 7:49 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:I've been working on the ability to detach a partition from a\npartitioned table, without causing blockages to concurrent activity.\nI think this operation is critical for some use cases.I think if it is possible to implement the detech with a NoWait option . ALTER TABLE ... DETACH PARTITION ..  [NoWait]. if it can't get the lock, raise \"Resource is Busy\" immediately, without blocking others. this should be a default behavior.   If people do want to keep trying, it can set a ddl_lock_timeout to 'some-interval',  in this case, it will still block others(so itcan't be as good as what you are doing, but very simple),  however the userwould know what would happen exactly and can coordinate with theirapplication accordingly.   I'm sorry about this since it is a bit of off-topicsor it has been discussed already. -- Best RegardsAndy Fan", "msg_date": "Thu, 15 Oct 2020 09:04:24 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2020-Oct-15, Andy Fan wrote:\n\n> I think if it is possible to implement the detech with a NoWait option .\n> \n> ALTER TABLE ... DETACH PARTITION .. 
[NoWait].\n> \n> if it can't get the lock, raise \"Resource is Busy\" immediately,\n> without blocking others. this should be a default behavior. If\n> people do want to keep trying, it can set a ddl_lock_timeout to\n> 'some-interval', in this case, it will still block others(so it can't\n> be as good as what you are doing, but very simple), however the user\n> would know what would happen exactly and can coordinate with their\n> application accordingly. I'm sorry about this since it is a bit of\n> off-topics or it has been discussed already.\n\nHi. I don't think this has been discussed, but it doesn't really solve\nthe use case I want to -- in many cases where the tables are\ncontinuously busy, this would lead to starvation. I think the proposal\nto make the algorithm work with reduced lock level is much more useful.\n\nI think you can already do NOWAIT behavior, with LOCK TABLE .. NOWAIT\nfollowed by DETACH PARTITION, perhaps with a nonzero statement timeout.\n\n\n", "msg_date": "Wed, 14 Oct 2020 22:08:40 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Thu, 15 Oct 2020 at 14:04, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> I think if it is possible to implement the detech with a NoWait option .\n>\n> ALTER TABLE ... DETACH PARTITION .. [NoWait].\n>\n> if it can't get the lock, raise \"Resource is Busy\" immediately, without blocking others.\n> this should be a default behavior. If people do want to keep trying, it can set\n> a ddl_lock_timeout to 'some-interval', in this case, it will still block others(so it\n> can't be as good as what you are doing, but very simple), however the user\n> would know what would happen exactly and can coordinate with their\n> application accordingly. I'm sorry about this since it is a bit of off-topics\n> or it has been discussed already.\n\nHow would that differ from setting a low lock_timeout and running the DDL?\n\nI think what Alvaro wants to avoid is taking the AEL in the first\nplace. When you have multiple long overlapping queries to the\npartitioned table, then there be no point in time where there are zero\nlocks on the table. It does not sound like your idea would help with\nthat.\n\nDavid\n\n\n", "msg_date": "Thu, 15 Oct 2020 14:09:11 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Hi David/Alvaro:\n\nOn Thu, Oct 15, 2020 at 9:09 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 15 Oct 2020 at 14:04, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > I think if it is possible to implement the detech with a NoWait option .\n> >\n> > ALTER TABLE ... DETACH PARTITION .. [NoWait].\n> >\n> > if it can't get the lock, raise \"Resource is Busy\" immediately, without\n> blocking others.\n> > this should be a default behavior. If people do want to keep trying,\n> it can set\n> > a ddl_lock_timeout to 'some-interval', in this case, it will still\n> block others(so it\n> > can't be as good as what you are doing, but very simple), however the\n> user\n> > would know what would happen exactly and can coordinate with their\n> > application accordingly. 
I'm sorry about this since it is a bit of\n> off-topics\n> > or it has been discussed already.\n>\n> How would that differ from setting a low lock_timeout and running the DDL?\n>\n\nThey are exactly the same (I didn't realize this parameter when I sent the\nemail).\n\n\n> I think what Alvaro wants to avoid is taking the AEL in the first\n> place.\n\n\nI'm agreed with this, that's why I said \"so it can't be as good as what\nyou are doing\"\n\n\n> When you have multiple long overlapping queries to the\n> partitioned table, then there be no point in time where there are zero\n> locks on the table. It does not sound like your idea would help with that.\n\n\n\nBased on my current knowledge, \"detach\" will hold an exclusive lock\nand it will have higher priority than other waiters. so it has to wait for\nthe lock\nholder before it (named as sess 1). and at the same time, block all the\nother\nwaiters which are requiring a lock even the lock mode is compatible with\nsession 1.\nSo \"deteach\" can probably get its lock in a short time (unless some long\ntransaction\nbefore it). I'm not sure if I have some misunderstanding here.\n\nOverall I'd be +1 for this patch.\n\n-- \nBest Regards\nAndy Fan\n\nHi David/Alvaro:On Thu, Oct 15, 2020 at 9:09 AM David Rowley <dgrowleyml@gmail.com> wrote:On Thu, 15 Oct 2020 at 14:04, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> I think if it is possible to implement the detech with a NoWait option .\n>\n> ALTER TABLE ... DETACH PARTITION ..  [NoWait].\n>\n> if it can't get the lock, raise \"Resource is Busy\" immediately, without blocking others.\n> this should be a default behavior.   If people do want to keep trying, it can set\n> a ddl_lock_timeout to 'some-interval',  in this case, it will still block others(so it\n> can't be as good as what you are doing, but very simple),  however the user\n> would know what would happen exactly and can coordinate with their\n> application accordingly.   I'm sorry about this since it is a bit of off-topics\n> or it has been discussed already.\n\nHow would that differ from setting a low lock_timeout and running the DDL? They are exactly the same (I didn't realize this parameter when I sent the email).   \nI think what Alvaro wants to avoid is taking the AEL in the first\nplace. I'm agreed with this,  that's why I said \"so it can't be as good as what you are doing\" When you have multiple long overlapping queries to the\npartitioned table, then there be no point in time where there are zero\nlocks on the table. It does not sound like your idea would help with that. Based on my current knowledge,  \"detach\" will hold an exclusive lock and it will have higher priority than other waiters.  so it has to wait for the lockholder before it (named as sess 1).  and at the same time, block all the otherwaiters which are requiring a lock even the lock mode is compatible with session 1. So \"deteach\" can probably get its lock in a short time (unless some long transactionbefore it). I'm not sure if I have some misunderstanding here. Overall I'd be +1 for this patch. -- Best RegardsAndy Fan", "msg_date": "Thu, 15 Oct 2020 11:38:23 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. 
DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2020-Sep-24, Amit Langote wrote:\n\nHello Amit,\n\n> Sorry I totally failed to see the v2 you had posted and a couple of\n> other emails where you mentioned the issues I brought up.\n\nNo worries, I appreciate you reviewing this.\n\n> However, I am a bit curious about including detached partitions in\n> some cases while not in other, which can result in a (to me)\n> surprising behavior as follows:\n[ snip ]\n> begin;\n> insert into foo values (2); -- ok\n> select * from foo;\n> select * from foo; -- ?!\n> a | b\n> ---+---\n> (0 rows)\n> \n> Maybe, it's fine to just always exclude detached partitions, although\n> perhaps I am missing some corner cases that you have thought of?\n\nWell, this particular case can be fixed by changing\nExecInitPartitionDispatchInfo specifically, from including detached\npartitions to excluding them, as in the attached version. Given your\nexample I think the case is fairly good that they should be excluded\nthere. I can't think of a case that this change break.\n\nHowever I'm not sure that excluding them everywhere is sensible. There\nare currently two cases where they are included (search for calls to\nCreatePartitionDirectory if you're curious). One is snapshot-isolation\ntransactions (repeatable read and serializable) in\nset_relation_partition_info, per the example from Hao Wu. If we simply\nexclude detached transaction there, repeatable read no longer works\nproperly; rows will just go missing for no apparent reason. I don't\nthink this is acceptable.\n\nThe other case is ExecCreatePartitionPruneState(). The whole point of\nincluding detached partitions here is to make them available for queries\nthat were planned before the detach and executed after the detach. My\nfear is that the pruning plan will contain references (from planner) to\npartitions that the executor doesn't know about. If there are extra\npartitions at the executor side, it shouldn't harm anything (and it\nshouldn't change query results); but I'm not sure that things will work\nOK if partitions seen by the planner disappear from under the executor.\n\nI'm posting this version just as a fresh rebase -- it is not yet\nintended for commit. I haven't touched the constraint stuff. I still\nthink that that can be removed before commit, which is a simple change;\nbut even if not, the problem with the duplicated constraints should be\neasy to fix.\n\nAgain, my thanks for reviewing.", "msg_date": "Fri, 16 Oct 2020 19:13:21 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Here's an updated version of this patch.\n\nApart from rebasing to current master, I made the following changes:\n\n* On the first transaction (the one that marks the partition as\ndetached), the partition is locked with ShareLock rather than\nShareUpdateExclusiveLock. This means that DML is not allowed anymore.\nThis seems a reasonable restriction to me; surely by the time you're\ndetaching the partition you're not inserting data into it anymore.\n\n* In ExecInitPartitionDispatchInfo, the previous version always\nexcluded detached partitions. Now it does include them in isolation\nlevel repeatable read and higher. Considering the point above, this\nsounds a bit contradictory: you shouldn't be inserting new tuples in\npartitions being detached. 
But if you do, it makes more sense: in RR\ntwo queries that insert tuples in the same partition would not fail\nmid-transaction. (In a read-committed transaction, the second query\ndoes fail, but to me that does not sound surprising.)\n\n* ALTER TABLE .. DETACH PARTITION FINALIZE now waits for concurrent old\nsnapshots, as previously discussed. This should ensure that the user\ndoesn't just cancel the wait during the second transaction by Ctrl-C and\nrun FINALIZE immediately afterwards, which I claimed would bring\ninconsistency.\n\n* Avoid creating the CHECK constraint if an identical one already\nexists.\n\n(Note I do not remove the constraint on ATTACH. That seems pointless.)\n\nStill to do: test this using the new hook added by 6f0b632f083b.", "msg_date": "Tue, 3 Nov 2020 20:56:06 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 04.11.2020 02:56, Alvaro Herrera wrote:\n> Here's an updated version of this patch.\n>\n> Apart from rebasing to current master, I made the following changes:\n>\n> * On the first transaction (the one that marks the partition as\n> detached), the partition is locked with ShareLock rather than\n> ShareUpdateExclusiveLock. This means that DML is not allowed anymore.\n> This seems a reasonable restriction to me; surely by the time you're\n> detaching the partition you're not inserting data into it anymore.\n>\n> * In ExecInitPartitionDispatchInfo, the previous version always\n> excluded detached partitions. Now it does include them in isolation\n> level repeatable read and higher. Considering the point above, this\n> sounds a bit contradictory: you shouldn't be inserting new tuples in\n> partitions being detached. But if you do, it makes more sense: in RR\n> two queries that insert tuples in the same partition would not fail\n> mid-transaction. (In a read-committed transaction, the second query\n> does fail, but to me that does not sound surprising.)\n>\n> * ALTER TABLE .. DETACH PARTITION FINALIZE now waits for concurrent old\n> snapshots, as previously discussed. This should ensure that the user\n> doesn't just cancel the wait during the second transaction by Ctrl-C and\n> run FINALIZE immediately afterwards, which I claimed would bring\n> inconsistency.\n>\n> * Avoid creating the CHECK constraint if an identical one already\n> exists.\n>\n> (Note I do not remove the constraint on ATTACH. That seems pointless.)\n>\n> Still to do: test this using the new hook added by 6f0b632f083b.\n\nStatus update for a commitfest entry.\n\nThe commitfest is nearing the end and this thread is \"Waiting on Author\".\nAs far as I see the last message contains a patch. Is there anything \nleft to work on or it needs review now? Alvaro, are you planning to \ncontinue working on it?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Mon, 30 Nov 2020 18:22:35 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2020-Nov-30, Anastasia Lubennikova wrote:\n\n> The commitfest is nearing the end and this thread is \"Waiting on Author\".\n> As far as I see the last message contains a patch. Is there anything left to\n> work on or it needs review now? Alvaro, are you planning to continue working\n> on it?\n\nThanks Anastasia. 
I marked it as needs review.\n\n\n", "msg_date": "Mon, 30 Nov 2020 12:29:51 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Tue, Nov 03, 2020 at 08:56:06PM -0300, Alvaro Herrera wrote:\n> Here's an updated version of this patch.\n> \n> Apart from rebasing to current master, I made the following changes:\n> \n> * On the first transaction (the one that marks the partition as\n> detached), the partition is locked with ShareLock rather than\n> ShareUpdateExclusiveLock. This means that DML is not allowed anymore.\n> This seems a reasonable restriction to me; surely by the time you're\n> detaching the partition you're not inserting data into it anymore.\n\nI don't think it's an issue with your patch, but FYI that sounds like something\nI had to do recently: detach *all* partitions of various tabls to promote their\npartition key column from timestamp to timestamptz. And we insert directly\ninto child tables, not routed via parent.\n\nI don't your patch is still useful, but not to us. So the documentation should\nbe clear about that.\n\n> * ALTER TABLE .. DETACH PARTITION FINALIZE now waits for concurrent old\n> snapshots, as previously discussed. This should ensure that the user\n> doesn't just cancel the wait during the second transaction by Ctrl-C and\n> run FINALIZE immediately afterwards, which I claimed would bring\n> inconsistency.\n> \n> * Avoid creating the CHECK constraint if an identical one already\n> exists.\n> \n> (Note I do not remove the constraint on ATTACH. That seems pointless.)\n> \n> Still to do: test this using the new hook added by 6f0b632f083b.\n\ntab complete?\n\n> + * Ensure that foreign keys still hold after this detach. This keeps lock\n> + * on the referencing tables, which prevent concurrent transactions from\n\nkeeps locks or\nwhich prevents\n\n> +++ b/doc/src/sgml/ref/alter_table.sgml\n> @@ -947,6 +950,24 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n> attached to the target table's indexes are detached. Any triggers that\n> were created as clones of those in the target table are removed.\n> </para>\n> + <para>\n> + If <literal>CONCURRENTLY</literal> is specified, this process runs in two\n> + transactions in order to avoid blocking other sessions that might be accessing\n> + the partitioned table. During the first transaction,\n> + <literal>SHARE UPDATE EXCLUSIVE</literal> is taken in both parent table and\n\nmissing \"lock\"\ntaken *on* ?\n\n> + partition, and the partition is marked detached; at that point, the transaction\n\nprobably \"its partition,\"\n\n> + If <literal>FINALIZE</literal> is specified, complete actions of a\n> + previous <literal>DETACH CONCURRENTLY</literal> invocation that\n> + was cancelled or crashed.\n\nsay \"actions are completed\" or:\n\n If FINALIZE is specified, a previous DETACH that was cancelled or interrupted\n is completed.\n\n> +\t\t\tif (!inhdetached && detached)\n> +\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> +\t\t\t\t\t\t errmsg(\"cannot complete detaching partition \\\"%s\\\"\",\n> +\t\t\t\t\t\t\t\tchildname),\n> +\t\t\t\t\t\t errdetail(\"There's no partial concurrent detach in progress.\")));\n\nmaybe say \"partially-complete\" or remove \"partial\"\n\n> +\t\t * the partition being detached? 
Putting them on the partition being\n> +\t\t * detached would be wrong, since they'd become \"lost\" after the but\nafter *that* ?\n\n> +\t * Concurrent mode has to work harder; first we add a new constraint to the\n> +\t * partition that matches the partition constraint, if there isn't a matching\n> +\t * one already. The reason for this is that the planner may have made\n> +\t * optimizations that depend on the constraint. XXX Isn't it sufficient to\n> +\t * invalidate the partition's relcache entry?\n\nHa. I suggested this years ago.\nhttps://www.postgresql.org/message-id/20180601221428.GU5164@telsasoft.com\n|. The docs say: if detaching/re-attach a partition, should first ADD CHECK to\n| avoid a slow ATTACH operation. Perhaps DETACHing a partition could\n| implicitly CREATE a constraint which is usable when reATTACHing?\n\n> +\t * Then we close our existing transaction, and in a new one wait for\n> +\t * all process to catch up on the catalog updates we've done so far; at\n\nprocesses\n\n> +\t\t * We don't need to concern ourselves with waiting for a lock the\n> +\t\t * partition itself, since we will acquire AccessExclusiveLock below.\n\nlock *on* ?\n\n> +\t * If asked to, wait until existing snapshots are gone. This is important\n> +\t * in the second transaction of DETACH PARTITION CONCURRENTLY is canceled:\n\ns/in/if/\n\n> +++ b/src/bin/psql/describe.c\n> -\t\t\tprintfPQExpBuffer(&tmpbuf, _(\"Partition of: %s %s\"), parent_name,\n> -\t\t\t\t\t\t\t partdef);\n> +\t\t\tprintfPQExpBuffer(&tmpbuf, _(\"Partition of: %s %s%s\"), parent_name,\n> +\t\t\t\t\t\t\t partdef,\n> +\t\t\t\t\t\t\t strcmp(detached, \"t\") == 0 ? \" DETACHED\" : \"\");\n\nThe attname \"detached\" is a stretch of what's intuitive (it's more like\n\"detachING\" or half-detached). But I think psql should for sure show something\nmore obvious to users. Esp. seeing as psql output isn't documented. Let's\nfigure out what to show to users and then maybe rename the column that, too.\n\n> +PG_KEYWORD(\"finalize\", FINALIZE, UNRESERVED_KEYWORD, BARE_LABEL)\n\nInstead of finalize .. deferred ? Or ??\n\nATExecDetachPartition:\nDoesn't this need to lock the table before testing for default partition ?\n\nI ended up with apparently broken constraint when running multiple loops around\na concurrent detach / attach:\n\nwhile psql -h /tmp postgres -c \"ALTER TABLE p ATTACH PARTITION p1 FOR VALUES FROM (1)TO(2)\" -c \"ALTER TABLE p DETACH PARTITION p1 CONCURRENTLY\"; do :; done&\nwhile psql -h /tmp postgres -c \"ALTER TABLE p ATTACH PARTITION p1 FOR VALUES FROM (1)TO(2)\" -c \"ALTER TABLE p DETACH PARTITION p1 CONCURRENTLY\"; do :; done&\n\n \"p1_check\" CHECK (true)\n \"p1_i_check\" CHECK (i IS NOT NULL AND i >= 1 AND i < 2)\n\n\n", "msg_date": "Mon, 30 Nov 2020 19:30:51 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Hi Justin,\n\nThanks for all the comments. I'll incorporate everything and submit an\nupdated version later.\n\nOn 2020-Nov-30, Justin Pryzby wrote:\n\n> On Tue, Nov 03, 2020 at 08:56:06PM -0300, Alvaro Herrera wrote:\n\n> > +++ b/src/bin/psql/describe.c\n> > -\t\t\tprintfPQExpBuffer(&tmpbuf, _(\"Partition of: %s %s\"), parent_name,\n> > -\t\t\t\t\t\t\t partdef);\n> > +\t\t\tprintfPQExpBuffer(&tmpbuf, _(\"Partition of: %s %s%s\"), parent_name,\n> > +\t\t\t\t\t\t\t partdef,\n> > +\t\t\t\t\t\t\t strcmp(detached, \"t\") == 0 ? 
\" DETACHED\" : \"\");\n> \n> The attname \"detached\" is a stretch of what's intuitive (it's more like\n> \"detachING\" or half-detached). But I think psql should for sure show something\n> more obvious to users. Esp. seeing as psql output isn't documented. Let's\n> figure out what to show to users and then maybe rename the column that, too.\n\nOK. I agree that \"being detached\" is the state we want users to see, or\nmaybe \"detach pending\", or \"unfinisheddetach\" (ugh). I'm not sure that\npg_inherits.inhbeingdetached\" is a great column name. Opinions welcome.\n\n> > +PG_KEYWORD(\"finalize\", FINALIZE, UNRESERVED_KEYWORD, BARE_LABEL)\n> \n> Instead of finalize .. deferred ? Or ??\n\nWell, I'm thinking that this has to be a verb in the imperative mood.\nThe user is commanding the server to \"finalize this detach operation\".\nI'm not sure that DEFERRED fits that grammatical role. If there are\nother ideas, let's discuss them.\n\nALTER TABLE tst DETACH PARTITION tst_1 FINALIZE <-- decent\nALTER TABLE tst DETACH PARTITION tst_1 COMPLETE <-- I don't like it\nALTER TABLE tst DETACH PARTITION tst_1 DEFERRED <-- grammatically faulty?\n\n> ATExecDetachPartition:\n> Doesn't this need to lock the table before testing for default partition ?\n\nCorrect, it does.\n\n> I ended up with apparently broken constraint when running multiple loops around\n> a concurrent detach / attach:\n> \n> while psql -h /tmp postgres -c \"ALTER TABLE p ATTACH PARTITION p1 FOR VALUES FROM (1)TO(2)\" -c \"ALTER TABLE p DETACH PARTITION p1 CONCURRENTLY\"; do :; done&\n> while psql -h /tmp postgres -c \"ALTER TABLE p ATTACH PARTITION p1 FOR VALUES FROM (1)TO(2)\" -c \"ALTER TABLE p DETACH PARTITION p1 CONCURRENTLY\"; do :; done&\n> \n> \"p1_check\" CHECK (true)\n> \"p1_i_check\" CHECK (i IS NOT NULL AND i >= 1 AND i < 2)\n\nNot good.\n\n\n", "msg_date": "Tue, 1 Dec 2020 12:25:19 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Tue, Aug 4, 2020 at 7:49 AM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> I've been working on the ability to detach a partition from a\n> partitioned table, without causing blockages to concurrent activity.\n> I think this operation is critical for some use cases.\n>\n>\nThis would be a very great feature. When we can't handle thousands of\npartitions\nvery well, and user agree to detach some old partitions automatically, the\nblocking\nissue here would be a big blocker for this solution. Thanks for working on\nthis!\n\n\n\n> There was a lot of great discussion which ended up in Robert completing\n> a much sought implementation of non-blocking ATTACH. DETACH was\n> discussed too because it was a goal initially, but eventually dropped\n> from that patch altogether. Nonetheless, that thread provided a lot of\n> useful input to this implementation. Important ones:\n>\n> [1]\n> https://postgr.es/m/CA+TgmoYg4x7AH=_QSptvuBKf+3hUdiCa4frPkt+RvXZyjX1n=w@mail.gmail.com\n> [2]\n> https://postgr.es/m/CA+TgmoaAjkTibkEr=xJg3ndbRsHHSiYi2SJgX69MVosj=LJmug@mail.gmail.com\n> [3]\n> https://postgr.es/m/CA+TgmoY13KQZF-=HNTrt9UYWYx3_oYOQpu9ioNT49jGgiDpUEA@mail.gmail.com\n>\n> Attached is a patch that implements\n> ALTER TABLE ... DETACH PARTITION .. CONCURRENTLY.\n>\n> In the previous thread we were able to implement the concurrent model\n> without the extra keyword. 
For this one I think that won't work; my\n> implementation works in two transactions so there's a restriction that\n> you can't run it in a transaction block. Also, there's a wait phase\n> that makes it slower than the non-concurrent one. Those two drawbacks\n> make me think that it's better to keep both modes available, just like\n> we offer both CREATE INDEX and CREATE INDEX CONCURRENTLY.\n>\n> Why two transactions? The reason is that in order for this to work, we\n> make a catalog change (mark it detached), and commit so that all\n> concurrent transactions can see the change. A second transaction waits\n> for anybody who holds any lock on the partitioned table and grabs Access\n> Exclusive on the partition (which now no one cares about, if they're\n> looking at the partitioned table), where the DDL action on the partition\n> can be completed.\n>\n> ALTER TABLE is normally unable to run in two transactions. I hacked it\n> (0001) so that the relation can be closed and reopened in the Exec phase\n> (by having the rel as part of AlteredTableInfo: when ATRewriteCatalogs\n> returns, it uses that pointer to close the rel). It turns out that this\n> is sufficient to make that work. This means that ALTER TABLE DETACH\n> CONCURRENTLY cannot work as part of a multi-command ALTER TABLE, but\n> that's alreay enforced by the grammar anyway.\n>\n> DETACH CONCURRENTLY doesn't work if a default partition exists. It's\n> just too problematic a case; you would still need to have AEL on the\n> default partition.\n>\n>\n> I haven't yet experimented with queries running in a standby in tandem\n> with a detach.\n>\n> --\n> Álvaro Herrera\n>\n\n\n-- \nBest Regards\nAndy Fan\n\nOn Tue, Aug 4, 2020 at 7:49 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:I've been working on the ability to detach a partition from a\npartitioned table, without causing blockages to concurrent activity.\nI think this operation is critical for some use cases.\nThis would be a very great feature.  When we can't handle thousands of partitionsvery well, and user agree to detach some old partitions automatically, the blockingissue here would be a big blocker for this solution. Thanks for working on this! \nThere was a lot of great discussion which ended up in Robert completing\na much sought implementation of non-blocking ATTACH.  DETACH was\ndiscussed too because it was a goal initially, but eventually dropped\nfrom that patch altogether. Nonetheless, that thread provided a lot of\nuseful input to this implementation.  Important ones:\n\n[1] https://postgr.es/m/CA+TgmoYg4x7AH=_QSptvuBKf+3hUdiCa4frPkt+RvXZyjX1n=w@mail.gmail.com\n[2] https://postgr.es/m/CA+TgmoaAjkTibkEr=xJg3ndbRsHHSiYi2SJgX69MVosj=LJmug@mail.gmail.com\n[3] https://postgr.es/m/CA+TgmoY13KQZF-=HNTrt9UYWYx3_oYOQpu9ioNT49jGgiDpUEA@mail.gmail.com\n\nAttached is a patch that implements\nALTER TABLE ... DETACH PARTITION .. CONCURRENTLY.\n\nIn the previous thread we were able to implement the concurrent model\nwithout the extra keyword.  For this one I think that won't work; my\nimplementation works in two transactions so there's a restriction that\nyou can't run it in a transaction block.  Also, there's a wait phase\nthat makes it slower than the non-concurrent one.  Those two drawbacks\nmake me think that it's better to keep both modes available, just like\nwe offer both CREATE INDEX and CREATE INDEX CONCURRENTLY.\n\nWhy two transactions?  
The reason is that in order for this to work, we\nmake a catalog change (mark it detached), and commit so that all\nconcurrent transactions can see the change.  A second transaction waits\nfor anybody who holds any lock on the partitioned table and grabs Access\nExclusive on the partition (which now no one cares about, if they're\nlooking at the partitioned table), where the DDL action on the partition\ncan be completed.\n\nALTER TABLE is normally unable to run in two transactions.  I hacked it\n(0001) so that the relation can be closed and reopened in the Exec phase\n(by having the rel as part of AlteredTableInfo: when ATRewriteCatalogs\nreturns, it uses that pointer to close the rel).  It turns out that this\nis sufficient to make that work.  This means that ALTER TABLE DETACH\nCONCURRENTLY cannot work as part of a multi-command ALTER TABLE, but\nthat's alreay enforced by the grammar anyway.\n\nDETACH CONCURRENTLY doesn't work if a default partition exists.  It's\njust too problematic a case; you would still need to have AEL on the\ndefault partition.\n\n\nI haven't yet experimented with queries running in a standby in tandem\nwith a detach.\n\n-- \nÁlvaro Herrera\n-- Best RegardsAndy Fan", "msg_date": "Fri, 25 Dec 2020 16:02:05 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2020-Dec-01, Alvaro Herrera wrote:\n\n> On 2020-Nov-30, Justin Pryzby wrote:\n\n> Thanks for all the comments. I'll incorporate everything and submit an\n> updated version later.\n\nHere's a rebased version 5, with the typos fixed. More comments below.\n\n> > The attname \"detached\" is a stretch of what's intuitive (it's more like\n> > \"detachING\" or half-detached). But I think psql should for sure show something\n> > more obvious to users. Esp. seeing as psql output isn't documented. Let's\n> > figure out what to show to users and then maybe rename the column that, too.\n> \n> OK. I agree that \"being detached\" is the state we want users to see, or\n> maybe \"detach pending\", or \"unfinisheddetach\" (ugh). I'm not sure that\n> pg_inherits.inhbeingdetached\" is a great column name. 
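(As an aside, whichever spelling wins, once the flag exists it can be inspected from SQL. A sketch, written against inhdetachpending -- the name the column ends up with later in this thread -- so adjust if a different name is chosen:)

    SELECT c.relname AS partition, p.relname AS parent
    FROM pg_inherits i
    JOIN pg_class c ON c.oid = i.inhrelid
    JOIN pg_class p ON p.oid = i.inhparent
    WHERE i.inhdetachpending;
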
Opinions welcome.\n\nI haven't changed this yet; I can't make up my mind about what I like\nbest.\n\nPartition of: parent FOR VALUES IN (1) UNFINISHED DETACH\nPartition of: parent FOR VALUES IN (1) UNDER DETACH\nPartition of: parent FOR VALUES IN (1) BEING DETACHED\n\n> > ATExecDetachPartition:\n> > Doesn't this need to lock the table before testing for default partition ?\n> \n> Correct, it does.\n\nI failed to point out that by the time ATExecDetachPartition is called,\nthe relation has already been locked by the invoking ALTER TABLE support\ncode.\n\n> > I ended up with apparently broken constraint when running multiple loops around\n> > a concurrent detach / attach:\n> > \n> > while psql -h /tmp postgres -c \"ALTER TABLE p ATTACH PARTITION p1 FOR VALUES FROM (1)TO(2)\" -c \"ALTER TABLE p DETACH PARTITION p1 CONCURRENTLY\"; do :; done&\n> > while psql -h /tmp postgres -c \"ALTER TABLE p ATTACH PARTITION p1 FOR VALUES FROM (1)TO(2)\" -c \"ALTER TABLE p DETACH PARTITION p1 CONCURRENTLY\"; do :; done&\n> > \n> > \"p1_check\" CHECK (true)\n> > \"p1_i_check\" CHECK (i IS NOT NULL AND i >= 1 AND i < 2)\n> \n> Not good.\n\nHaven't had time to investigate this problem yet.\n\n-- \n�lvaro Herrera", "msg_date": "Fri, 8 Jan 2021 16:14:33 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Fri, Jan 08, 2021 at 04:14:33PM -0300, Alvaro Herrera wrote:\n> > > I ended up with apparently broken constraint when running multiple loops around\n> > > a concurrent detach / attach:\n> > > \n> > > while psql -h /tmp postgres -c \"ALTER TABLE p ATTACH PARTITION p1 FOR VALUES FROM (1)TO(2)\" -c \"ALTER TABLE p DETACH PARTITION p1 CONCURRENTLY\"; do :; done&\n> > > while psql -h /tmp postgres -c \"ALTER TABLE p ATTACH PARTITION p1 FOR VALUES FROM (1)TO(2)\" -c \"ALTER TABLE p DETACH PARTITION p1 CONCURRENTLY\"; do :; done&\n> > > \n> > > \"p1_check\" CHECK (true)\n> > > \"p1_i_check\" CHECK (i IS NOT NULL AND i >= 1 AND i < 2)\n> > \n> > Not good.\n> \n> Haven't had time to investigate this problem yet.\n\nI guess it's because you commited the txn and released lock in the middle of\nthe command.\n\n-- \nJustin", "msg_date": "Sun, 10 Jan 2021 16:15:41 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. 
DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Jan-10, Justin Pryzby wrote:\n\n> On Fri, Jan 08, 2021 at 04:14:33PM -0300, Alvaro Herrera wrote:\n> > > > I ended up with apparently broken constraint when running multiple loops around\n> > > > a concurrent detach / attach:\n> > > > \n> > > > while psql -h /tmp postgres -c \"ALTER TABLE p ATTACH PARTITION p1 FOR VALUES FROM (1)TO(2)\" -c \"ALTER TABLE p DETACH PARTITION p1 CONCURRENTLY\"; do :; done&\n> > > > while psql -h /tmp postgres -c \"ALTER TABLE p ATTACH PARTITION p1 FOR VALUES FROM (1)TO(2)\" -c \"ALTER TABLE p DETACH PARTITION p1 CONCURRENTLY\"; do :; done&\n> > > > \n> > > > \"p1_check\" CHECK (true)\n> > > > \"p1_i_check\" CHECK (i IS NOT NULL AND i >= 1 AND i < 2)\n> > > \n> > > Not good.\n> > \n> > Haven't had time to investigate this problem yet.\n> \n> I guess it's because you commited the txn and released lock in the middle of\n> the command.\n\nHmm, but if we take this approach, then we're still vulnerable to the\nproblem that somebody can do DETACH CONCURRENTLY and cancel the wait (or\ncrash the server), then mess up the state before doing DETACH FINALIZE:\nwhen they cancel the wait, the lock will be released.\n\nI think the right fix is to disallow any action on a partition which is\npending detach other than DETACH FINALIZE. (Didn't do that here.)\n\nHere's a rebase to current sources; there are no changes from v5.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"No hay ausente sin culpa ni presente sin disculpa\" (Prov. franc�s)", "msg_date": "Fri, 26 Feb 2021 17:32:36 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Rebase to current sources, to appease CF bot; no other changes.\n\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W", "msg_date": "Thu, 11 Mar 2021 13:26:51 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Feb-26, Alvaro Herrera wrote:\n\n> Hmm, but if we take this approach, then we're still vulnerable to the\n> problem that somebody can do DETACH CONCURRENTLY and cancel the wait (or\n> crash the server), then mess up the state before doing DETACH FINALIZE:\n> when they cancel the wait, the lock will be released.\n> \n> I think the right fix is to disallow any action on a partition which is\n> pending detach other than DETACH FINALIZE. (Didn't do that here.)\n\nHere's a fixup patch to do it that way. I tried running the commands\nyou showed and one of them immediately dies with the new error message;\nI can't cause the bogus constraint to show up anymore.\n\nI'll clean this up for a real submission tomorrow.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"The Gord often wonders why people threaten never to come back after they've\nbeen told never to return\" (www.actsofgord.com)", "msg_date": "Mon, 15 Mar 2021 20:04:37 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Mar-15, Alvaro Herrera wrote:\n\n> Here's a fixup patch to do it that way. I tried running the commands\n> you showed and one of them immediately dies with the new error message;\n> I can't cause the bogus constraint to show up anymore.\n\nActually, that was a silly fix that didn't actually work correctly, as I\ndiscovered immediately after sending it. 
The right fix is to forbid all\ncommands other than DETACH PARTITION FINALIZE in a partition that's in\nthe process of being detached.\n\nIn the attached v8, I did that; I also added a ton more tests that\nhopefully show how the feature should work in concurrent cases,\nincluding one case in which the transaction doing the detach is\ncancelled. I also renamed \"inhdetached\" to \"inhdetachpending\", per\nprevious discussion, including changing how to looks in psql.\n\nI am not aware of any other loose end in this patch; I consider this\nversion final. Barring further problem reports, I'll get this pushed\ntomorrow morning.\n\npsql completion is missing. If somebody would like to contribute that,\nI'm grateful.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"If you have nothing to say, maybe you need just the right tool to help you\nnot say it.\" (New York Times, about Microsoft PowerPoint)", "msg_date": "Wed, 17 Mar 2021 14:48:43 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "The v8 patch has the \"broken constraint\" problem.\n\nAlso, it \"fails to avoid\" adding duplicate constraints:\n\nCheck constraints:\n \"c\" CHECK (i IS NOT NULL AND i > 1 AND i < 2)\n \"cc\" CHECK (i IS NOT NULL AND i >= 1 AND i < 2)\n \"p1_check\" CHECK (true)\n \"p1_i_check\" CHECK (i IS NOT NULL AND i >= 1 AND i < 2)\n\n> diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml\n> index 5c9f4af1d5..0cb846f408 100644\n> --- a/doc/src/sgml/catalogs.sgml\n> +++ b/doc/src/sgml/catalogs.sgml\n> @@ -4485,6 +4485,16 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l\n> when using declarative partitioning.\n> </para></entry>\n> </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>inhdetachpending</structfield> <type>bool</type>\n> + </para>\n> + <para>\n> + Set to true for a partition that is in the process of being detached;\n> + false otherwise.\n> + </para></entry>\n> + </row>\n\nRemove \"Set to\" ?\nAnd say <literal>true</literal> and <literal>false</literal>\n\nProbably you'll hate the suggestion, but maybe it should be \"pendingdetach\".\nWe already have pg_settings.pending_restart.\n\n> + If <literal>CONCURRENTLY</literal> is specified, this process runs in two\n> + transactions in order to avoid blocking other sessions that might be accessing\n> + the partitioned table. During the first transaction, a\n> + <literal>SHARE UPDATE EXCLUSIVE</literal> lock is taken on both parent table and\n> + partition, and its partition is marked detached; at that point, the transaction\n> + is committed and all transactions using the partitioned table are waited for.\n> + Once all those transactions are gone, the second stage acquires\n\nInstead of \"gone\", say \"have completed\" ?\n\n> +/*\n> + * MarkInheritDetached\n> + *\n> + * When a partition is detached from its parent concurrently, we don't\n> + * remove the pg_inherits row until a second transaction; as a preparatory\n> + * step, this function marks the entry as 'detached', so that other\n\n*pending detached\n\n> + * The strategy for concurrency is to first modify the partition catalog\n> + * rows to make it visible to everyone that the partition is detached,\n\nthe inherits catalog?\n\n> +\t/*\n> +\t * In concurrent mode, the partition is locked with share-update-exclusive\n> +\t * in the first transaction. 
This allows concurrent transactions to be\n> +\t * doing DML to the partition.\n\n> +\t/*\n> +\t * Check inheritance conditions and either delete the pg_inherits row\n> +\t * (in non-concurrent mode) or just set the inhisdetached flag.\n\ndetachpending\n\n\n", "msg_date": "Wed, 17 Mar 2021 13:45:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Mar-17, Justin Pryzby wrote:\n\n> The v8 patch has the \"broken constraint\" problem.\n\nYeah, I had misunderstood what the problem was. I think a good solution\nto this is to have get_partition_parent return the true parent even when\na detach is pending, for one particular callsite. (This means adjusting\nall other callsites.) Notpatch attached (applies on top of v8).\n\n> Also, it \"fails to avoid\" adding duplicate constraints:\n> \n> Check constraints:\n> \"c\" CHECK (i IS NOT NULL AND i > 1 AND i < 2)\n> \"cc\" CHECK (i IS NOT NULL AND i >= 1 AND i < 2)\n> \"p1_check\" CHECK (true)\n> \"p1_i_check\" CHECK (i IS NOT NULL AND i >= 1 AND i < 2)\n\nDo you mean the \"cc\" and \"p1_i_check\" one? I mean, if you already have\na constraint in the partition that duplicates the partition constraint,\nthen during attach we still create our new constraint? I guess a\nsolution to this would be to scan all constraints and see if any equals\nthe expression that the new one would have. Sounds easy enough now that\nwrite it out loud.\n\nThanks\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"Java is clearly an example of money oriented programming\" (A. Stepanov)", "msg_date": "Fri, 19 Mar 2021 10:57:37 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Fri, Mar 19, 2021 at 10:57:37AM -0300, Alvaro Herrera wrote:\n> > Also, it \"fails to avoid\" adding duplicate constraints:\n> > \n> > Check constraints:\n> > \"c\" CHECK (i IS NOT NULL AND i > 1 AND i < 2)\n> > \"cc\" CHECK (i IS NOT NULL AND i >= 1 AND i < 2)\n> > \"p1_check\" CHECK (true)\n> > \"p1_i_check\" CHECK (i IS NOT NULL AND i >= 1 AND i < 2)\n> \n> Do you mean the \"cc\" and \"p1_i_check\" one? I mean, if you already have\n\nNo, I started with c and cc, and it added the broken constraint p1_check (which\nyou say you've fixed) and the redundant constraint p1_i_check. I guess that's\nwhat you meant.\n\n> a constraint in the partition that duplicates the partition constraint,\n> then during attach we still create our new constraint? I guess a\n> solution to this would be to scan all constraints and see if any equals\n> the expression that the new one would have. Sounds easy enough now that\n> write it out loud.\n\nBut it looks like DetachAddConstraintIfNeeded already intended to do that:\n\n+ if (equal(constraintExpr, thisconstr)) \n+ return; \n\nActually, it appears your latest notpatch resolves both these issues.\nBut note that it doesn't check if an existing constraint \"implies\" the new\nconstraint - maybe it should.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 21 Mar 2021 12:54:53 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. 
DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Mar-21, Justin Pryzby wrote:\n\n> On Fri, Mar 19, 2021 at 10:57:37AM -0300, Alvaro Herrera wrote:\n> > > Also, it \"fails to avoid\" adding duplicate constraints:\n> > > \n> > > Check constraints:\n> > > \"c\" CHECK (i IS NOT NULL AND i > 1 AND i < 2)\n> > > \"cc\" CHECK (i IS NOT NULL AND i >= 1 AND i < 2)\n> > > \"p1_check\" CHECK (true)\n> > > \"p1_i_check\" CHECK (i IS NOT NULL AND i >= 1 AND i < 2)\n> > \n> > Do you mean the \"cc\" and \"p1_i_check\" one? I mean, if you already have\n> \n> No, I started with c and cc, and it added the broken constraint p1_check (which\n> you say you've fixed) and the redundant constraint p1_i_check. I guess that's\n> what you meant.\n\nYes, that's what I meant.\n\n> > a constraint in the partition that duplicates the partition constraint,\n> > then during attach we still create our new constraint? I guess a\n> > solution to this would be to scan all constraints and see if any equals\n> > the expression that the new one would have. Sounds easy enough now that\n> > write it out loud.\n> \n> But it looks like DetachAddConstraintIfNeeded already intended to do that:\n> \n> + if (equal(constraintExpr, thisconstr))\n> + return;\n\nHah, so I had already done it, but forgot.\n\n> Actually, it appears your latest notpatch resolves both these issues.\n\nGreat.\n\n> But note that it doesn't check if an existing constraint \"implies\" the new\n> constraint - maybe it should.\n\nHm, I'm not sure I want to do that, because that means that if I later\nhave to attach the partition again with the same partition bounds, then\nI might have to incur a scan to recheck the constraint. I think we want\nto make the new constraint be as tight as possible ...\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n", "msg_date": "Sun, 21 Mar 2021 15:01:15 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Mar-19, Alvaro Herrera wrote:\n\n> diff --git a/src/backend/utils/cache/partcache.c b/src/backend/utils/cache/partcache.c\n> index 0fe4f55b04..6dfa3fb4a8 100644\n> --- a/src/backend/utils/cache/partcache.c\n> +++ b/src/backend/utils/cache/partcache.c\n> @@ -352,16 +352,9 @@ generate_partition_qual(Relation rel)\n> \t\treturn copyObject(rel->rd_partcheck);\n> \n> \t/*\n> -\t * Obtain parent relid; if it's invalid, then the partition is being\n> -\t * detached. The constraint is NIL in that case, and let's cache that.\n> +\t * Obtain parent relid. XXX explain why we need this\n> \t */\n> -\tparentrelid = get_partition_parent(RelationGetRelid(rel));\n> -\tif (parentrelid == InvalidOid)\n> -\t{\n> -\t\trel->rd_partcheckvalid = true;\n> -\t\trel->rd_partcheck = NIL;\n> -\t\treturn NIL;\n> -\t}\n> +\tparentrelid = get_partition_parent(RelationGetRelid(rel), true);\n\nOne thing that makes me uneasy about this, is that I don't understand\nhow does this happen with your test of two psqls doing attach/detach.\n(It is necessary for the case when the waiting concurrent detach is\ncanceled, and so this fix is necessary anyway). In your test, no\nwaiting transaction is ever cancelled; so what is the period during\nwhich the relation is not locked that causes this code to be hit? 
I\nfear that there's a bug in the lock protocol somewhere.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"In Europe they call me Niklaus Wirth; in the US they call me Nickel's worth.\n That's because in Europe they call me by name, and in the US by value!\"\n\n\n", "msg_date": "Sun, 21 Mar 2021 15:06:45 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Sun, Mar 21, 2021 at 03:01:15PM -0300, Alvaro Herrera wrote:\n> > But note that it doesn't check if an existing constraint \"implies\" the new\n> > constraint - maybe it should.\n> \n> Hm, I'm not sure I want to do that, because that means that if I later\n> have to attach the partition again with the same partition bounds, then\n> I might have to incur a scan to recheck the constraint. I think we want\n> to make the new constraint be as tight as possible ...\n\nThe ATTACH PARTITION checks if any existing constraint impilies the (proposed)\npartition bounds, not just if constraints are equal. So I'm suggesting to do\nthe same here.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 21 Mar 2021 13:14:20 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Mar-21, Justin Pryzby wrote:\n\n> On Sun, Mar 21, 2021 at 03:01:15PM -0300, Alvaro Herrera wrote:\n> > > But note that it doesn't check if an existing constraint \"implies\" the new\n> > > constraint - maybe it should.\n> > \n> > Hm, I'm not sure I want to do that, because that means that if I later\n> > have to attach the partition again with the same partition bounds, then\n> > I might have to incur a scan to recheck the constraint. I think we want\n> > to make the new constraint be as tight as possible ...\n> \n> The ATTACH PARTITION checks if any existing constraint impilies the (proposed)\n> partition bounds, not just if constraints are equal. So I'm suggesting to do\n> the same here.\n\nSo if we do that on DETACH, what would happen on ATTACH?\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n", "msg_date": "Sun, 21 Mar 2021 15:22:00 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Sun, Mar 21, 2021 at 03:22:00PM -0300, Alvaro Herrera wrote:\n> On 2021-Mar-21, Justin Pryzby wrote:\n> \n> > On Sun, Mar 21, 2021 at 03:01:15PM -0300, Alvaro Herrera wrote:\n> > > > But note that it doesn't check if an existing constraint \"implies\" the new\n> > > > constraint - maybe it should.\n> > > \n> > > Hm, I'm not sure I want to do that, because that means that if I later\n> > > have to attach the partition again with the same partition bounds, then\n> > > I might have to incur a scan to recheck the constraint. I think we want\n> > > to make the new constraint be as tight as possible ...\n> > \n> > The ATTACH PARTITION checks if any existing constraint impilies the (proposed)\n> > partition bounds, not just if constraints are equal. So I'm suggesting to do\n> > the same here.\n> \n> So if we do that on DETACH, what would happen on ATTACH?\n\nDo you mean what happens to the constraint that was already there ?\nNothing, since it's not ours to mess with. 
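To make the equal-versus-implies distinction concrete with a hypothetical example: suppose the partition carries a constraint equivalent to its old partition constraint, whether left behind by the concurrent detach or added by hand,

    ALTER TABLE p1 ADD CONSTRAINT p1_i_check
      CHECK (i IS NOT NULL AND i >= 1 AND i < 2);

then a later re-attach with the same bounds,

    ALTER TABLE p ATTACH PARTITION p1 FOR VALUES FROM (1) TO (2);

can skip the validation scan, because ATTACH tests whether existing constraints imply the partition constraint rather than comparing them for equality. A looser pre-existing constraint such as CHECK (i >= 0 AND i < 10) does not imply those bounds, so the scan would still be needed.
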
Checking ImpliedBy() rather than\nequal() doesn't change that.\n\nI proposed this a few years ago for DETACH (without concurrently), specifically\nto avoid the partition scans.\nhttps://www.postgresql.org/message-id/20180601221428.GU5164@telsasoft.com\n|The docs say: if detaching/re-attach a partition, should first ADD CHECK to\n|avoid a slow ATTACH operation. Perhaps DETACHing a partition could\n|implicitly CREATE a constraint which is usable when reATTACHing?\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 21 Mar 2021 13:29:03 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Mar-21, Justin Pryzby wrote:\n\n> On Sun, Mar 21, 2021 at 03:22:00PM -0300, Alvaro Herrera wrote:\n>\n> > So if we do that on DETACH, what would happen on ATTACH?\n> \n> Do you mean what happens to the constraint that was already there ?\n> Nothing, since it's not ours to mess with. Checking ImpliedBy() rather than\n> equal() doesn't change that.\n\nNo, I meant what happens regarding checking existing values in the\ntable: is the table scanned even if the partition constraint is implied\nby existing table constraints?\n\n> I proposed this a few years ago for DETACH (without concurrently), specifically\n> to avoid the partition scans.\n> https://www.postgresql.org/message-id/20180601221428.GU5164@telsasoft.com\n> |The docs say: if detaching/re-attach a partition, should first ADD CHECK to\n> |avoid a slow ATTACH operation. Perhaps DETACHing a partition could\n> |implicitly CREATE a constraint which is usable when reATTACHing?\n\nWell, I agree with you that we should add such a constraint.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"The problem with the future is that it keeps turning into the present\"\n(Hobbes)\n\n\n", "msg_date": "Sun, 21 Mar 2021 16:07:12 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Sun, Mar 21, 2021 at 01:14:20PM -0500, Justin Pryzby wrote:\n> On Sun, Mar 21, 2021 at 03:01:15PM -0300, Alvaro Herrera wrote:\n> > > But note that it doesn't check if an existing constraint \"implies\" the new\n> > > constraint - maybe it should.\n> > \n> > Hm, I'm not sure I want to do that, because that means that if I later\n> > have to attach the partition again with the same partition bounds, then\n> > I might have to incur a scan to recheck the constraint. I think we want\n> > to make the new constraint be as tight as possible ...\n> \n> The ATTACH PARTITION checks if any existing constraint impilies the (proposed)\n> partition bounds, not just if constraints are equal. So I'm suggesting to do\n> the same here.\n\nOn Sun, Mar 21, 2021 at 04:07:12PM -0300, Alvaro Herrera wrote:\n> On 2021-Mar-21, Justin Pryzby wrote:\n> > On Sun, Mar 21, 2021 at 03:22:00PM -0300, Alvaro Herrera wrote:\n> > > So if we do that on DETACH, what would happen on ATTACH?\n> > \n> > Do you mean what happens to the constraint that was already there ?\n> > Nothing, since it's not ours to mess with. 
Checking ImpliedBy() rather than\n> > equal() doesn't change that.\n> \n> No, I meant what happens regarding checking existing values in the\n> table: is the table scanned even if the partition constraint is implied\n> by existing table constraints?\n\nI'm still not sure we're talking about the same thing.\n\nYour patch adds a CHECK constraint during DETACH CONCURRENTLY, and I suggested\nthat it should avoid adding it if it's redundant with an existing constraint,\neven if not equal().\n\nThe current behavior (since v10) is this:\n\npostgres=# ALTER TABLE p ATTACH PARTITION p1 FOR VALUES FROM (1)TO(2);\nDEBUG: partition constraint for table \"p1\" is implied by existing constraints\nALTER TABLE\n\nAnd that wouldn't change, except the CHECK constraint would be added\nautomatically during detach (if it wasn't already implied). Maybe the CHECK\nconstraint should be added without CONCURRENTLY, too. One fewer difference in\nbehavior.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 21 Mar 2021 14:15:19 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "So I was about ready to get these patches pushed, when I noticed that in\nREPEATABLE READ isolation mode it is possible to insert rows violating\nan FK referencing the partition that is being detached. I'm not sure\nwhat is a good solution to this problem.\n\nThe problem goes like this:\n\n/* setup */\n\tdrop table if exists d4_primary, d4_primary1, d4_fk;\n\tcreate table d4_primary (a int primary key) partition by list (a);\n\tcreate table d4_primary1 partition of d4_primary for values in (1);\n\tinsert into d4_primary values (1);\n\tcreate table d4_fk (a int references d4_primary);\n\n/* session 1 */\n\tbegin isolation level repeatable read;\n\tselect * from d4_primary;\n\n/* session 2 */\n\talter table d4_primary detach partition d4_primary1 concurrently;\n\t-- blocks\n\t-- Cancel wait: Ctrl-c\n\n/* session 1 */\n\tinsert into d4_fk values (1);\n\tcommit;\n\nAt this point, d4_fk contains the value (1) which is not present in\nd4_primary.\n\nThis doesn't happen in READ COMMITTED mode; the INSERT at the final step\nfails with \"insert or update in table f4_fk violates the foreign key\",\nwhich is what I expected to happen here too.\n\nI had the idea that the RI code, in REPEATABLE READ mode, used a\ndifferent snapshot for the RI queries than the transaction snapshot.\nMaybe I'm wrong about that.\n\nI'm looking into that now.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"Cuando ma�ana llegue pelearemos segun lo que ma�ana exija\" (Mowgli)\n\n\n", "msg_date": "Tue, 23 Mar 2021 11:18:26 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Mar-23, Alvaro Herrera wrote:\n\n> So I was about ready to get these patches pushed, when I noticed that in\n> REPEATABLE READ isolation mode it is possible to insert rows violating\n> an FK referencing the partition that is being detached. I'm not sure\n> what is a good solution to this problem.\n\n...\n\n> I had the idea that the RI code, in REPEATABLE READ mode, used a\n> different snapshot for the RI queries than the transaction snapshot.\n\nI am definitely right about this. So why doesn't it work? 
The reason\nis that when SPI goes to execute the query, it obtains a new partition\ndirectory, and we tell it to include detached partitions precisely\nbecause we're in REPEATABLE READ mode.\n\nIn other words, the idea that we can blanket use the snapshot-isolation\ncondition to decide whether to include detached partitions or not, is\nbogus and needs at least the refinement that for any query that comes\nfrom the RI system, we need a partition directory that does not include\ndetached partitions.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"El sabio habla porque tiene algo que decir;\nel tonto, porque tiene que decir algo\" (Platon).\n\n\n", "msg_date": "Tue, 23 Mar 2021 11:55:54 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "I'm coming around to the idea that the fact that you can cancel the wait\nphase of DETACH CONCURRENTLY creates quite a disaster, and it's not easy\nto get away from it. The idea that REPEATABLE READ mode means that you\nnow see detached partitions as if they were in normal condition, is\ncompletely at odds with that behavior. \n\nI think a possible solution to this problem is that the \"detach\" flag in\npg_inherits is not a boolean anymore, but an Xid (or maybe two Xids).\nNot sure exactly which Xid(s) yet, and I'm not sure what are the exact\nrules, but the Xid becomes a marker that indicates an horizon past which\nthe partition is no longer visible. Then, REPEATABLE READ can see the\npartition, but only if its snapshot is older than the Xid.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"La persona que no quer�a pecar / estaba obligada a sentarse\n en duras y empinadas sillas / desprovistas, por cierto\n de blandos atenuantes\" (Patricio Vogel)\n\n\n", "msg_date": "Tue, 23 Mar 2021 12:25:23 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Mar-23, Alvaro Herrera wrote:\n\n> I think a possible solution to this problem is that the \"detach\" flag in\n> pg_inherits is not a boolean anymore, but an Xid (or maybe two Xids).\n> Not sure exactly which Xid(s) yet, and I'm not sure what are the exact\n> rules, but the Xid becomes a marker that indicates an horizon past which\n> the partition is no longer visible. Then, REPEATABLE READ can see the\n> partition, but only if its snapshot is older than the Xid.\n\nSo a solution to this problem seems similar (but not quite the same) as\npg_index.indcheckxmin: the partition is included in the partition\ndirectory, or not, depending on the pg_inherits tuple visibility for the\nactive snapshot. This solves the problem because the RI query uses a\nfresh snapshot, for which the partition has already been detached, while\nthe normal REPEATABLE READ query is using the old snapshot for which the\n'detach-pending' row is still seen as in progress. With this, the weird\nhack in a couple of places that needed to check the isolation level is\ngone, which makes me a bit more comfortable.\n\nSo attached is v9 with this problem solved.\n\nI'll add one more torture test, and if it works correctly I'll push it:\nhave a cursor in the repeatable read transaction, which can read the\nreferenced partition table and see the row in the detached partition,\nbut the RI query must not see that row. 
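Spelled out with the table names from the reproducer upthread, the scenario that test needs to cover looks roughly like this (a sketch, not the actual isolation spec):

    -- session 1
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT * FROM d4_primary;        -- takes the transaction snapshot

    -- session 2: ALTER TABLE d4_primary DETACH PARTITION d4_primary1 CONCURRENTLY;
    --            cancelled during the wait, leaving d4_primary1 detach-pending

    -- session 1 again
    DECLARE c CURSOR FOR SELECT * FROM d4_primary;
    FETCH ALL FROM c;                -- still sees the row in d4_primary1
    INSERT INTO d4_fk VALUES (1);    -- must fail: the RI query may not see that row
    COMMIT;
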
Bonus: the RI query is run from\nanother cursor that is doing UPDATE WHERE CURRENT OF that cursor.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"Estoy de acuerdo contigo en que la verdad absoluta no existe...\nEl problema es que la mentira s� existe y tu est�s mintiendo\" (G. Lama)", "msg_date": "Thu, 25 Mar 2021 12:50:39 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "I added that test as promised, and I couldn't find any problems, so I\nhave pushed it.\n\nThanks!\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Thu, 25 Mar 2021 18:03:36 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2020-Nov-30, Justin Pryzby wrote:\n\n> On Tue, Nov 03, 2020 at 08:56:06PM -0300, Alvaro Herrera wrote:\n\n> > * On the first transaction (the one that marks the partition as\n> > detached), the partition is locked with ShareLock rather than\n> > ShareUpdateExclusiveLock. This means that DML is not allowed anymore.\n> > This seems a reasonable restriction to me; surely by the time you're\n> > detaching the partition you're not inserting data into it anymore.\n> \n> I don't think it's an issue with your patch, but FYI that sounds like something\n> I had to do recently: detach *all* partitions of various tabls to promote their\n> partition key column from timestamp to timestamptz. And we insert directly\n> into child tables, not routed via parent.\n> \n> I don't your patch is still useful, but not to us. So the documentation should\n> be clear about that.\n\nFWIW since you mentioned this detail specifically: I backed away from\ndoing this (and use ShareUpdateExclusive), because it wasn't buying us\nanything anyway. The reason for it is that I wanted to close the hole\nfor RI queries, and this seemed the simplest fix; but it really *wasn't*\na fix anyway. My later games with the active snapshot (which are\npresent in the version I pushed) better close this problem. So I don't\nthink this would be a problem.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Thu, 25 Mar 2021 18:08:27 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. 
DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I added that test as promised, and I couldn't find any problems, so I\n> have pushed it.\n\nBuildfarm testing suggests there's an issue under CLOBBER_CACHE_ALWAYS:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2021-03-29%2018%3A14%3A24\n\nspecifically\n\ndiff -U3 /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/expected/detach-partition-concurrently-4.out /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/output_iso/results/detach-partition-concurrently-4.out\n--- /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/expected/detach-partition-concurrently-4.out\t2021-03-29 20:14:21.258199311 +0200\n+++ /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/output_iso/results/detach-partition-concurrently-4.out\t2021-03-30 18:54:34.272839428 +0200\n@@ -324,6 +324,7 @@\n 1 \n 2 \n step s1insert: insert into d4_fk values (1);\n+ERROR: insert or update on table \"d4_fk\" violates foreign key constraint \"d4_fk_a_fkey\"\n step s1c: commit;\n \n starting permutation: s2snitch s1b s1s s2detach s1cancel s3vacfreeze s1s s1insert s1c\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 31 Mar 2021 14:30:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Sun, Mar 21, 2021 at 03:01:15PM -0300, Alvaro Herrera wrote:\n> > But note that it doesn't check if an existing constraint \"implies\" the new\n> > constraint - maybe it should.\n> \n> Hm, I'm not sure I want to do that, because that means that if I later\n> have to attach the partition again with the same partition bounds, then\n> I might have to incur a scan to recheck the constraint. I think we want\n> to make the new constraint be as tight as possible ...\n\nIf it *implies* the partition constraint, then it's at least as tight (and\nmaybe tighter), yes ?\n\nI think you're concerned with the case that someone has a partition with\n\"tight\" bounds like (a>=200 and a<300) and a check constraint that's \"less\ntight\" like (a>=100 AND a<400). In that case, the loose check constraint\ndoesn't imply the tighter partition constraint, so your patch would add a\nnon-redundant constraint.\n\nI'm interested in the case that someone has a check constraint that almost but\nnot exactly matches the partition constraint, like (a<300 AND a>=200). In that\ncase, your patch adds a redundant constraint.\n\nI wrote a patch which seems to effect my preferred behavior - please check.\n\n-- \nJustin", "msg_date": "Sat, 10 Apr 2021 13:42:26 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. 
DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Mar-31, Tom Lane wrote:\n\n> diff -U3 /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/expected/detach-partition-concurrently-4.out /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/output_iso/results/detach-partition-concurrently-4.out\n> --- /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/expected/detach-partition-concurrently-4.out\t2021-03-29 20:14:21.258199311 +0200\n> +++ /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/output_iso/results/detach-partition-concurrently-4.out\t2021-03-30 18:54:34.272839428 +0200\n> @@ -324,6 +324,7 @@\n> 1 \n> 2 \n> step s1insert: insert into d4_fk values (1);\n> +ERROR: insert or update on table \"d4_fk\" violates foreign key constraint \"d4_fk_a_fkey\"\n> step s1c: commit;\n> \n> starting permutation: s2snitch s1b s1s s2detach s1cancel s3vacfreeze s1s s1insert s1c\n\nHmm, actually, looking at this closely, I think the expected output is\nbogus and trilobite is doing the right thing by throwing this error\nhere. The real question is why isn't this case behaving in that way in\nevery *other* animal.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"Puedes vivir s�lo una vez, pero si lo haces bien, una vez es suficiente\"\n\n\n", "msg_date": "Sun, 11 Apr 2021 17:20:35 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Sun, Apr 11, 2021 at 05:20:35PM -0400, Alvaro Herrera wrote:\n> On 2021-Mar-31, Tom Lane wrote:\n> \n> > diff -U3 /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/expected/detach-partition-concurrently-4.out /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/output_iso/results/detach-partition-concurrently-4.out\n> > --- /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/expected/detach-partition-concurrently-4.out\t2021-03-29 20:14:21.258199311 +0200\n> > +++ /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/output_iso/results/detach-partition-concurrently-4.out\t2021-03-30 18:54:34.272839428 +0200\n> > @@ -324,6 +324,7 @@\n> > 1 \n> > 2 \n> > step s1insert: insert into d4_fk values (1);\n> > +ERROR: insert or update on table \"d4_fk\" violates foreign key constraint \"d4_fk_a_fkey\"\n> > step s1c: commit;\n> > \n> > starting permutation: s2snitch s1b s1s s2detach s1cancel s3vacfreeze s1s s1insert s1c\n> \n> Hmm, actually, looking at this closely, I think the expected output is\n> bogus and trilobite is doing the right thing by throwing this error\n> here. The real question is why isn't this case behaving in that way in\n> every *other* animal.\n\nI was looking/thinking at it, and wondered whether it could be a race condition\ninvolving pg_cancel_backend()\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 11 Apr 2021 16:23:00 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. 
DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Mon, Apr 12, 2021 at 6:20 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Mar-31, Tom Lane wrote:\n>\n> > diff -U3 /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/expected/detach-partition-concurrently-4.out /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/output_iso/results/detach-partition-concurrently-4.out\n> > --- /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/expected/detach-partition-concurrently-4.out 2021-03-29 20:14:21.258199311 +0200\n> > +++ /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/output_iso/results/detach-partition-concurrently-4.out 2021-03-30 18:54:34.272839428 +0200\n> > @@ -324,6 +324,7 @@\n> > 1\n> > 2\n> > step s1insert: insert into d4_fk values (1);\n> > +ERROR: insert or update on table \"d4_fk\" violates foreign key constraint \"d4_fk_a_fkey\"\n> > step s1c: commit;\n> >\n> > starting permutation: s2snitch s1b s1s s2detach s1cancel s3vacfreeze s1s s1insert s1c\n>\n> Hmm, actually, looking at this closely, I think the expected output is\n> bogus and trilobite is doing the right thing by throwing this error\n> here. The real question is why isn't this case behaving in that way in\n> every *other* animal.\n\nIndeed.\n\nI can see a wrong behavior of RelationGetPartitionDesc() in a case\nthat resembles the above test case.\n\ndrop table if exists d4_primary, d4_primary1, d4_fk, d4_pid;\ncreate table d4_primary (a int primary key) partition by list (a);\ncreate table d4_primary1 partition of d4_primary for values in (1);\ncreate table d4_primary2 partition of d4_primary for values in (2);\ninsert into d4_primary values (1);\ninsert into d4_primary values (2);\ncreate table d4_fk (a int references d4_primary);\ninsert into d4_fk values (2);\ncreate table d4_pid (pid int);\n\nSession 1:\nbegin isolation level serializable;\nselect * from d4_primary;\n a\n---\n 1\n 2\n(2 rows)\n\nSession 2:\nalter table d4_primary detach partition d4_primary1 concurrently;\n<waits>\n\nSession 1:\n-- can see d4_primary1 as detached at this point, though still scans!\nselect * from d4_primary;\n a\n---\n 1\n 2\n(2 rows)\ninsert into d4_fk values (1);\nINSERT 0 1\nend;\n\nSession 2:\n<alter-finishes>\nALTER TABLE\n\nSession 1:\n-- FK violated\nselect * from d4_primary;\n a\n---\n 2\n(1 row)\nselect * from d4_fk;\n a\n---\n 1\n(1 row)\n\nThe 2nd select that session 1 performs adds d4_primary1, whose detach\nit *sees* is pending, to the PartitionDesc, but without setting its\nincludes_detached. The subsequent insert's RI query, because it uses\nthat PartitionDesc as-is, returns 1 as being present in d4_primary,\nleading to the insert succeeding. When session 1's transaction\ncommits, the waiting ALTER proceeds with committing the 2nd part of\nthe DETACH, without having a chance again to check if the DETACH would\nlead to the foreign key of d4_fk being violated.\n\nIt seems problematic to me that the logic of setting includes_detached\nis oblivious of the special check in find_inheritance_children() to\ndecide whether \"force\"-include a detach-pending child based on\ncross-checking its pg_inherit tuple's xmin against the active\nsnapshot. Maybe fixing that would help, although I haven't tried that\nmyself yet.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 12 Apr 2021 16:42:50 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. 
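(For what it's worth, the xmin being cross-checked there is just the system column on the pg_inherits row, so the raw ingredients can be inspected with something like

    SELECT xmin, inhparent::regclass, inhdetachpending
    FROM pg_inherits
    WHERE inhrelid = 'd4_primary1'::regclass;

and comparing that xmin against the active snapshot is what the special case in find_inheritance_children() is doing.)
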
DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Mon, Apr 12, 2021 at 4:42 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Apr 12, 2021 at 6:20 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > On 2021-Mar-31, Tom Lane wrote:\n> >\n> > > diff -U3 /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/expected/detach-partition-concurrently-4.out /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/output_iso/results/detach-partition-concurrently-4.out\n> > > --- /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/expected/detach-partition-concurrently-4.out 2021-03-29 20:14:21.258199311 +0200\n> > > +++ /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/output_iso/results/detach-partition-concurrently-4.out 2021-03-30 18:54:34.272839428 +0200\n> > > @@ -324,6 +324,7 @@\n> > > 1\n> > > 2\n> > > step s1insert: insert into d4_fk values (1);\n> > > +ERROR: insert or update on table \"d4_fk\" violates foreign key constraint \"d4_fk_a_fkey\"\n> > > step s1c: commit;\n> > >\n> > > starting permutation: s2snitch s1b s1s s2detach s1cancel s3vacfreeze s1s s1insert s1c\n> >\n> > Hmm, actually, looking at this closely, I think the expected output is\n> > bogus and trilobite is doing the right thing by throwing this error\n> > here. The real question is why isn't this case behaving in that way in\n> > every *other* animal.\n>\n> Indeed.\n>\n> I can see a wrong behavior of RelationGetPartitionDesc() in a case\n> that resembles the above test case.\n>\n> drop table if exists d4_primary, d4_primary1, d4_fk, d4_pid;\n> create table d4_primary (a int primary key) partition by list (a);\n> create table d4_primary1 partition of d4_primary for values in (1);\n> create table d4_primary2 partition of d4_primary for values in (2);\n> insert into d4_primary values (1);\n> insert into d4_primary values (2);\n> create table d4_fk (a int references d4_primary);\n> insert into d4_fk values (2);\n> create table d4_pid (pid int);\n>\n> Session 1:\n> begin isolation level serializable;\n> select * from d4_primary;\n> a\n> ---\n> 1\n> 2\n> (2 rows)\n>\n> Session 2:\n> alter table d4_primary detach partition d4_primary1 concurrently;\n> <waits>\n>\n> Session 1:\n> -- can see d4_primary1 as detached at this point, though still scans!\n> select * from d4_primary;\n> a\n> ---\n> 1\n> 2\n> (2 rows)\n> insert into d4_fk values (1);\n> INSERT 0 1\n> end;\n>\n> Session 2:\n> <alter-finishes>\n> ALTER TABLE\n>\n> Session 1:\n> -- FK violated\n> select * from d4_primary;\n> a\n> ---\n> 2\n> (1 row)\n> select * from d4_fk;\n> a\n> ---\n> 1\n> (1 row)\n>\n> The 2nd select that session 1 performs adds d4_primary1, whose detach\n> it *sees* is pending, to the PartitionDesc, but without setting its\n> includes_detached. The subsequent insert's RI query, because it uses\n> that PartitionDesc as-is, returns 1 as being present in d4_primary,\n> leading to the insert succeeding. When session 1's transaction\n> commits, the waiting ALTER proceeds with committing the 2nd part of\n> the DETACH, without having a chance again to check if the DETACH would\n> lead to the foreign key of d4_fk being violated.\n>\n> It seems problematic to me that the logic of setting includes_detached\n> is oblivious of the special check in find_inheritance_children() to\n> decide whether \"force\"-include a detach-pending child based on\n> cross-checking its pg_inherit tuple's xmin against the active\n> snapshot. 
Maybe fixing that would help, although I haven't tried that\n> myself yet.\n\nI tried that in the attached. It is indeed the above failing\nisolation test whose output needed to be adjusted.\n\nWhile at it, I tried rewording the comment around that special\nvisibility check done to force-include detached partitions, as I got\nconfused by the way it's worded currently. Actually, it may be a good\nidea to add some comments around the intended include_detached\nbehavior in the places where PartitionDesc is used; e.g.\nset_relation_partition_info() lacks a one-liner on why it's okay for\nthe planner to not see detached partitions. Or perhaps, a comment for\nincludes_detached of PartitionDesc should describe the various cases\nin which it is true and the cases in which it is not.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 12 Apr 2021 21:32:40 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Mon, Apr 12, 2021 at 6:23 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Sun, Apr 11, 2021 at 05:20:35PM -0400, Alvaro Herrera wrote:\n> > On 2021-Mar-31, Tom Lane wrote:\n> >\n> > > diff -U3 /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/expected/detach-partition-concurrently-4.out /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/output_iso/results/detach-partition-concurrently-4.out\n> > > --- /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/expected/detach-partition-concurrently-4.out 2021-03-29 20:14:21.258199311 +0200\n> > > +++ /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/isolation/output_iso/results/detach-partition-concurrently-4.out 2021-03-30 18:54:34.272839428 +0200\n> > > @@ -324,6 +324,7 @@\n> > > 1\n> > > 2\n> > > step s1insert: insert into d4_fk values (1);\n> > > +ERROR: insert or update on table \"d4_fk\" violates foreign key constraint \"d4_fk_a_fkey\"\n> > > step s1c: commit;\n> > >\n> > > starting permutation: s2snitch s1b s1s s2detach s1cancel s3vacfreeze s1s s1insert s1c\n> >\n> > Hmm, actually, looking at this closely, I think the expected output is\n> > bogus and trilobite is doing the right thing by throwing this error\n> > here. The real question is why isn't this case behaving in that way in\n> > every *other* animal.\n>\n> I was looking/thinking at it, and wondered whether it could be a race condition\n> involving pg_cancel_backend()\n\nI thought about it some and couldn't come up with an explanation as to\nwhy pg_cancel_backend() race might be a problem.\n\nActually it occurred to me this morning that CLOBBER_CACHE_ALWAYS is\nwhat exposed this problem on this animal (not sure if other such\nanimals did too though). With CLOBBER_CACHE_ALWAYS, a PartitionDesc\nwill be built afresh on most uses. In this particular case, the RI\nquery executed by the insert has to build a new one (for d4_primary),\ncorrectly excluding the detach-pending partition (d4_primary1) per the\nsnapshot with which it is run. In normal builds, it would reuse the\none built by an earlier query in the transaction, which does include\nthe detach-pending partition, thus allowing the insert trying to\ninsert a row referencing that partition to succeed. 
There is a\nprovision in RelationGetPartitionDesc() to rebuild if any\ndetach-pending partitions in the existing copy of PartitionDesc are\nnot to be seen by the current query, but a bug mentioned in my earlier\nreply prevents that from kicking in.\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 13 Apr 2021 11:13:34 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Apr-13, Amit Langote wrote:\n\n> Actually it occurred to me this morning that CLOBBER_CACHE_ALWAYS is\n> what exposed this problem on this animal (not sure if other such\n> animals did too though). With CLOBBER_CACHE_ALWAYS, a PartitionDesc\n> will be built afresh on most uses. In this particular case, the RI\n> query executed by the insert has to build a new one (for d4_primary),\n> correctly excluding the detach-pending partition (d4_primary1) per the\n> snapshot with which it is run. In normal builds, it would reuse the\n> one built by an earlier query in the transaction, which does include\n> the detach-pending partition, thus allowing the insert trying to\n> insert a row referencing that partition to succeed. There is a\n> provision in RelationGetPartitionDesc() to rebuild if any\n> detach-pending partitions in the existing copy of PartitionDesc are\n> not to be seen by the current query, but a bug mentioned in my earlier\n> reply prevents that from kicking in.\n\nRight -- that explanation makes perfect sense: the problem stems from\nthe fact that the cached copy of the partition descriptor is not valid\ndepending on the visibility of detached partitions for the operation\nthat requests the descriptor. I think your patch is a critical part to\na complete solution, but one thing is missing: we don't actually know\nthat the detached partitions we see now are the same detached partitions\nwe saw a moment ago. After all, a partitioned table can have several\npartitions in the process of being detached; so if you just go with the\nboolean \"does it have any detached or not\" bit, you could end up with a\ndescriptor that doesn't include/ignore *all* detached partitions, just\nthe older one(s).\n\nI think you could fix this aspect easily by decreeing that you can only\nhave only one partition-being-detached at one point. So if you try to\nDETACH CONCURRENTLY and there's another one in that state, raise an\nerror. Maybe for simplicity we should do that anyway.\n\nBut I think there's another hidden assumption in your patch, which is\nthat the descriptor is rebuilt every now and then *anyway* because the\nflag for detached flips between parser and executor, and because we send\ninvalidation messages for each detach. I don't think we would ever\nchange things that would break this flipping (it sounds like planner and\nexecutor necessarily have to be doing things differently all the time),\nbut it seems fragile as heck. I would feel much safer if we just\navoided caching the wrong thing ... or perhaps keep a separate cache\nentry (one descriptor including detached, another one not), to avoid\npointless rebuilds.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Tue, 13 Apr 2021 12:10:30 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. 
DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "OK so after mulling this over for a long time, here's a patch for this.\nIt solves the problem by making the partition descriptor no longer be\ncached if a detached partition is omitted. Any transaction that needs a\npartition descriptor that excludes detached partitions, will have to\nrecreate the partdesc from scratch. To support this, I changed the\noutput boolean semantics: instead of \"does this partdesc include an\ndetached partitions\" as in your patch, it now is \"are there any detached\npartitions\". But whenever no detached partitions exist, or when all\npartitions including detached are requested, then the cached descriptor\nis returned (because that necessarily has to be correct). The main\ndifference this has to your patch is that we always keep the descriptor\nin the cache and don't rebuild anything, unless a detached partition is\npresent.\n\nI flipped the find_inheritance_children() input boolean, from\n\"include_detached\" to \"omit_detached\". This is more natural, given the\ninternal behavior. You could argue to propagate that naming change to\nthe partdesc.h API and PartitionDirectory, but I don't think there's a\nneed for that.\n\nI ran all the detach-partition-concurrently tests under\ndebug_invalidate_system_caches_always=1 and everything passes.\n\nI experimented with keeping a separate cached partition descriptor that\nomits the detached partition, but that brings back some trouble; I\ncouldn't find a way to invalidate such a cached entry in a reasonable\nway. I have the patch for that, if somebody wants to play with it.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"That sort of implies that there are Emacs keystrokes which aren't obscure.\nI've been using it daily for 2 years now and have yet to discover any key\nsequence which makes any sense.\" (Paul Thomas)", "msg_date": "Tue, 20 Apr 2021 18:41:07 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Actually I had a silly bug in the version that attempted to cache a\npartdesc that omits detached partitions. This one, while not fully\nbaked, seems to work correctly (on top of the previous one).\n\nThe thing that I don't fully understand is why we have to require to\nhave built the regular one first.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"This is what I like so much about PostgreSQL. Most of the surprises\nare of the \"oh wow! That's cool\" Not the \"oh shit!\" kind. :)\"\nScott Marlowe, http://archives.postgresql.org/pgsql-admin/2008-10/msg00152.php", "msg_date": "Tue, 20 Apr 2021 20:46:49 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "While the approach in the previous email does pass the tests, I think\n(but couldn't find a test case to prove) it does so coincidentally, not\nbecause it is correct. If I make the test for \"detached exist\" use the\ncached omits-partitions-partdesc, it does fail, because we had\npreviously cached one that was not yet omitting the partition. So what\nI said earlier in the thread stands: the set of partitions that are\nconsidered detached changes depending on what the active snapshot is,\nand therefore we *must not* cache any such descriptor.\n\nSo I backtracked to my previous proposal, which saves in relcache only\nthe partdesc that includes all partitions. 
If any partdesc is built\nthat omits partitions being detached, that one must be rebuilt afresh\neach time. And to avoid potentially saving a lot of single-use\npartdescs in CacheMemoryContext, in the attached second patch (which I\nattach separately only to make it more obvious to review) I store such\npartdescs in PortalContext.\n\nBarring objections, I will get this pushed early tomorrow.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"Just treat us the way you want to be treated + some extra allowance\n for ignorance.\" (Michael Brusser)", "msg_date": "Wed, 21 Apr 2021 16:38:55 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Apr-10, Justin Pryzby wrote:\n\n> If it *implies* the partition constraint, then it's at least as tight (and\n> maybe tighter), yes ?\n> \n> I think you're concerned with the case that someone has a partition with\n> \"tight\" bounds like (a>=200 and a<300) and a check constraint that's \"less\n> tight\" like (a>=100 AND a<400). In that case, the loose check constraint\n> doesn't imply the tighter partition constraint, so your patch would add a\n> non-redundant constraint.\n\n... yeah, you're right, we can do as you suggest and it seems an\nimprovement. I verified, as is obvious in hindsight, that the existing\nconstraint makes a future ATTACH of the partition with the same bounds\nas before not scan the partition.\n\nI pushed the patch with a small change:\nPartConstraintImpliedByRelConstraint wants the constraint in\nimplicit-AND form (that is, a list) which is what we already have, so we\ncan postpone make_ands_explicit() until later.\n\nPushed, thanks,\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Wed, 21 Apr 2021 18:12:48 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "(Sorry about being away from this for over a week.)\n\nOn Thu, Apr 22, 2021 at 5:39 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> While the approach in the previous email does pass the tests, I think\n> (but couldn't find a test case to prove) it does so coincidentally, not\n> because it is correct. If I make the test for \"detached exist\" use the\n> cached omits-partitions-partdesc, it does fail, because we had\n> previously cached one that was not yet omitting the partition. So what\n> I said earlier in the thread stands: the set of partitions that are\n> considered detached changes depending on what the active snapshot is,\n> and therefore we *must not* cache any such descriptor.\n>\n> So I backtracked to my previous proposal, which saves in relcache only\n> the partdesc that includes all partitions. If any partdesc is built\n> that omits partitions being detached, that one must be rebuilt afresh\n> each time. And to avoid potentially saving a lot of single-use\n> partdescs in CacheMemoryContext, in the attached second patch (which I\n> attach separately only to make it more obvious to review) I store such\n> partdescs in PortalContext.\n>\n> Barring objections, I will get this pushed early tomorrow.\n\nThanks for updating the patch. I have mostly cosmetic comments.\n\nReading through the latest one, seeing both include_detached and\nomit_detached being used in different parts of the code makes it a bit\nhard to keep in mind what a given code path is doing wrt detached\npartitions. 
How about making it all omit_detached?\n\n * Cope with partitions concurrently being detached. When we see a\n- * partition marked \"detach pending\", we only include it in the set of\n- * visible partitions if caller requested all detached partitions, or\n- * if its pg_inherits tuple's xmin is still visible to the active\n- * snapshot.\n+ * partition marked \"detach pending\", we omit it from the returned\n+ * descriptor if caller requested that and the tuple's xmin does not\n+ * appear in progress to the active snapshot.\n\nIt seems odd for a comment in find_inheritance_children() to talk\nabout the \"descriptor\". Maybe the earlier \"set of visible\npartitions\" wording was fine?\n\n- * The reason for this check is that we want to avoid seeing the\n+ * The reason for this hack is that we want to avoid seeing the\n * partition as alive in RI queries during REPEATABLE READ or\n<snip>\n+ * SERIALIZABLE transactions.\n\nThe comment doesn't quite make it clear why it is the RI query case\nthat necessitates this hack in the first case. Maybe the relation to\nwhat's going on with the partdesc\n\n+ if (likely(rel->rd_partdesc &&\n+ (!rel->rd_partdesc->detached_exist || include_detached)))\n+ return rel->rd_partdesc;\n\nI think it would help to have a comment about what's going here, in\naddition to the description you already wrote for\nPartitionDescData.detached_exist. Maybe something along the lines of:\n\n===\nUnder normal circumstances, we just return the partdesc that was\nalready built. However, if the partdesc was built at a time when\nthere existed detach-pending partition(s), which the current caller\nwould rather not see (omit_detached), then we build one afresh\nomitting any such partitions and return that one.\nRelationBuildPartitionDesc() makes sure that any such partdescs will\ndisappear when the query finishes.\n===\n\nThat's maybe a bit verbose but I am sure you will find a way to write\nit more succinctly.\n\nBTW, I do feel a bit alarmed by the potential performance impact of\nthis. If detached_exist of a cached partdesc is true, then RI queries\ninvoked during a bulk DML operation would have to rebuild one for\nevery tuple to be checked, right? I haven't tried an actual example\nyet though.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Apr 2021 18:56:25 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Apr-22, Amit Langote wrote:\n\n> On Thu, Apr 22, 2021 at 5:39 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> Reading through the latest one, seeing both include_detached and\n> omit_detached being used in different parts of the code makes it a bit\n> hard to keep in mind what a given code path is doing wrt detached\n> partitions. How about making it all omit_detached?\n\nYeah, I hesitated but wanted to do that too. Done.\n\n> * Cope with partitions concurrently being detached. 
When we see a\n> - * partition marked \"detach pending\", we only include it in the set of\n> - * visible partitions if caller requested all detached partitions, or\n> - * if its pg_inherits tuple's xmin is still visible to the active\n> - * snapshot.\n> + * partition marked \"detach pending\", we omit it from the returned\n> + * descriptor if caller requested that and the tuple's xmin does not\n> + * appear in progress to the active snapshot.\n> \n> It seems odd for a comment in find_inheritance_children() to talk\n> about the \"descriptor\". Maybe the earlier \"set of visible\n> partitions\" wording was fine?\n\nAbsolutely -- done that way.\n\n> - * The reason for this check is that we want to avoid seeing the\n> + * The reason for this hack is that we want to avoid seeing the\n> * partition as alive in RI queries during REPEATABLE READ or\n> <snip>\n> + * SERIALIZABLE transactions.\n> \n> The comment doesn't quite make it clear why it is the RI query case\n> that necessitates this hack in the first case.\n\nI added \"such queries use a different snapshot than the one used by\nregular (user) queries.\" I hope that's sufficient.\n\n> Maybe the relation to what's going on with the partdesc\n> \n> + if (likely(rel->rd_partdesc &&\n> + (!rel->rd_partdesc->detached_exist || include_detached)))\n> + return rel->rd_partdesc;\n> \n> I think it would help to have a comment about what's going here, in\n> addition to the description you already wrote for\n> PartitionDescData.detached_exist. Maybe something along the lines of:\n> \n> ===\n> Under normal circumstances, we just return the partdesc that was\n> already built. However, if the partdesc was built at a time when\n> there existed detach-pending partition(s), which the current caller\n> would rather not see (omit_detached), then we build one afresh\n> omitting any such partitions and return that one.\n> RelationBuildPartitionDesc() makes sure that any such partdescs will\n> disappear when the query finishes.\n> ===\n> \n> That's maybe a bit verbose but I am sure you will find a way to write\n> it more succinctly.\n\nI added some text in this spot, and also wrote some more in the comment\nabove RelationGetPartitionDesc and RelationBuildPartitionDesc.\n\n> BTW, I do feel a bit alarmed by the potential performance impact of\n> this. If detached_exist of a cached partdesc is true, then RI queries\n> invoked during a bulk DML operation would have to rebuild one for\n> every tuple to be checked, right? I haven't tried an actual example\n> yet though.\n\nYeah, I was scared about that too (which is why I insisted on trying to\nadd a cached copy of the partdesc omitting detached partitions). But\nAFAICS what happens is that the plan for the RI query gets cached after\na few tries; so we do build the partdesc a few times at first, but later\nwe use the cached plan and so we no longer use that one. So at least in\nthe normal cases this isn't a serious problem that I can see.\n\nI pushed it now. Thanks for your help,\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"How strange it is to find the words \"Perl\" and \"saner\" in such close\nproximity, with no apparent sense of irony. I doubt that Larry himself\ncould have managed it.\" (ncm, http://lwn.net/Articles/174769/)\n\n\n", "msg_date": "Thu, 22 Apr 2021 15:26:02 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. 
DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Fri, Apr 23, 2021 at 4:26 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Apr-22, Amit Langote wrote:\n> > - * The reason for this check is that we want to avoid seeing the\n> > + * The reason for this hack is that we want to avoid seeing the\n> > * partition as alive in RI queries during REPEATABLE READ or\n> > <snip>\n> > + * SERIALIZABLE transactions.\n> >\n> > The comment doesn't quite make it clear why it is the RI query case\n> > that necessitates this hack in the first case.\n>\n> I added \"such queries use a different snapshot than the one used by\n> regular (user) queries.\" I hope that's sufficient.\n\nYeah, that makes sense.\n\n> > Maybe the relation to what's going on with the partdesc\n\n(I had to leave my desk while in the middle of typing this, but I\nforget what I was going to add :()\n\n> > BTW, I do feel a bit alarmed by the potential performance impact of\n> > this. If detached_exist of a cached partdesc is true, then RI queries\n> > invoked during a bulk DML operation would have to rebuild one for\n> > every tuple to be checked, right? I haven't tried an actual example\n> > yet though.\n>\n> Yeah, I was scared about that too (which is why I insisted on trying to\n> add a cached copy of the partdesc omitting detached partitions). But\n> AFAICS what happens is that the plan for the RI query gets cached after\n> a few tries; so we do build the partdesc a few times at first, but later\n> we use the cached plan and so we no longer use that one. So at least in\n> the normal cases this isn't a serious problem that I can see.\n\nActually, ri_trigger.c (or really plancache.c) is not very good at\ncaching the plan when querying partitioned tables; it always chooses\nto replan because a generic plan, even with runtime pruning built into\nit, looks very expensive compared to a custom one. Now that's a\nproblem we will have to fix sooner than later, but until then we have\nto work around it.\n\nHere is an example that shows the problem:\n\ncreate unlogged table pk_parted (a int primary key) partition by range (a);\nselect 'create unlogged table pk_parted_' || i || ' partition of\npk_parted for values from (' || (i-1) * 1000 + 1 || ') to (' || i *\n1000 + 1 || ');' from generate_series(1, 1000) i;\n\\gexec\ncreate unlogged table fk (a int references pk_parted);\ninsert into pk_parted select generate_series(1, 10000);\nbegin;\nselect * from fk_parted where a = 1;\n\nIn another session:\n\nalter table pk_parted detach partition pk_parted_1000 concurrently;\n<blocks; cancel using ctrl-c>\n\nBack in the 1st session:\n\nend;\ninsert into fk select generate_series(1, 10000);\nINSERT 0 10000\nTime: 47400.792 ms (00:47.401)\n\nThe insert took unusually long, because the PartitionDesc for\npk_parted had to be built exactly 10000 times, because there's a\ndetach-pending partition lying around. 
There is also a danger of an\nOOM with such an insert because of leaking into PortalContext the\nmemory of every PartitionDesc thus built, especially with larger\ncounts of PK partitions and rows inserted into the FK table.\n\nAlso, I noticed that all queries on pk_parted, not just the RI\nqueries, have to build the PartitionDesc every time, so take that much\nlonger:\n\n-- note the planning time\nexplain analyze select * from pk_parted where a = 1;\n QUERY\nPLAN\n\n-------------------------------------------------------------------------------------------------------------------------------\n--------------\n Index Only Scan using pk_parted_1_pkey on pk_parted_1 pk_parted\n(cost=0.28..8.29 rows=1 width=4) (actual time=0.016..0.017 ro\nws=1 loops=1)\n Index Cond: (a = 1)\n Heap Fetches: 1\n Planning Time: 7.543 ms\n Execution Time: 0.044 ms\n(5 rows)\n\nFinalizing the detach makes the insert and the query finish in normal\ntime, because the PartitionDesc can be cached again:\n\nalter table pk_parted detach partition pk_parted_1000 finalize;\ninsert into fk select generate_series(1, 10000);\nINSERT 0 10000\nTime: 855.336 ms\n\nexplain analyze select * from pk_parted where a = 1;\n QUERY\nPLAN\n\n-------------------------------------------------------------------------------------------------------------------------------\n--------------\n Index Only Scan using pk_parted_1_pkey on pk_parted_1 pk_parted\n(cost=0.28..8.29 rows=1 width=4) (actual time=0.033..0.036 ro\nws=1 loops=1)\n Index Cond: (a = 1)\n Heap Fetches: 1\n Planning Time: 0.202 ms\n Execution Time: 0.075 ms\n(5 rows)\n\nI am afraid we may have to fix this in the code after all, because\nthere does not seem a good way to explain this away in the\ndocumentation. If I read correctly, you did try an approach of\ncaching the PartitionDesc that we currently don't, no?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Apr 2021 18:33:09 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Apr-23, Amit Langote wrote:\n\n> Back in the 1st session:\n> \n> end;\n> insert into fk select generate_series(1, 10000);\n> INSERT 0 10000\n> Time: 47400.792 ms (00:47.401)\n\nI guess I was wrong about that ... the example I tried didn't have 1000s\nof partitions, and I used debug print-outs to show when a new partdesc\nwas being rebuilt, and only six were occurring. I'm not sure why my\ncase behaves so differently from yours, but clearly from the timing this\nis not good.\n\n> I am afraid we may have to fix this in the code after all, because\n> there does not seem a good way to explain this away in the\n> documentation. \n\nYeah, looking at this case, I agree that it needs a fix of some kind.\n\n> If I read correctly, you did try an approach of caching the\n> PartitionDesc that we currently don't, no?\n\nI think the patch I posted was too simple. I think a real fix requires\nus to keep track of exactly in what way the partdesc is outdated, so\nthat we can compare to the current situation in deciding to use that\npartdesc or build a new one. 
For example, we could keep track of a list\nof OIDs of detached partitions (and it simplifies things a lot if allow\nonly a single partition in this situation, because it's either zero OIDs\nor one OID); or we can save the Xmin of the pg_inherits tuple for the\ndetached partition (and we can compare that Xmin to our current active\nsnapshot and decide whether to use that partdesc or not).\n\nI'll experiment with this a little more and propose a patch later today.\n\nI don't think it's too much of a problem to state that you need to\nfinalize one detach before you can do the next one. After all, with\nregular detach, you'd have the tables exclusively locked so you can't do\nthem in parallel anyway. (It also increases the chances that people\nwill finalize detach operations that went unnoticed.)\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n", "msg_date": "Fri, 23 Apr 2021 13:12:19 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Apr-23, Alvaro Herrera wrote:\n\n> I think the patch I posted was too simple. I think a real fix requires\n> us to keep track of exactly in what way the partdesc is outdated, so\n> that we can compare to the current situation in deciding to use that\n> partdesc or build a new one. For example, we could keep track of a list\n> of OIDs of detached partitions (and it simplifies things a lot if allow\n> only a single partition in this situation, because it's either zero OIDs\n> or one OID); or we can save the Xmin of the pg_inherits tuple for the\n> detached partition (and we can compare that Xmin to our current active\n> snapshot and decide whether to use that partdesc or not).\n> \n> I'll experiment with this a little more and propose a patch later today.\n\nThis (POC-quality) seems to do the trick.\n\n(I restored the API of find_inheritance_children, because it was getting\na little obnoxious. I haven't thought this through but I think we\nshould do something like it.)\n\n> I don't think it's too much of a problem to state that you need to\n> finalize one detach before you can do the next one. After all, with\n> regular detach, you'd have the tables exclusively locked so you can't do\n> them in parallel anyway. (It also increases the chances that people\n> will finalize detach operations that went unnoticed.)\n\nI haven't added a mechanism to verify this; but with asserts on, this\npatch will crash if you have more than one. I think the behavior is not\nnecessarily sane with asserts off, since you'll get an arbitrary\ndetach-Xmin assigned to the partdesc, depending on catalog scan order.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"El hombre nunca sabe de lo que es capaz hasta que lo intenta\" (C. Dickens)", "msg_date": "Fri, 23 Apr 2021 19:31:44 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Hi Alvaro,\n\nOn Sat, Apr 24, 2021 at 8:31 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Apr-23, Alvaro Herrera wrote:\n> > I think the patch I posted was too simple. I think a real fix requires\n> > us to keep track of exactly in what way the partdesc is outdated, so\n> > that we can compare to the current situation in deciding to use that\n> > partdesc or build a new one. 
For example, we could keep track of a list\n> > of OIDs of detached partitions (and it simplifies things a lot if allow\n> > only a single partition in this situation, because it's either zero OIDs\n> > or one OID); or we can save the Xmin of the pg_inherits tuple for the\n> > detached partition (and we can compare that Xmin to our current active\n> > snapshot and decide whether to use that partdesc or not).\n> >\n> > I'll experiment with this a little more and propose a patch later today.\n>\n> This (POC-quality) seems to do the trick.\n\nThanks for the patch.\n\n> (I restored the API of find_inheritance_children, because it was getting\n> a little obnoxious. I haven't thought this through but I think we\n> should do something like it.)\n\n+1.\n\n> > I don't think it's too much of a problem to state that you need to\n> > finalize one detach before you can do the next one. After all, with\n> > regular detach, you'd have the tables exclusively locked so you can't do\n> > them in parallel anyway. (It also increases the chances that people\n> > will finalize detach operations that went unnoticed.)\n\nThat sounds reasonable.\n\n> I haven't added a mechanism to verify this; but with asserts on, this\n> patch will crash if you have more than one. I think the behavior is not\n> necessarily sane with asserts off, since you'll get an arbitrary\n> detach-Xmin assigned to the partdesc, depending on catalog scan order.\n\nMaybe this is an ignorant question but is the plan to add an elog() in\nthis code path or a check (and an ereport()) somewhere in\nATExecDetachPartition() to prevent more than one partition ending up\nin detach-pending state?\n\nPlease allow me to study the patch a bit more closely and get back tomorrow.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Apr 2021 21:04:55 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Hello Amit,\n\nOn 2021-Apr-26, Amit Langote wrote:\n\n> On Sat, Apr 24, 2021 at 8:31 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > I haven't added a mechanism to verify this; but with asserts on, this\n> > patch will crash if you have more than one. I think the behavior is not\n> > necessarily sane with asserts off, since you'll get an arbitrary\n> > detach-Xmin assigned to the partdesc, depending on catalog scan order.\n> \n> Maybe this is an ignorant question but is the plan to add an elog() in\n> this code path or a check (and an ereport()) somewhere in\n> ATExecDetachPartition() to prevent more than one partition ending up\n> in detach-pending state?\n\nYeah, that's what I'm planning to do.\n\n> Please allow me to study the patch a bit more closely and get back tomorrow.\n\nSure, thanks!\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"But static content is just dynamic content that isn't moving!\"\n http://smylers.hates-software.com/2007/08/15/fe244d0c.html\n\n\n", "msg_date": "Mon, 26 Apr 2021 08:40:54 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. 
DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Apr-26, Alvaro Herrera wrote:\n\n> > Please allow me to study the patch a bit more closely and get back tomorrow.\n> \n> Sure, thanks!\n\nHere's a more polished version.\n\nAfter trying the version with the elog(ERROR) when two detached\npartitions are present, I decided against it; it is unhelpful because\nit doesn't let you build partition descriptors for anything. So I made\nit an elog(WARNING) (not an ereport, note), and keep the most recent\npg_inherits.xmin value. This is not great, but it lets you out of the\nsituation by finalizing one of the detaches.\n\nThe other check (at ALTER TABLE .. DETACH time) is an ereport(ERROR) and\nshould make the first one unreachable.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W", "msg_date": "Mon, 26 Apr 2021 15:44:46 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Sorry, I forgot to update some comments in that version. Fixed here.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W", "msg_date": "Mon, 26 Apr 2021 20:04:21 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Thanks for the updated patch. I've been reading it, but I noticed a\nbug in 8aba9322511f, which I thought you'd want to know to make a note\nof when committing this one.\n\nSo we decided in 8aba9322511f that it is okay to make the memory\ncontext in which a transient partdesc is allocated a child of\nPortalContext so that it disappears when the portal does. But\nPortalContext is apparently NULL when the planner runs, at least in\nthe \"simple\" query protocol, so any transient partdescs built by the\nplanner would effectively leak. Not good.\n\nWith this patch, partdesc_nodetached is no longer transient, so the\nproblem doesn't exist.\n\nI will write more about the updated patch in a bit...\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Apr 2021 16:34:30 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Tue, Apr 27, 2021 at 4:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Thanks for the updated patch. I've been reading it, but I noticed a\n> bug in 8aba9322511f, which I thought you'd want to know to make a note\n> of when committing this one.\n>\n> So we decided in 8aba9322511f that it is okay to make the memory\n> context in which a transient partdesc is allocated a child of\n> PortalContext so that it disappears when the portal does. But\n> PortalContext is apparently NULL when the planner runs, at least in\n> the \"simple\" query protocol, so any transient partdescs built by the\n> planner would effectively leak. Not good.\n>\n> With this patch, partdesc_nodetached is no longer transient, so the\n> problem doesn't exist.\n>\n> I will write more about the updated patch in a bit...\n\nThe first thing that struck me about partdesc_nodetached is that it's\nnot handled in RelationClearRelation(), where we (re)set a regular\npartdesc to NULL so that the next RelationGetPartitionDesc() has to\nbuild it from scratch. I think partdesc_nodetached and the xmin\nshould likewise be reset. 
Also in load_relcache_init_file(), although\nnothing serious there.\n\nThat said, I think I may have found a couple of practical problems\nwith partdesc_nodetached, or more precisely with having it\nside-by-side with regular partdesc. Maybe they can be fixed, so the\nproblems are not as such deal breakers for the patch's main idea. The\nproblems can be seen when different queries in a serializable\ntransaction have to use both the regular partdesc and\npartdesc_detached in a given relcache. For example, try the following\nafter first creating a situation where the table p has a\ndetach-pending partition p2 (for values in (2) and a live partition p1\n(for values in (1)).\n\nbegin isolation level serializable;\ninsert into p values (1);\nselect * from p where a = 1;\ninsert into p values (1);\n\nThe 1st insert succeeds but the 2nd fails with:\n\nERROR: no partition of relation \"p\" found for row\nDETAIL: Partition key of the failing row contains (a) = (1).\n\nI haven't analyzed this error very closely but there is another\nsituation which causes a crash due to what appears to be a conflict\nwith rd_pdcxt's design:\n\n-- run in a new session\nbegin isolation level serializable;\nselect * from p where a = 1;\ninsert into p values (1);\nselect * from p where a = 1;\n\nThe 2nd select crashes:\n\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nThe crash occurs because the planner gets handed a stale copy of\npartdesc_nodetached for the 2nd select. It gets stale, because the\ncontext it's allocated in gets made a child of rd_pdcxt, which is in\nturn assigned the context of the regular partdesc when it is built for\nthe insert query. Any child contexts of rd_pdcxt are deleted as soon\nas the Relation's refcount goes to zero, taking it with\npartdesc_nodetached. Note it is this code in\nRelationBuildPartitionDesc():\n\n /*\n * But first, a kluge: if there's an old rd_pdcxt, it contains an old\n * partition descriptor that may still be referenced somewhere.\n * Preserve it, while not leaking it, by reattaching it as a child\n * context of the new rd_pdcxt. Eventually it will get dropped by\n * either RelationClose or RelationClearRelation.\n */\n if (rel->rd_pdcxt != NULL)\n MemoryContextSetParent(rel->rd_pdcxt, new_pdcxt);\n rel->rd_pdcxt = new_pdcxt;\n\nI think we may need a separate context for partdesc_nodetached, likely\nwith the same kludges as rd_pdcxt. Maybe the first problem will go\naway with that as well.\n\nFew other minor things I noticed:\n\n+ * Store it into relcache. 
For snapshots built excluding detached\n+ * partitions, which we save separately, we also record the\n+ * pg_inherits.xmin of the detached partition that was omitted; this\n+ * informs a future potential user of such a cached snapshot.\n\nThe \"snapshot\" in the 1st and the last sentence should be \"partdesc\"?\n\n+ * We keep two partdescs in relcache: rd_partdesc_nodetached excludes\n+ * partitions marked concurrently being detached, while rd_partdesc includes\n+ * them.\n\nIMHO, describing rd_partdesc first in the sentence would be better.\nLike: rd_partdesc includes all partitions including any that are being\nconcurrently detached, while rd_partdesc_nodetached excludes them.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Apr 2021 23:06:09 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Apr-27, Amit Langote wrote:\n\n> On Tue, Apr 27, 2021 at 4:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n\n> I think we may need a separate context for partdesc_nodetached, likely\n> with the same kludges as rd_pdcxt. Maybe the first problem will go\n> away with that as well.\n\nOoh, seems I completely misunderstood what RelationClose was doing. I\nthought it was deleting the whole rd_pdcxt, *including* the \"current\"\npartdesc. But that's not at all what it does: it only deletes the\n*children* memcontexts, so the partdesc that is currently valid remains\nvalid. I agree that your proposed fix appears to be promising, in that\na separate \"context tree\" rd_pddcxt (?) can be used for this. I'll try\nit out now.\n\n> Few other minor things I noticed:\n> \n> + * Store it into relcache. For snapshots built excluding detached\n> + * partitions, which we save separately, we also record the\n> + * pg_inherits.xmin of the detached partition that was omitted; this\n> + * informs a future potential user of such a cached snapshot.\n> \n> The \"snapshot\" in the 1st and the last sentence should be \"partdesc\"?\n\nDoh, yeah.\n\n> + * We keep two partdescs in relcache: rd_partdesc_nodetached excludes\n> + * partitions marked concurrently being detached, while rd_partdesc includes\n> + * them.\n> \n> IMHO, describing rd_partdesc first in the sentence would be better.\n> Like: rd_partdesc includes all partitions including any that are being\n> concurrently detached, while rd_partdesc_nodetached excludes them.\n\nMakes sense.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Tue, 27 Apr 2021 11:47:33 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "This v3 handles things as you suggested and works correctly AFAICT. I'm\ngoing to add some more tests cases to verify the behavior in the\nscenarios you showed, and get them to run under cache-clobber options to\nmake sure it's good.\n\nThanks!\n\n-- \n�lvaro Herrera Valdivia, Chile", "msg_date": "Tue, 27 Apr 2021 12:32:26 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Apr-27, Alvaro Herrera wrote:\n\n> This v3 handles things as you suggested and works correctly AFAICT. 
I'm\n> going to add some more tests cases to verify the behavior in the\n> scenarios you showed, and get them to run under cache-clobber options to\n> make sure it's good.\n\nYep, it seems to work. Strangely, the new isolation case doesn't\nactually crash before the fix -- it merely throws a memory allocation\nerror.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"Linux transform� mi computadora, de una `m�quina para hacer cosas',\nen un aparato realmente entretenido, sobre el cual cada d�a aprendo\nalgo nuevo\" (Jaime Salinas)", "msg_date": "Tue, 27 Apr 2021 19:32:11 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Wed, Apr 28, 2021 at 8:32 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Apr-27, Alvaro Herrera wrote:\n>\n> > This v3 handles things as you suggested and works correctly AFAICT. I'm\n> > going to add some more tests cases to verify the behavior in the\n> > scenarios you showed, and get them to run under cache-clobber options to\n> > make sure it's good.\n>\n> Yep, it seems to work. Strangely, the new isolation case doesn't\n> actually crash before the fix -- it merely throws a memory allocation\n> error.\n\nThanks. Yeah, it does seem to work.\n\nI noticed that rd_partdesc_nodetached_xmin can sometimes end up with\nvalue 0. While you seem to be already aware of that, because otherwise\nyou wouldn't have added TransactionIdIsValid(...) in condition in\nRelationGetPartitionDesc(), the comments nearby don't mention why such\na thing might happen. Also, I guess it can't be helped that the\npartdesc_nodetached will have to be leaked when the xmin is 0, but\nthat shouldn't be as problematic as the case we discussed earlier.\n\n+ /*\n+ * But first, a kluge: if there's an old context for this type of\n+ * descriptor, it contains an old partition descriptor that may still be\n+ * referenced somewhere. Preserve it, while not leaking it, by\n+ * reattaching it as a child context of the new one. Eventually it will\n+ * get dropped by either RelationClose or RelationClearRelation.\n+ *\n+ * We keep the regular partdesc in rd_pdcxt, and the partdesc-excluding-\n+ * detached-partitions in rd_pddcxt.\n+ */\n+ context = is_omit ? &rel->rd_pddcxt : &rel->rd_pdcxt;\n+ if (*context != NULL)\n+ MemoryContextSetParent(*context, new_pdcxt);\n+ *context = new_pdcxt;\n\nWould it be a bit more readable to just duplicate this stanza in the\nblocks that assign to rd_partdesc_nodetached and rd_partdesc,\nrespectively? That's not much code to duplicate and it'd be easier to\nsee which context is for which partdesc.\n\n+ TransactionId rd_partdesc_nodetached_xmin; /* xmin for the above */\n\nCould you please expand this description a bit?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Apr 2021 23:21:04 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Thanks for re-reviewing! This one I hope is the last version.\n\nOn Wed, Apr 28, 2021, at 10:21 AM, Amit Langote wrote:\n> I noticed that rd_partdesc_nodetached_xmin can sometimes end up with\n> value 0. While you seem to be already aware of that, because otherwise\n> you wouldn't have added TransactionIdIsValid(...) in condition in\n> RelationGetPartitionDesc(), the comments nearby don't mention why such\n> a thing might happen. 
Also, I guess it can't be helped that the\n> partdesc_nodetached will have to be leaked when the xmin is 0, but\n> that shouldn't be as problematic as the case we discussed earlier.\n\nThe only case I am aware where that can happen is if the pg_inherits tuple is frozen. (That's exactly what the affected test case was testing, note the \"VACUUM FREEZE pg_inherits\" there). So that test case blew up immediately; but I think the real-world chances that people are going to be doing that are pretty low, so I'm not really concerned about the leak.\n\n> Would it be a bit more readable to just duplicate this stanza in the\n> blocks that assign to rd_partdesc_nodetached and rd_partdesc,\n> respectively? That's not much code to duplicate and it'd be easier to\n> see which context is for which partdesc.\n\nSure .. that's how I first wrote this code. We don't use that style much, so I'm OK with backing out of it.\n\n> + TransactionId rd_partdesc_nodetached_xmin; /* xmin for the above */\n> \n> Could you please expand this description a bit?\n\nDone.", "msg_date": "Wed, 28 Apr 2021 12:11:12 -0400", "msg_from": "=?UTF-8?Q?=C3=81lvaro_Herrera?= <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Pushed that now, with a one-line addition to the docs that only one\npartition can be marked detached.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"That sort of implies that there are Emacs keystrokes which aren't obscure.\nI've been using it daily for 2 years now and have yet to discover any key\nsequence which makes any sense.\" (Paul Thomas)\n\n\n", "msg_date": "Wed, 28 Apr 2021 15:49:47 -0400", "msg_from": "=?iso-8859-1?Q?=C1lvaro?= Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "(Thanks for committing the fix.)\n\nOn Thu, Apr 29, 2021 at 1:11 AM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On Wed, Apr 28, 2021, at 10:21 AM, Amit Langote wrote:\n> I noticed that rd_partdesc_nodetached_xmin can sometimes end up with\n> value 0. While you seem to be already aware of that, because otherwise\n> you wouldn't have added TransactionIdIsValid(...) in condition in\n> RelationGetPartitionDesc(), the comments nearby don't mention why such\n> a thing might happen. Also, I guess it can't be helped that the\n> partdesc_nodetached will have to be leaked when the xmin is 0, but\n> that shouldn't be as problematic as the case we discussed earlier.\n>\n>\n> The only case I am aware where that can happen is if the pg_inherits tuple is frozen. (That's exactly what the affected test case was testing, note the \"VACUUM FREEZE pg_inherits\" there). So that test case blew up immediately; but I think the real-world chances that people are going to be doing that are pretty low, so I'm not really concerned about the leak.\n\nThe case I was looking at is when a partition detach appears as\nin-progress to a serializable transaction. If the caller wants to\nomit detached partitions, such a partition ends up in\nrd_partdesc_nodetached, with the corresponding xmin being set to 0 due\nto the way find_inheritance_children_extended() sets *detached_xmin.\nThe next query in the transaction that wants to omit detached\npartitions, seeing rd_partdesc_nodetached_xmin being invalid, rebuilds\nthe partdesc, again including that partition because the snapshot\nwouldn't have changed, and so on until the transaction ends. 
Now,\nthis can perhaps be \"fixed\" by making\nfind_inheritance_children_extended() set the xmin outside the\nsnapshot-checking block, but maybe there's no need to address this on\npriority.\n\nRather, a point that bothers me a bit is that we're including a\ndetached partition in the partdesc labeled \"nodetached\" in this\nparticular case. Maybe we should avoid that by considering in this\nscenario that no detached partitions exist for this transactions and\nso initialize rd_partdesc, instead of rd_partdesc_nodetached. That\nwill let us avoid the situations where the xmin is left in invalid\nstate. Maybe like the attached (it also fixes a couple of\ntypos/thinkos in the previous commit).\n\nNote that we still end up in the same situation as before where each\nquery in the serializable transaction that sees the detach as\nin-progress to have to rebuild the partition descriptor omitting the\ndetached partitions, even when it's clear that the detach-in-progress\npartition will be included every time.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 30 Apr 2021 22:57:02 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "Hello,\n\nI found this in the documentation, section '5.11.3. Partitioning Using \nInheritance'[1]:\n\"Some operations require a stronger lock when using declarative \npartitioning than when using table inheritance. For example, removing a \npartition from a partitioned table requires taking an ACCESS EXCLUSIVE \nlock on the parent table, whereas a SHARE UPDATE EXCLUSIVE lock is \nenough in the case of regular inheritance.\"\n\nThis point is no longer valid with some restrictions. If the table has a \ndefault partition, then removing a partition still requires taking an \nACCESS EXCLUSIVE lock.\n\nMay be make sense to add some details about DETACH CONCURRENTLY to the \nsection '5.11.2.2. Partition Maintenance' and completely remove this point?\n\n1. \nhttps://www.postgresql.org/docs/devel/ddl-partitioning.html#DDL-PARTITIONING-USING-INHERITANCE\n\nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\nHello,\n\n I found this in the documentation, section '5.11.3. Partitioning\n Using Inheritance'[1]:\n \"Some operations require a stronger lock when using declarative\n partitioning than when using table inheritance. For example,\n removing a partition from a partitioned table requires taking an\n ACCESS EXCLUSIVE lock on the parent table, whereas a SHARE UPDATE\n EXCLUSIVE lock is enough in the case of regular inheritance.\"\n\n This point is no longer valid with\n some restrictions. If the table has a\n default partition, then removing a partition still requires taking\n an ACCESS EXCLUSIVE lock.\n\n May be make sense to add some details about DETACH CONCURRENTLY to\n the section '5.11.2.2. Partition Maintenance' and completely\n remove this point?\n\n 1.\nhttps://www.postgresql.org/docs/devel/ddl-partitioning.html#DDL-PARTITIONING-USING-INHERITANCE\n\nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 5 May 2021 13:58:59 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. 
DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Wed, May 5, 2021 at 7:59 PM Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:\n> I found this in the documentation, section '5.11.3. Partitioning Using Inheritance'[1]:\n> \"Some operations require a stronger lock when using declarative partitioning than when using table inheritance. For example, removing a partition from a partitioned table requires taking an ACCESS EXCLUSIVE lock on the parent table, whereas a SHARE UPDATE EXCLUSIVE lock is enough in the case of regular inheritance.\"\n>\n> This point is no longer valid with some restrictions. If the table has a default partition, then removing a partition still requires taking an ACCESS EXCLUSIVE lock.\n>\n> May be make sense to add some details about DETACH CONCURRENTLY to the section '5.11.2.2. Partition Maintenance' and completely remove this point?\n>\n> 1. https://www.postgresql.org/docs/devel/ddl-partitioning.html#DDL-PARTITIONING-USING-INHERITANCE\n\nThat makes sense, thanks for noticing.\n\nHow about the attached?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 6 May 2021 14:35:34 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 06.05.2021 08:35, Amit Langote wrote:\n> On Wed, May 5, 2021 at 7:59 PM Pavel Luzanov \n> <p.luzanov@postgrespro.ru> wrote:\n>> I found this in the documentation, section '5.11.3. Partitioning \n>> Using Inheritance'[1]: \"Some operations require a stronger lock when \n>> using declarative partitioning than when using table inheritance. For \n>> example, removing a partition from a partitioned table requires \n>> taking an ACCESS EXCLUSIVE lock on the parent table, whereas a SHARE \n>> UPDATE EXCLUSIVE lock is enough in the case of regular inheritance.\" \n>> This point is no longer valid with some restrictions. If the table \n>> has a default partition, then removing a partition still requires \n>> taking an ACCESS EXCLUSIVE lock. May be make sense to add some \n>> details about DETACH CONCURRENTLY to the section '5.11.2.2. Partition \n>> Maintenance' and completely remove this point? 1. \n>> https://www.postgresql.org/docs/devel/ddl-partitioning.html#DDL-PARTITIONING-USING-INHERITANCE\n> That makes sense, thanks for noticing. How about the attached? \n\nI like it.\nEspecially the link to the ALTER TABLE, this avoids duplication of all \nthe nuances of the the DETACH .. CONCURRENTLY.\n\n--\nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 6 May 2021 14:22:56 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Apr-30, Amit Langote wrote:\n\n> The case I was looking at is when a partition detach appears as\n> in-progress to a serializable transaction.\n\nYeah, I was exceedingly sloppy on my reasoning about this, and you're\nright that that's what actually happens rather than what I said.\n\n> If the caller wants to omit detached partitions, such a partition ends\n> up in rd_partdesc_nodetached, with the corresponding xmin being set to\n> 0 due to the way find_inheritance_children_extended() sets\n> *detached_xmin. 
The next query in the transaction that wants to omit\n> detached partitions, seeing rd_partdesc_nodetached_xmin being invalid,\n> rebuilds the partdesc, again including that partition because the\n> snapshot wouldn't have changed, and so on until the transaction ends.\n> Now, this can perhaps be \"fixed\" by making\n> find_inheritance_children_extended() set the xmin outside the\n> snapshot-checking block, but maybe there's no need to address this on\n> priority.\n\nHmm. See below.\n\n> Rather, a point that bothers me a bit is that we're including a\n> detached partition in the partdesc labeled \"nodetached\" in this\n> particular case. Maybe we should avoid that by considering in this\n> scenario that no detached partitions exist for this transactions and\n> so initialize rd_partdesc, instead of rd_partdesc_nodetached. That\n> will let us avoid the situations where the xmin is left in invalid\n> state. Maybe like the attached (it also fixes a couple of\n> typos/thinkos in the previous commit).\n\nMakes sense -- applied, thanks.\n\n> Note that we still end up in the same situation as before where each\n> query in the serializable transaction that sees the detach as\n> in-progress to have to rebuild the partition descriptor omitting the\n> detached partitions, even when it's clear that the detach-in-progress\n> partition will be included every time.\n\nYeah, you're right that there is a performance hole in the case where a\npartition pending detach exists and you're using repeatable read\ntransactions. I didn't see it as terribly critical since it's supposed\nto be very transient, but I may be wrong.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"Hay quien adquiere la mala costumbre de ser infeliz\" (M. A. Evans)\n\n\n", "msg_date": "Thu, 6 May 2021 13:13:47 -0400", "msg_from": "=?iso-8859-1?Q?=C1lvaro?= Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-May-05, Pavel Luzanov wrote:\n\n> Hello,\n> \n> I found this in the documentation, section '5.11.3. Partitioning Using\n> Inheritance'[1]:\n> \"Some operations require a stronger lock when using declarative partitioning\n> than when using table inheritance. For example, removing a partition from a\n> partitioned table requires taking an ACCESS EXCLUSIVE lock on the parent\n> table, whereas a SHARE UPDATE EXCLUSIVE lock is enough in the case of\n> regular inheritance.\"\n> \n> This point is no longer valid with some restrictions. If the table has a\n> default partition, then removing a partition still requires taking an ACCESS\n> EXCLUSIVE lock.\n\nHmm, are there any other operations for which the partitioning command\ntakes a strong lock than the legacy inheritance corresponding command?\nIf there aren't any, then it's okay to delete this paragraph as in the\nproposed patch. But if there are any, then I think we should change the\nexample to mention that other operation. I'm not sure what's a good way\nto verify that, though.\n\nAlso, it remains true that without CONCURRENTLY the DETACH operation\ntakes AEL. I'm not sure it's worth pointing this out in this paragraph.\n\n> May be make sense to add some details about DETACH CONCURRENTLY to the\n> section '5.11.2.2. 
Partition Maintenance' and completely remove this point?\n\nMaybe you're right, though.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Las mujeres son como hondas: mientras más resistencia tienen,\n más lejos puedes llegar con ellas\" (Jonas Nightingale, Leap of Faith)\n\n\n", "msg_date": "Thu, 6 May 2021 13:32:08 -0400", "msg_from": "=?iso-8859-1?Q?=C1lvaro?= Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-May-06, Amit Langote wrote:\n\n> That makes sense, thanks for noticing.\n> \n> How about the attached?\n\nI tweaked the linkage; as submitted, the text in the link contained what\nis in the <term> tag, so literally it had:\n\n ... see DETACH PARTITION partition_name [ CONCURRENTLY | FINALIZE ] for\n details ...\n\nwhich didn't look very nice. So I made it use <link> instead of xref\nand wrote the \"ALTER TABLE .. DETACH PARTITION\" text. I first tried to\nfix it by adding an \"xreflabel\" attrib, but I didn't like it because the\ntext was not set in fixed width font.\n\nI also tweaked the wording to match the surrounding text a bit better,\nat least IMO. Feel free to suggest improvements.\n\nThanks!\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"That sort of implies that there are Emacs keystrokes which aren't obscure.\nI've been using it daily for 2 years now and have yet to discover any key\nsequence which makes any sense.\" (Paul Thomas)\n\n\n", "msg_date": "Thu, 6 May 2021 16:48:29 -0400", "msg_from": "=?iso-8859-1?Q?=C1lvaro?= Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Fri, May 7, 2021 at 2:13 AM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Apr-30, Amit Langote wrote:\n>\n> > The case I was looking at is when a partition detach appears as\n> > in-progress to a serializable transaction.\n>\n> Yeah, I was exceedingly sloppy on my reasoning about this, and you're\n> right that that's what actually happens rather than what I said.\n>\n> > If the caller wants to omit detached partitions, such a partition ends\n> > up in rd_partdesc_nodetached, with the corresponding xmin being set to\n> > 0 due to the way find_inheritance_children_extended() sets\n> > *detached_xmin. The next query in the transaction that wants to omit\n> > detached partitions, seeing rd_partdesc_nodetached_xmin being invalid,\n> > rebuilds the partdesc, again including that partition because the\n> > snapshot wouldn't have changed, and so on until the transaction ends.\n> > Now, this can perhaps be \"fixed\" by making\n> > find_inheritance_children_extended() set the xmin outside the\n> > snapshot-checking block, but maybe there's no need to address this on\n> > priority.\n>\n> Hmm. See below.\n>\n> > Rather, a point that bothers me a bit is that we're including a\n> > detached partition in the partdesc labeled \"nodetached\" in this\n> > particular case. Maybe we should avoid that by considering in this\n> > scenario that no detached partitions exist for this transactions and\n> > so initialize rd_partdesc, instead of rd_partdesc_nodetached. That\n> > will let us avoid the situations where the xmin is left in invalid\n> > state. 
Maybe like the attached (it also fixes a couple of\n> > typos/thinkos in the previous commit).\n>\n> Makes sense -- applied, thanks.\n\nThank you.\n\n> > Note that we still end up in the same situation as before where each\n> > query in the serializable transaction that sees the detach as\n> > in-progress to have to rebuild the partition descriptor omitting the\n> > detached partitions, even when it's clear that the detach-in-progress\n> > partition will be included every time.\n>\n> Yeah, you're right that there is a performance hole in the case where a\n> partition pending detach exists and you're using repeatable read\n> transactions. I didn't see it as terribly critical since it's supposed\n> to be very transient, but I may be wrong.\n\nYeah, I'd hope so too. I think RR transactions would have to be\nconcurrent with an interrupted DETACH CONCURRENTLY to suffer the\nperformance hit and that does kind of make this a rarely occurring\ncase.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 13:24:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Wed, Apr 21, 2021 at 04:38:55PM -0400, Alvaro Herrera wrote:\n\n[fix to let CLOBBER_CACHE_ALWAYS pass]\n\n> Barring objections, I will get this pushed early tomorrow.\n\nprairiedog and wrasse failed a $SUBJECT test after this (commit 8aba932).\nAlso, some non-CLOBBER_CACHE_ALWAYS animals failed a test before the fix.\nThese IsolationCheck failures match detach-partition-concurrently[^\\n]*FAILED:\n\n sysname │ snapshot │ branch │ bfurl \n────────────┼─────────────────────┼────────┼────────────────────────────────────────────────────────────────────────────────────────────────\n hyrax │ 2021-03-27 07:29:34 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-03-27%2007%3A29%3A34\n topminnow │ 2021-03-28 20:37:38 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=topminnow&dt=2021-03-28%2020%3A37%3A38\n trilobite │ 2021-03-29 18:14:24 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2021-03-29%2018%3A14%3A24\n hyrax │ 2021-04-01 07:21:10 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-04-01%2007%3A21%3A10\n dragonet │ 2021-04-01 19:48:03 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dragonet&dt=2021-04-01%2019%3A48%3A03\n avocet │ 2021-04-05 15:45:56 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=avocet&dt=2021-04-05%2015%3A45%3A56\n hyrax │ 2021-04-06 07:15:16 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-04-06%2007%3A15%3A16\n hyrax │ 2021-04-11 07:25:50 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-04-11%2007%3A25%3A50\n hyrax │ 2021-04-20 18:25:37 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-04-20%2018%3A25%3A37\n wrasse │ 2021-04-21 10:38:32 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2021-04-21%2010%3A38%3A32\n prairiedog │ 2021-04-25 22:05:48 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2021-04-25%2022%3A05%3A48\n wrasse │ 2021-05-11 03:43:40 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2021-05-11%2003%3A43%3A40\n(12 rows)\n\n\n", "msg_date": "Mon, 24 May 2021 02:07:12 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: 
ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Mon, May 24, 2021 at 6:07 PM Noah Misch <noah@leadboat.com> wrote:\n> On Wed, Apr 21, 2021 at 04:38:55PM -0400, Alvaro Herrera wrote:\n>\n> [fix to let CLOBBER_CACHE_ALWAYS pass]\n>\n> > Barring objections, I will get this pushed early tomorrow.\n>\n> prairiedog and wrasse failed a $SUBJECT test after this (commit 8aba932).\n> Also, some non-CLOBBER_CACHE_ALWAYS animals failed a test before the fix.\n> These IsolationCheck failures match detach-partition-concurrently[^\\n]*FAILED:\n\nFWIW, all 4 detach-partition-concurrently suites passed for me on a\nbuild of the latest HEAD, with CPPFLAGS = -DRELCACHE_FORCE_RELEASE\n-DCATCACHE_FORCE_RELEASE -DCLOBBER_CACHE_ALWAYS -D_GNU_SOURCE\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 May 2021 20:39:16 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-May-24, Noah Misch wrote:\n\n> prairiedog and wrasse failed a $SUBJECT test after this (commit 8aba932).\n> Also, some non-CLOBBER_CACHE_ALWAYS animals failed a test before the fix.\n> These IsolationCheck failures match detach-partition-concurrently[^\\n]*FAILED:\n> \n> sysname │ snapshot │ branch │ bfurl \n> ────────────┼─────────────────────┼────────┼────────────────────────────────────────────────────────────────────────────────────────────────\n\nChecking this list, these three failures can be explained by the\ndetach-partition-concurrently-3 that was just patched.\n\n> wrasse │ 2021-04-21 10:38:32 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2021-04-21%2010%3A38%3A32\n> prairiedog │ 2021-04-25 22:05:48 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2021-04-25%2022%3A05%3A48\n> wrasse │ 2021-05-11 03:43:40 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2021-05-11%2003%3A43%3A40\n\nNext there's a bunch whose error message is the same that we had seen\nearlier in this thread; these animals are all CLOBBER_CACHE_ALWAYS:\n\n step s1insert: insert into d4_fk values (1);\n +ERROR: insert or update on table \"d4_fk\" violates foreign key constraint \"d4_fk_a_fkey\"\n\n> hyrax │ 2021-03-27 07:29:34 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-03-27%2007%3A29%3A34\n> trilobite │ 2021-03-29 18:14:24 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2021-03-29%2018%3A14%3A24\n> hyrax │ 2021-04-01 07:21:10 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-04-01%2007%3A21%3A10\n> avocet │ 2021-04-05 15:45:56 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=avocet&dt=2021-04-05%2015%3A45%3A56\n> hyrax │ 2021-04-06 07:15:16 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-04-06%2007%3A15%3A16\n> hyrax │ 2021-04-11 07:25:50 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-04-11%2007%3A25%3A50\n> hyrax │ 2021-04-20 18:25:37 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-04-20%2018%3A25%3A37\n\nThis is fine, because the fix commit 8aba932 is dated April 22 and these\nfailures all predate that.\n\n\nAnd finally there's these two:\n\n> topminnow │ 2021-03-28 20:37:38 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=topminnow&dt=2021-03-28%2020%3A37%3A38\n> dragonet │ 2021-04-01 19:48:03 │ HEAD │ 
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dragonet&dt=2021-04-01%2019%3A48%3A03\n\n(animals not CCA) which are exposing the same problem in\ndetach-partition-concurrently-4 that we just fixed in\ndetach-partition-concurrently-3, so we should apply the same fix: add a\nno-op step right after the cancel to prevent the error report from\nchanging. I'll go do that after grabbing some coffee.\n\nThanks for digging into the reports!\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Cada quien es cada cual y baja las escaleras como quiere\" (JMSerrat)\n\n\n", "msg_date": "Tue, 25 May 2021 15:31:58 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY" } ]
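For readers skimming the thread above, a compact illustration of the locking trade-off being discussed may help; the table and partition names are invented for the example and the statements are only a sketch of the behaviour described up-thread, not a copy of the committed documentation or isolation tests:

-- Illustrative schema (names are made up for this sketch)
CREATE TABLE meas (id int, tag int) PARTITION BY LIST (tag);
CREATE TABLE meas_1 PARTITION OF meas FOR VALUES IN (1);

-- Plain detach: takes ACCESS EXCLUSIVE on meas, blocking concurrent queries
-- ALTER TABLE meas DETACH PARTITION meas_1;

-- Concurrent detach: runs as two internal transactions under a weaker
-- SHARE UPDATE EXCLUSIVE lock, so concurrent SELECT/INSERT on meas keep
-- working; it cannot be used inside a transaction block or when meas has
-- a default partition
ALTER TABLE meas DETACH PARTITION meas_1 CONCURRENTLY;

-- If a concurrent detach is cancelled part-way, complete it later with
-- ALTER TABLE meas DETACH PARTITION meas_1 FINALIZE;

That last case is the "detach pending" state that the detach-partition-concurrently isolation tests mentioned above exercise.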
[ { "msg_contents": "Hi,\n\nAt the moment JIT compilation, if enabled, is applied to all\nexpressions in the entire plan. This can sometimes be a problem as\nsome expressions may be evaluated lots and warrant being JITted, but\nothers may only be evaluated just a few times, or even not at all.\n\nThis problem tends to become large when table partitioning is involved\nas the number of expressions in the plan grows with each partition\npresent in the plan. Some partitions may have many rows and it can be\nuseful to JIT expression, but others may have few rows or even no\nrows, in which case JIT is a waste of effort.\n\nI recall a few cases where people have complained that JIT was too\nslow. One case, in particular, is [1].\n\nIt would be nice if JIT was more granular about which parts of the\nplan it could be enabled for. So I went and did that in the attached.\n\nThe patch basically changes the plan-level consideration of if JIT\nshould be enabled and to what level into a per-plan-node\nconsideration. So, instead of considering JIT based on the overall\ntotal_cost of the plan, we just consider it on the plan-node's\ntotal_cost.\n\nI was just planing around with a test case of:\n\ncreate table listp(a int, b int) partition by list(a);\nselect 'create table listp'|| x || ' partition of listp for values\nin('||x||');' from generate_Series(1,1000) x;\n\\gexec\ninsert into listp select 1,x from generate_series(1,100000000) x;\nvacuum analyze listp;\n\nexplain (analyze, buffers) select count(*) from listp where b < 0;\n\nI get:\n\nmaster jit=on\n JIT:\n Functions: 3002\n Options: Inlining true, Optimization true, Expressions true, Deforming true\n Timing: Generation 141.587 ms, Inlining 11.760 ms, Optimization\n6518.664 ms, Emission 3152.266 ms, Total 9824.277 ms\n Execution Time: 12588.292 ms\n(2013 rows)\n\nmaster jit=off\n Execution Time: 3672.391 ms\n\npatched jit=on\n JIT:\n Functions: 5\n Options: Inlining true, Optimization true, Expressions true, Deforming true\n Timing: Generation 0.675 ms, Inlining 3.322 ms, Optimization 10.766\nms, Emission 5.892 ms, Total 20.655 ms\n Execution Time: 2754.160 ms\n\nThis explain format will need further work as each of those flags is\nnow per plan node rather than on the plan as a whole. I considered\njust making the true/false a counter to count the number of functions,\ne.g Inlined: 5 Optimized: 5 etc.\n\nI understand from [2] that Andres has WIP code to improve the\nperformance of JIT compilation. That's really great, but I also\nbelieve that no matter how fast we make it, it's going to be a waste\nof effort unless the expressions are evaluated enough times for the\ncheaper evaluations to pay off the compilation costs. It'll never be a\nwin when we evaluate certain expressions zero times. What Andres has\nshould allow us to drop the default jit costs.\n\nHappy to hear people's thoughts on this.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/7736C40E-6DB5-4E7A-8FE3-4B2AB8E22793@elevated-dev.com\n[2] https://www.postgresql.org/message-id/20200728212806.tu5ebmdbmfrvhoao@alap3.anarazel.de", "msg_date": "Tue, 4 Aug 2020 14:01:03 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Making JIT more granular" }, { "msg_contents": "(This is an old thread. See [1] if you're missing the original email.)\n\nOn Tue, 4 Aug 2020 at 14:01, David Rowley <dgrowleyml@gmail.com> wrote:\n> At the moment JIT compilation, if enabled, is applied to all\n> expressions in the entire plan. 
This can sometimes be a problem as\n> some expressions may be evaluated lots and warrant being JITted, but\n> others may only be evaluated just a few times, or even not at all.\n>\n> This problem tends to become large when table partitioning is involved\n> as the number of expressions in the plan grows with each partition\n> present in the plan. Some partitions may have many rows and it can be\n> useful to JIT expression, but others may have few rows or even no\n> rows, in which case JIT is a waste of effort.\n\nThis patch recently came up again in [2], where Magnus proposed we add\na new GUC [3] to warn users when JIT compilation takes longer than the\nspecified fraction of execution time. Over there I mentioned that I\nthink it might be better to have a go at making the JIT costing better\nso that it's more aligned to the amount of JITing work there is to do\nrather than the total cost of the plan without any consideration about\nhow much there is to JIT compile.\n\nIn [4], Andres reminded me that I need to account for the number of\ntimes a given plan is (re)scanned rather than just the total_cost of\nthe Plan node. There followed some discussion about how that might be\ndone.\n\nI've loosely implemented this in the attached patch. In order to get\nthe information about the expected number of \"loops\" a given Plan node\nwill be subject to, I've modified create_plan() so that it passes this\nvalue down recursively while creating the plan. Nodes such as Nested\nLoop multiply the \"est_calls\" by the number of outer rows. For nodes\nsuch as Material, I've made the estimated calls = 1.0. Memoize must\ntake into account the expected cache hit ratio, which I've had to\nrecord as part of MemoizePath so that create_plan knows about that.\nAltogether, this is a fair bit of churn for createplan.c, and it's\nstill only part of the way there. When planning subplans, we do\ncreate_plan() right away and since we plan subplans before the outer\nplans, we've no idea how many times the subplan will be rescanned. So\nto make this work fully I think we'd need to modify the planner so\nthat we delay the create_plan() for subplans until sometime after\nwe've planned the outer query.\n\nThe reason that I'm posting about this now is mostly because I did say\nI'd come back to this patch for v16 and I'm also feeling bad that I\n-1'd Magnus' patch, which likely resulted in making zero forward\nprogress in improving JIT and it's costing situation for v15.\n\nThe reason I've not completed this patch to fix the deficiencies\nregarding subplans is that that's quite a bit of work and I don't\nreally want to do that right now. We might decide that JIT costing\nshould work in a completely different way that does not require\nestimating how many times a plan node will be rescanned. I think\nthere's enough patch here to allow us to test this and then decide if\nit's any good or not.\n\nThere's also maybe some controversy in the patch. I ended up modifying\nEXPLAIN so that it shows loops=N as part of the estimated costs. I\nunderstand there's likely to be fallout from doing that as there are\nvarious tools around that this would likely break. 
I added that for a\ncouple of reasons; 1) I think it would be tricky to figure out why JIT\nwas or was not enabled without showing that in EXPLAIN, and; 2) I\nneeded to display it somewhere for my testing so I could figure out if\nI'd done something wrong when calculating the value during\ncreate_plan().\n\nThis basically looks like:\n\npostgres=# explain select * from h, h h1, h h2;\n QUERY PLAN\n--------------------------------------------------------------------------\n Nested Loop (cost=0.00..12512550.00 rows=1000000000 width=12)\n -> Nested Loop (cost=0.00..12532.50 rows=1000000 width=8)\n -> Seq Scan on h (cost=0.00..15.00 rows=1000 width=4)\n -> Materialize (cost=0.00..20.00 rows=1000 width=4 loops=1000)\n -> Seq Scan on h h1 (cost=0.00..15.00 rows=1000 width=4)\n -> Materialize (cost=0.00..20.00 rows=1000 width=4 loops=1000000)\n -> Seq Scan on h h2 (cost=0.00..15.00 rows=1000 width=4)\n(7 rows)\n\nJust the same as EXPLAIN ANALYZE, I've coded loops= to only show when\nthere's more than 1 loop. You can also see that the node below\nMaterialize is not expected to be scanned multiple times. Technically\nit could when a parameter changes, but right now it seems more trouble\nthan it's worth to go to the trouble of estimating that during\ncreate_plan(). There's also some variation from the expected loops and\nthe actual regarding parallel workers. In the estimate, this is just\nthe number of times an average worker is expected to invoke the plan,\nwhereas the actual \"loops\" is the sum of each worker's invocations.\n\nThe other slight controversy that I can see in the patch is\nrepurposing the JIT cost GUCs and giving them a completely different\nmeaning than they had previously. I've left them as-is for now as I\ndidn't think renaming GUCs would ease the pain that DBAs would have to\nendure as a result of this change.\n\nDoes anyone have any thoughts about this JIT costing? Is this an\nimprovement? Is there a better way?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvpQJqLrNOSi8P1JLM8YE2C+ksKFpSdZg=q6sTbtQ-v=aw@mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAApHDvrEoQ5p61NjDCKVgEWaH0qm1KprYw2-7m8-6ZGGJ8A2Dw%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/CABUevExR_9ZmkYj-aBvDreDKUinWLBBpORcmTbuPdNb5vGOLtA%40mail.gmail.com\n[4] https://www.postgresql.org/message-id/20220329231641.ai3qrzpdo2vqvwix%40alap3.anarazel.de", "msg_date": "Tue, 26 Apr 2022 17:24:02 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Making JIT more granular" }, { "msg_contents": "Hi David:\n\n\n> Does anyone have any thoughts about this JIT costing? Is this an\n> improvement? Is there a better way?\n>\n>\nI think this is an improvement. However I'm not sure how much improvement\n& effort we want pay for it. I just shared my thoughts to start this\ndiscussion.\n\n1. Ideally there is no GUC needed at all. For given a operation, like\nExpression execution, tuple deform, if we can know the extra cost\nof JIT in compile and the saved cost of JIT in execution, we\ncan choose JIT automatically. But as for now, it is hard to\nsay both. and we don't have a GUC to for DBA like jit_compile_cost\n/ jit_compile_tuple_deform_cost as well. Looks we have some\nlong way to go for this and cost is always a headache.\n\n2. You calculate the cost to compare with jit_above_cost as:\n\nplan->total_cost * plan->est_loops.\n\nAn alternative way might be to consider the rescan cost like\ncost_rescan. 
This should be closer for a final execution cost.\nHowever since it is hard to set a reasonable jit_above_cost,\nso I am feeling the current way is OK as well.\n\n\n3. At implementation level, I think it would be terrible to add\nanother parameter like est_loops to every create_xxx_plan\nin future, An alternative way may be:\n\ntypedef struct\n{\n int est_calls;\n} ExtPlanInfo;\n\nvoid\ncopyExtPlanInfo(Plan *targetPlan, ExtPlanInfo ext)\n{\ntargetPlan->est_calls = ext.est_calls;\n}\n\ncreate_xxx_plan(..., ExtPlanInfo extinfo)\n{\n copyExtPlanInfo(plan, extinfo);\n}\n\nBy this way, it would be easier to add another parameter\nlike est_calls easily. Not sure if this is over-engineered.\n\nI have gone through the patches for a while, General it looks\ngood to me. If we have finalized the design, I can do a final\ndouble check.\n\nAt last, I think the patched way should be better than\nthe current way.\n\n-- \nBest Regards\nAndy Fan\n\nHi David: \nDoes anyone have any thoughts about this JIT costing?  Is this an\nimprovement?  Is there a better way?I think this is an improvement.  However I'm not sure how much improvement& effort we want pay for it.  I just shared my thoughts to start this discussion. 1. Ideally there is no GUC needed at all.  For given a operation, likeExpression execution, tuple deform, if we can know the extra costof JIT in compile and the saved cost of JIT in execution, wecan choose JIT automatically. But as for now, it is hard tosay both. and we don't have a GUC to for DBA like jit_compile_cost/ jit_compile_tuple_deform_cost as well.  Looks we have somelong way to go for this and cost is always a headache.2. You calculate the cost to compare with jit_above_cost as:plan->total_cost * plan->est_loops.An alternative way might be to consider the rescan cost likecost_rescan. This should be closer for a final execution cost.However since it is hard to set a reasonable jit_above_cost,so I am feeling the current way is OK as well.3. At implementation level, I think it would be terrible to addanother parameter like est_loops to every create_xxx_planin future, An alternative way may be:typedef struct{   int  est_calls;} ExtPlanInfo;voidcopyExtPlanInfo(Plan *targetPlan,  ExtPlanInfo ext){\ttargetPlan->est_calls = ext.est_calls;}create_xxx_plan(...,  ExtPlanInfo extinfo){   copyExtPlanInfo(plan, extinfo);}By this way, it would be easier to add another parameterlike est_calls easily. Not sure if this is over-engineered.I have gone through the patches for a while, General it looks good to me. If we have finalized the design, I can do a finaldouble check. At last,  I think the patched way should be better than the current way. -- Best RegardsAndy Fan", "msg_date": "Sat, 14 May 2022 08:35:53 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making JIT more granular" }, { "msg_contents": ">\n>\n> 2. You calculate the cost to compare with jit_above_cost as:\n>\n> plan->total_cost * plan->est_loops.\n>\n> An alternative way might be to consider the rescan cost like\n> cost_rescan. This should be closer for a final execution cost.\n> However since it is hard to set a reasonable jit_above_cost,\n> so I am feeling the current way is OK as well.\n>\n\nThere are two observers after thinking more about this. a). due to the\nrescan cost reason, plan->total_cost * plan->est_loops might be greater\nthan the whole plan's total_cost. 
This may cause users to be confused why\nthis change can make the plan not JITed in the past, but JITed now.\n\nexplain analyze select * from t1, t2 where t1.a = t2.a;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..154.25 rows=100 width=16) (actual\ntime=0.036..2.618 rows=100 loops=1) Join Filter: (t1.a = t2.a) Rows\nRemoved by Join Filter: 9900 -> Seq Scan on t1 (cost=0.00..2.00\nrows=100 width=8) (actual time=0.015..0.031 rows=100 loops=1) ->\nMaterialize (cost=0.00..2.50 rows=100 width=8) (actual time=0.000..0.010\nrows=100 loops=100) -> Seq Scan on t2 (cost=0.00..2.00 rows=100\nwidth=8) (actual time=0.007..0.023 rows=100 loops=1) Planning Time: 0.299\nms Execution Time: 2.694 ms (8 rows)\n\nThe overall plan's total_cost is 154.25, but the Materialize's JIT cost is\n2.5 * 100 = 250.\n\nb). Since the total_cost for a plan counts all the costs for its children,\nso if one\nchild plan is JITed, I think all its parents would JITed. Is this by\ndesign?\n\n QUERY PLAN\n----------------------------\n Sort\n Sort Key: (count(*))\n -> HashAggregate\n Group Key: a\n -> Seq Scan on t1\n\n(If Seq Scan is JITed, both HashAggregate & Sort will be JITed.)\n\n-- \nBest Regards\nAndy Fan\n\n2. You calculate the cost to compare with jit_above_cost as:plan->total_cost * plan->est_loops.An alternative way might be to consider the rescan cost likecost_rescan. This should be closer for a final execution cost.However since it is hard to set a reasonable jit_above_cost,so I am feeling the current way is OK as well.There are two observers after thinking more about this.  a).  due to the rescan cost reason,  plan->total_cost * plan->est_loops might be greaterthan the whole plan's total_cost.  This may cause users to be confused why this change can make the plan not JITed in the past,  but JITed now. explain analyze select * from t1, t2 where t1.a  = t2.a;\n                                                 QUERY PLAN                                                 \n------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=0.00..154.25 rows=100 width=16) (actual time=0.036..2.618 rows=100 loops=1)\n   Join Filter: (t1.a = t2.a)\n   Rows Removed by Join Filter: 9900\n   ->  Seq Scan on t1  (cost=0.00..2.00 rows=100 width=8) (actual time=0.015..0.031 rows=100 loops=1)\n   ->  Materialize  (cost=0.00..2.50 rows=100 width=8) (actual time=0.000..0.010 rows=100 loops=100)\n         ->  Seq Scan on t2  (cost=0.00..2.00 rows=100 width=8) (actual time=0.007..0.023 rows=100 loops=1)\n Planning Time: 0.299 ms\n Execution Time: 2.694 ms\n(8 rows)The overall plan's total_cost is 154.25, but the Materialize's JIT cost is 2.5 * 100 = 250. b). Since the total_cost for a plan counts all the costs for its children, so if onechild plan is JITed, I think all its parents would JITed. Is this by design?          QUERY PLAN---------------------------- Sort   Sort Key: (count(*))   ->  HashAggregate         Group Key: a         ->  Seq Scan on t1(If Seq Scan is JITed, both HashAggregate & Sort will be JITed.) -- Best RegardsAndy Fan", "msg_date": "Mon, 16 May 2022 08:06:07 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making JIT more granular" } ]
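To make the costing discussion above concrete, the effect being targeted can be observed on an unpatched server with the existing GUCs; the partition count, row counts and thresholds below are arbitrary for the sketch, and the per-node decision itself exists only in the proposed patch, so this only demonstrates the plan-level behaviour the patch aims to refine:

create table listp (a int, b int) partition by list (a);
select 'create table listp'||x||' partition of listp for values in ('||x||');'
from generate_series(1,100) x;
\gexec
insert into listp select 1, x from generate_series(1,1000000) x;
vacuum analyze listp;

-- Stock behaviour: one JIT decision for the whole plan, based on its total
-- cost versus jit_above_cost.  The thresholds are lowered here so this small
-- example trips them; with the defaults you need a much larger table, as in
-- the original message.  Expressions then get compiled for every partition
-- scan, even though 99 of the partitions are empty.
set jit = on;
set jit_above_cost = 10000;
set jit_optimize_above_cost = 10000;
set jit_inline_above_cost = 10000;
explain (analyze) select count(*) from listp where b < 0;

-- Baseline without compilation, for comparison.
set jit = off;
explain (analyze) select count(*) from listp where b < 0;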
[ { "msg_contents": "Hello.\n\nWhile poking at ssl code, I noticed that 002_scram.pl fails if\n~/.postgresql/root.crt exists. This has been fixed once but\nd6e612f837 reintroduced one. The attached fixes that. Applies to\n14devel and 13.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 04 Aug 2020 12:00:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "SSL TAP test fails due to default client certs." }, { "msg_contents": "On Tue, Aug 04, 2020 at 12:00:33PM +0900, Kyotaro Horiguchi wrote:\n> While poking at ssl code, I noticed that 002_scram.pl fails if\n> ~/.postgresql/root.crt exists. This has been fixed once but\n> d6e612f837 reintroduced one. The attached fixes that. Applies to\n> 14devel and 13.\n\nIndeed, applied. I can reproduce the failure easily, and bdd6e9b is\nthe previous fix you are mentioning. It is the only test where we\ndon't rely on an $common_connstr that sets sslcert and sslrootcert to\nan invalid value, so the rest looks fine.\n--\nMichael", "msg_date": "Tue, 4 Aug 2020 14:43:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SSL TAP test fails due to default client certs." }, { "msg_contents": "At Tue, 4 Aug 2020 14:43:58 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Aug 04, 2020 at 12:00:33PM +0900, Kyotaro Horiguchi wrote:\n> > While poking at ssl code, I noticed that 002_scram.pl fails if\n> > ~/.postgresql/root.crt exists. This has been fixed once but\n> > d6e612f837 reintroduced one. The attached fixes that. Applies to\n> > 14devel and 13.\n> \n> Indeed, applied. I can reproduce the failure easily, and bdd6e9b is\n> the previous fix you are mentioning. It is the only test where we\n> don't rely on an $common_connstr that sets sslcert and sslrootcert to\n> an invalid value, so the rest looks fine.\n\nAgreed. Thanks for committing!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 04 Aug 2020 17:41:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: SSL TAP test fails due to default client certs." } ]
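A closing note for anyone who trips over the same failure while working on the SSL suite: the underlying cause is that libpq quietly falls back to ~/.postgresql/root.crt (and postgresql.crt/postgresql.key) whenever sslrootcert and sslcert are not spelled out, so a developer's personal certificates leak into the test run. The committed fix is the attached patch referenced above; conceptually it just makes 002_scram.pl pin those settings the way the other SSL tests already do, roughly along these lines (the exact connection-string contents are a guess at the shape of the fix, not a copy of it):

# Keep libpq from picking up certificates under ~/.postgresql/
$common_connstr =
  "user=ssltestuser dbname=trustdb sslcert=invalid sslrootcert=invalid "
  . "sslmode=require hostaddr=$SERVERHOSTADDR";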