[
{
"msg_contents": "Hi\n\nMain goal of this patch is to avoid repeated calls of immutable/stable\nfunctions.\nThis patch is against version 10.10.\nI guess same logic could be implemented up till version 12.",
"msg_date": "Sun, 3 Nov 2019 21:56:31 +0100",
"msg_from": "Andrzej Barszcz <abusinf@gmail.com>",
"msg_from_op": true,
"msg_subject": "function calls optimization"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-03 21:56:31 +0100, Andrzej Barszcz wrote:\n> Main goal of this patch is to avoid repeated calls of immutable/stable\n> functions.\n> This patch is against version 10.10.\n> I guess same logic could be implemented up till version 12.\n\nIf you want actual development feedback, you're more likely to get that\nwhen posting patches against the master branch.\n\n\n> --- src/include/nodes/execnodes.h\t2019-08-05 23:16:54.000000000 +0200\n> +++ src/include/nodes/execnodes.h\t2019-11-03 20:05:34.338305825 +0100\n> @@ -882,6 +883,39 @@ typedef struct PlanState\n> \tTupleTableSlot *ps_ResultTupleSlot; /* slot for my result tuples */\n> \tExprContext *ps_ExprContext;\t/* node's expression-evaluation context */\n> \tProjectionInfo *ps_ProjInfo;\t/* info for doing tuple projection */\n> +#ifdef OPTFUNCALLS\n> +\t/* was_called - list of ExprEvalStep* or FuncExpr* depending on execution stage\n> +\t * \n> +\t * Stage I. ExecInitExprRec()\n> +\t *\tList gathers all not volatile, not set returning, not window FuncExpr*,\n> +\t *\tequal nodes occupy one position in the list. Position in this list ( counting from 1 )\n> +\t *\tand planstate are remembered in actual ExprEvalStep*\n> +\t *\n> +\t * \tFor query: select f(n),f(n) from t - was_called->length will be 1 and ptr_value \n> +\t *\t\t will be FuncExpr* node of f(n)\n> +\t *\n> +\t * \tFor query: select f(n),g(n),f(n) from t - list->length == 2\n> +\t *\n> +\t * Stage II. ExecProcnode()\n> +\t *\tFor every planstate->was_called list changes its interpretation - from now on\n> +\t *\tit is a list of ExprEvalStep* . Before executing real execProcnode\n> +\t *\tevery element of this list ( ptr_value ) is set to NULL. We don't know which\n> +\t *\tfunction will be called first\n> +\t *\n> +\t * Stage III. ExecInterpExpr() case EEOP_FUNCEXPR\n> +\t *\tExprEvalStep.position > 0 means that in planstate->was_called could be ExprEvalStep*\n> +\t *\twhich was done yet or NULL.\n> +\t *\n> +\t *\tNULL means that eval step is entered first time and:\n> +\t *\t\t1. real function must be called\n> +\t *\t\t2. ExprEvalStep has to be remembered in planstate->was_called at position\n> +\t *\t\tstep->position - 1\n> +\t *\n> +\t *\tNOT NULL means that in planstate->was_called is ExprEvalStep* with ready result, so\n> +\t *\tthere is no need to call function\n> +\t */\n> +\tList *was_called;\n> +#endif\n> } PlanState;\n\nI don't think the above describes a way to do this that is\nacceptable. For one, I think this needs to happen at plan time, not for\nevery single execution of the statement. Nor do I think is it ok to make\nExecProcNode() etc slower for this feature - that's a very crucial\nroutine (similar with EEOP_FUNCEXPR checking the cache, but there we\ncould just have a different step type doing so).\n\nHave you looked any of the previous work on such caching? I strongly\nsuggest doing so if you're interested in getting such a feature into\npostgres. E.g. there's a fair bit of relevant discussion in\nhttps://www.postgresql.org/message-id/da87bb6a014e029176a04f6e50033cfb%40postgrespro.ru\ne.g. between Tom and me further down.\n\n\n> /* ----------------\n> --- src/include/executor/execExpr.h\t2019-08-05 23:16:54.000000000 +0200\n> +++ src/include/executor/execExpr.h\t2019-11-03 20:04:03.739025142 +0100\n> @@ -561,6 +561,10 @@ typedef struct ExprEvalStep\n> \t\t\tAlternativeSubPlanState *asstate;\n> \t\t}\t\t\talternative_subplan;\n> \t}\t\t\td;\n> +#ifdef OPTFUNCALLS\n> +\tPlanState *planstate;\t/* parent PlanState for this expression */\n> +\tint position;\t\t/* position in planstate->was_called counted from 1 */\n> +#endif\n> } ExprEvalStep;\n\nThis is not ok. We cannot just make every single ExprEvalStep larger for\nthis feature. Nor is clear why this is even needed - every ExprEvalStep\nis associated with an ExprState, and ExprState already has a reference\nto the parent planstate.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 6 Nov 2019 15:26:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: function calls optimization"
}
]
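The first thread's proposal — caching results of non-volatile functions so that `select f(n),f(n) from t` invokes `f` only once per row — can be sketched outside the executor. Below is a hedged Python analogue (illustrative names only, not PostgreSQL code): a per-row memo keyed by (function, argument) is reset for each row, mirroring how the patch clears `was_called` before each `ExecProcNode()` call.

```python
# Conceptual analogue of the proposed optimization: within a single row,
# repeated calls to the same stable function with the same argument reuse
# one result. Illustration of the idea only, not PostgreSQL executor code.

call_count = 0

def stable_f(n):
    """Stands in for a stable SQL function: same input, same output per row."""
    global call_count
    call_count += 1          # count how often the "real" function runs
    return n * n

def evaluate_row(targets, row):
    """Evaluate a list of (func, column) targets, memoizing within the row."""
    memo = {}                # reset per row, like clearing was_called
    results = []
    for func, column in targets:
        key = (func, row[column])
        if key not in memo:              # first occurrence: call the function
            memo[key] = func(row[column])
        results.append(memo[key])        # repeats reuse the cached result
    return results

# SELECT f(n), f(n) FROM t  -- f runs once per row, not twice
targets = [(stable_f, "n"), (stable_f, "n")]
rows = [{"n": 2}, {"n": 3}]
out = [evaluate_row(targets, r) for r in rows]
print(out)          # [[4, 4], [9, 9]]
print(call_count)   # 2: one real invocation per row
```

As Andres points out in the thread, the real feature would need to identify the shared calls at plan time and use a dedicated expression step, rather than paying this bookkeeping cost on every execution.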
[
{
"msg_contents": "Buildfarm member drongo has been failing the pg_ctl regression test\npretty often. I happened to look closer at what's happening, and\nit's this:\n\ncould not read \"C:/prog/bf/root/HEAD/pgsql.build/src/bin/pg_ctl/tmp_check/t_004_logrotate_primary_data/pgdata/current_logfiles\": Permission denied at C:/prog/bf/root/HEAD/pgsql.build/src/test/perl/TestLib.pm line 397.\n\nThat is, TestLib::slurp_file is failing to read a file. Almost\ncertainly, \"permission denied\" doesn't really mean a permissions\nproblem, but failure to specify the file-opening flags needed to\nallow concurrent access on Windows. We fixed this in pg_ctl\nitself in commit 0ba06e0bf ... but we didn't fix the TAP\ninfrastructure. Is there an easy way to get Perl on board\nwith that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 03 Nov 2019 22:53:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "On Sun, Nov 03, 2019 at 10:53:00PM -0500, Tom Lane wrote:\n> That is, TestLib::slurp_file is failing to read a file. Almost\n> certainly, \"permission denied\" doesn't really mean a permissions\n> problem, but failure to specify the file-opening flags needed to\n> allow concurrent access on Windows. We fixed this in pg_ctl\n> itself in commit 0ba06e0bf ... but we didn't fix the TAP\n> infrastructure. Is there an easy way to get Perl on board\n> with that?\n\nIf we were to use Win32API::File so as the file is opened in shared\nmode, we would do the same as what our frontend/backend code does (see\n$uShare):\nhttps://metacpan.org/pod/Win32API::File\n--\nMichael",
"msg_date": "Tue, 5 Nov 2019 12:41:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "\nOn 11/4/19 10:41 PM, Michael Paquier wrote:\n> On Sun, Nov 03, 2019 at 10:53:00PM -0500, Tom Lane wrote:\n>> That is, TestLib::slurp_file is failing to read a file. Almost\n>> certainly, \"permission denied\" doesn't really mean a permissions\n>> problem, but failure to specify the file-opening flags needed to\n>> allow concurrent access on Windows. We fixed this in pg_ctl\n>> itself in commit 0ba06e0bf ... but we didn't fix the TAP\n>> infrastructure. Is there an easy way to get Perl on board\n>> with that?\n> If we were to use Win32API::File so as the file is opened in shared\n> mode, we would do the same as what our frontend/backend code does (see\n> $uShare):\n> https://metacpan.org/pod/Win32API::File\n\n\n\nHmm. What would that look like? (My eyes glazed over a bit reading that\npage - probably ENOTENOUGHCAFFEINE)\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 6 Nov 2019 08:40:17 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "On 2019-Nov-05, Michael Paquier wrote:\n\n> On Sun, Nov 03, 2019 at 10:53:00PM -0500, Tom Lane wrote:\n> > That is, TestLib::slurp_file is failing to read a file. Almost\n> > certainly, \"permission denied\" doesn't really mean a permissions\n> > problem, but failure to specify the file-opening flags needed to\n> > allow concurrent access on Windows. We fixed this in pg_ctl\n> > itself in commit 0ba06e0bf ... but we didn't fix the TAP\n> > infrastructure. Is there an easy way to get Perl on board\n> > with that?\n> \n> If we were to use Win32API::File so as the file is opened in shared\n> mode, we would do the same as what our frontend/backend code does (see\n> $uShare):\n> https://metacpan.org/pod/Win32API::File\n\nCompatibility-wise, that should be okay, since that module appears to\nhave been distributed with Perl core early on.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 6 Nov 2019 12:38:05 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "On Wed, Nov 6, 2019 at 4:38 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Nov-05, Michael Paquier wrote:\n>\n> > On Sun, Nov 03, 2019 at 10:53:00PM -0500, Tom Lane wrote:\n> > > That is, TestLib::slurp_file is failing to read a file. Almost\n> > > certainly, \"permission denied\" doesn't really mean a permissions\n> > > problem, but failure to specify the file-opening flags needed to\n> > > allow concurrent access on Windows. We fixed this in pg_ctl\n> > > itself in commit 0ba06e0bf ... but we didn't fix the TAP\n> > > infrastructure. Is there an easy way to get Perl on board\n> > > with that?\n> >\n> > If we were to use Win32API::File so as the file is opened in shared\n> > mode, we would do the same as what our frontend/backend code does (see\n> > $uShare):\n> > https://metacpan.org/pod/Win32API::File\n>\n> Compatibility-wise, that should be okay, since that module appears to\n> have been distributed with Perl core early on.\n>\n>\nPlease find attached a patch that adds the FILE_SHARE options to\nTestLib::slurp_file using Win32API::File.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Wed, 6 Nov 2019 21:41:56 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> Please find attached a patch that adds the FILE_SHARE options to\n> TestLib::slurp_file using Win32API::File.\n\nIck. Are we going to need Windows-droppings like this all over the\nTAP tests? I'm not sure I believe that slurp_file is the only place\nwith a problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Nov 2019 16:43:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "On Thu, Nov 7, 2019 at 10:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> =?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> > Please find attached a patch that adds the FILE_SHARE options to\n> > TestLib::slurp_file using Win32API::File.\n>\n> Ick. Are we going to need Windows-droppings like this all over the\n> TAP tests? I'm not sure I believe that slurp_file is the only place\n> with a problem.\n\nNot a Windows or Perl person, but I see that you can redefine core\nfunctions with *CORE::GLOBAL::open = <replacement/wrapper>, if you\nwanted to make a version of open() that does that FILE_SHARE_READ\ndance. Alternatively we could of course have our own xxx_open()\nfunction and use that everywhere, but that might be more distracting.\n\nI'm a bit surprised that there doesn't seem to be a global switch\nthing you can set somewhere to make it do that anyway. Doesn't\neveryone eventually figure out that all files really want to be\nshared?\n\n\n",
"msg_date": "Thu, 7 Nov 2019 11:13:32 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "\nOn 11/6/19 4:43 PM, Tom Lane wrote:\n> =?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n>> Please find attached a patch that adds the FILE_SHARE options to\n>> TestLib::slurp_file using Win32API::File.\n> Ick. Are we going to need Windows-droppings like this all over the\n> TAP tests? I'm not sure I believe that slurp_file is the only place\n> with a problem.\n>\n> \t\t\t\n\n\n\nIn any case, the patch will fail as written - on the Msys 1 system I\njust tested Win32::API is not available to the DTK perl we need to use\nto run TAP tests.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 6 Nov 2019 19:57:00 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "On Thu, Nov 07, 2019 at 11:13:32AM +1300, Thomas Munro wrote:\n> Not a Windows or Perl person, but I see that you can redefine core\n> functions with *CORE::GLOBAL::open = <replacement/wrapper>, if you\n> wanted to make a version of open() that does that FILE_SHARE_READ\n> dance.\n\nFWIW, I would have gone with a solution like that, say within\nTestLib.pm's INIT. This ensures that any new future tests don't fall\ninto that trap again.\n\n> Alternatively we could of course have our own xxx_open() function\n> and use that everywhere, but that might be more distracting.\n\nThat does not sound really appealing.\n\n> I'm a bit surprised that there doesn't seem to be a global switch\n> thing you can set somewhere to make it do that anyway. Doesn't\n> everyone eventually figure out that all files really want to be\n> shared?\n\nI guess it depends on your requirements. Looking around I can see\nsome mention about flock() but it does not solve the problem at the\ntime the fd is opened. If this does not exist, then it seems to me\nthat we have very special requirements for our perl code, and that\nthese are not popular.\n--\nMichael",
"msg_date": "Thu, 7 Nov 2019 12:31:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "On Thu, Nov 7, 2019 at 1:57 AM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n>\n> In any case, the patch will fail as written - on the Msys 1 system I\n> just tested Win32::API is not available to the DTK perl we need to use\n> to run TAP tests.\n>\n>\nMay I ask which version of Msys is that system using? In a recent\ninstallation (post 1.0.11) I see that those modules are available.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Thu, 7 Nov 2019 09:42:19 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "\nOn 11/7/19 3:42 AM, Juan José Santamaría Flecha wrote:\n>\n> On Thu, Nov 7, 2019 at 1:57 AM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com\n> <mailto:andrew.dunstan@2ndquadrant.com>> wrote:\n>\n>\n> In any case, the patch will fail as written - on the Msys 1 system I\n> just tested Win32::API is not available to the DTK perl we need to use\n> to run TAP tests.\n>\n>\n> May I ask which version of Msys is that system using? In a recent\n> installation (post 1.0.11) I see that those modules are available.\n>\n>\n\nNot sure how I discover that. The path is c:\\mingw\\msys\\1.0, looks like\nit was installed in 2013 some time. perl reports version 5.8.8 built for\nmsys-int64\n\nThis is the machine that runs jacana on the buildfarm.\n\nThe test I'm running is:\n\n perl -MWin32::API -e ';'\n\nAnd perl reports it can't find the module.\n\nHowever, the perl on my pretty recent Msys2 system (the one that runs\nfairywren) reports the same problem. That's 5.30.0 built for\nx86_64-msys-thread-multi.\n\nSo my question is which perl you're testing with? If it's a Windows\nnative perl version such as ActivePerl or StrawberryPerl that won't do -\nthe buildfarm needs to use msys-perl to run prove.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 7 Nov 2019 08:42:22 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "On 2019-Nov-07, Andrew Dunstan wrote:\n\n> The test I'm running is:\n> \n>     perl -MWin32::API -e ';'\n> \n> And perl reports it can't find the module.\n\nThat's a curious test to try, given that the module is called\nWin32API::File.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 7 Nov 2019 10:53:43 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "\nOn 11/7/19 8:53 AM, Alvaro Herrera wrote:\n> On 2019-Nov-07, Andrew Dunstan wrote:\n>\n>> The test I'm running is:\n>>\n>> perl -MWin32::API -e ';'\n>>\n>> And perl reports it can't find the module.\n> That's a curious test to try, given that the module is called\n> Win32API::File.\n>\n\n\nThe patch says:\n\n\n+ require Win32::API;\n+ Win32::API->import;\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 7 Nov 2019 09:04:21 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "On 2019-Nov-07, Andrew Dunstan wrote:\n\n> On 11/7/19 8:53 AM, Alvaro Herrera wrote:\n\n> > That's a curious test to try, given that the module is called\n> > Win32API::File.\n> \n> The patch says:\n> \n> +        require Win32::API;\n> +        Win32::API->import;\n\nOh, you're right, it does. I wonder why, though:\n\n$ corelist -a Win32::API\n\nData for 2018-11-29\nWin32::API was not in CORE (or so I think)\n\n$ corelist -a Win32API::File\n\nData for 2018-11-29\nWin32API::File was first released with perl v5.8.9\n v5.8.9 0.1001_01 \n v5.9.4 0.1001 \n v5.9.5 0.1001_01 \n v5.10.0 0.1001_01 \n ...\n\n\nAccording to the Win32API::File manual, you can request a file to be\nshared by passing the string \"r\" to svShare to method createFile().\nSo do we really need all those extremely ugly \"droppings\" Juanjo added\nto the patch?\n\n(BTW the Win32API::File manual also says this:\n\"The default for $svShare is \"rw\" which provides the same sharing as\nusing regular perl open().\"\nI wonder why \"the regular perl open()\" is not doing the sharing thing\ncorrectly ... has the problem has been misdiagnosed?).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 7 Nov 2019 11:12:29 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "\nOn 11/7/19 9:12 AM, Alvaro Herrera wrote:\n> On 2019-Nov-07, Andrew Dunstan wrote:\n>\n>> On 11/7/19 8:53 AM, Alvaro Herrera wrote:\n>>> That's a curious test to try, given that the module is called\n>>> Win32API::File.\n>> The patch says:\n>>\n>> + require Win32::API;\n>> + Win32::API->import;\n> Oh, you're right, it does. I wonder why, though:\n>\n> $ corelist -a Win32::API\n>\n> Data for 2018-11-29\n> Win32::API was not in CORE (or so I think)\n>\n> $ corelist -a Win32API::File\n>\n> Data for 2018-11-29\n> Win32API::File was first released with perl v5.8.9\n> v5.8.9 0.1001_01 \n> v5.9.4 0.1001 \n> v5.9.5 0.1001_01 \n> v5.10.0 0.1001_01 \n> ...\n\n\nYes, that's present on jacana and fairywren (not on frogmouth, which is\nrunning a very old perl, but it doesn't run TAP tests anyway.)\n\n\n>\n> According to the Win32API::File manual, you can request a file to be\n> shared by passing the string \"r\" to svShare to method createFile().\n> So do we really need all those extremely ugly \"droppings\" Juanjo added\n> to the patch?\n>\n> (BTW the Win32API::File manual also says this:\n> \"The default for $svShare is \"rw\" which provides the same sharing as\n> using regular perl open().\"\n> I wonder why \"the regular perl open()\" is not doing the sharing thing\n> correctly ... has the problem has been misdiagnosed?).\n>\n\n\nMaybe we need \"rwd\"?\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 7 Nov 2019 09:29:57 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "\nOn 11/7/19 9:12 AM, Alvaro Herrera wrote:\n>>\n>> The patch says:\n>>\n>> + require Win32::API;\n>> + Win32::API->import;\n> Oh, you're right, it does. I wonder why, though:\n>\n\nOn further inspection I think those lines are unnecessary. The remainder\nof the patch doesn't use this at all, AFAICT.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 7 Nov 2019 09:41:43 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 3:41 AM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> On 11/7/19 9:12 AM, Alvaro Herrera wrote:\n> >>\n> >> The patch says:\n> >>\n> >> + require Win32::API;\n> >> + Win32::API->import;\n> > Oh, you're right, it does. I wonder why, though:\n> >\n>\n> On further inspection I think those lines are unnecessary. The remainder\n> of the patch doesn't use this at all, AFAICT.\n\nSo does that mean we're back on, we can use a patch like Juan Jose's?\nI'd love to get rid of these intermittent buildfarm failures, like\nthis one just now:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2019-11-20%2010%3A00%3A10\n\nHere you can see:\n\ncould not read \"C:/prog/bf/root/HEAD/pgsql.build/src/bin/pg_ctl/tmp_check/t_004_logrotate_primary_data/pgdata/current_logfiles\":\nPermission denied at\nC:/prog/bf/root/HEAD/pgsql.build/src/test/perl/TestLib.pm line 397.\n\nThat line is in the subroutine slurp_file, and says open(my $in, '<',\n$filename). Using various clues from this thread, it seems like we\ncould, on Windows only, add code to TestLib.pm's INIT to rebind\n*CORE::GLOBAL::open to a wrapper function that would just do\nCreateFile(..., PLEASE_BE_MORE_LIKE_UNIX, ...).\n\n\n",
"msg_date": "Thu, 21 Nov 2019 09:40:02 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "\nOn 11/20/19 3:40 PM, Thomas Munro wrote:\n> On Fri, Nov 8, 2019 at 3:41 AM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n>> On 11/7/19 9:12 AM, Alvaro Herrera wrote:\n>>>> The patch says:\n>>>>\n>>>> + require Win32::API;\n>>>> + Win32::API->import;\n>>> Oh, you're right, it does. I wonder why, though:\n>>>\n>> On further inspection I think those lines are unnecessary. The remainder\n>> of the patch doesn't use this at all, AFAICT.\n> So does that mean we're back on, we can use a patch like Juan Jose's?\n> I'd love to get rid of these intermittent buildfarm failures, like\n> this one just now:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2019-11-20%2010%3A00%3A10\n>\n> Here you can see:\n>\n> could not read \"C:/prog/bf/root/HEAD/pgsql.build/src/bin/pg_ctl/tmp_check/t_004_logrotate_primary_data/pgdata/current_logfiles\":\n> Permission denied at\n> C:/prog/bf/root/HEAD/pgsql.build/src/test/perl/TestLib.pm line 397.\n>\n> That line is in the subroutine slurp_file, and says open(my $in, '<',\n> $filename). Using various clues from this thread, it seems like we\n> could, on Windows only, add code to TestLib.pm's INIT to rebind\n> *CORE::GLOBAL::open to a wrapper function that would just do\n> CreateFile(..., PLEASE_BE_MORE_LIKE_UNIX, ...).\n\n\nPossibly. I will do some testing on drongo in the next week or so.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 21 Nov 2019 09:35:33 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "On Thu, Nov 21, 2019 at 3:35 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n>\n> On 11/20/19 3:40 PM, Thomas Munro wrote:\n> > On Fri, Nov 8, 2019 at 3:41 AM Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com> wrote:\n> >> On 11/7/19 9:12 AM, Alvaro Herrera wrote:\n> >>>> The patch says:\n> >>>>\n> >>>> + require Win32::API;\n> >>>> + Win32::API->import;\n> >>> Oh, you're right, it does. I wonder why, though:\n> >>>\n> >> On further inspection I think those lines are unnecessary. The remainder\n> >> of the patch doesn't use this at all, AFAICT.\n> > So does that mean we're back on, we can use a patch like Juan Jose's?\n> > I'd love to get rid of these intermittent buildfarm failures, like\n> > this one just now:\n> >\n> >\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2019-11-20%2010%3A00%3A10\n> >\n> > Here you can see:\n> >\n> > could not read\n> \"C:/prog/bf/root/HEAD/pgsql.build/src/bin/pg_ctl/tmp_check/t_004_logrotate_primary_data/pgdata/current_logfiles\":\n> > Permission denied at\n> > C:/prog/bf/root/HEAD/pgsql.build/src/test/perl/TestLib.pm line 397.\n> >\n> > That line is in the subroutine slurp_file, and says open(my $in, '<',\n> > $filename). Using various clues from this thread, it seems like we\n> > could, on Windows only, add code to TestLib.pm's INIT to rebind\n> > *CORE::GLOBAL::open to a wrapper function that would just do\n> > CreateFile(..., PLEASE_BE_MORE_LIKE_UNIX, ...).\n>\n>\n> Possibly. I will do some testing on drongo in the next week or so.\n>\n>\nI think Perl's open() is a bad candidate for an overload, so I will update\nthe previous patch that only touches slurp_file().\n\nThis version address the issues with the required libraries and uses\nfunctions that expose less of the Windows API.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Thu, 21 Nov 2019 20:09:38 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "On Thu, Nov 21, 2019 at 08:09:38PM +0100, Juan José Santamaría Flecha wrote:\n> I think Perl's open() is a bad candidate for an overload, so I will update\n> the previous patch that only touches slurp_file().\n\nFWIW, I don't like much the approach of patching only slurp_file().\nWhat gives us the guarantee that we won't have this discussion again\nin a couple of months or years once a new caller of open() is added\nfor some new TAP tests, and that it has the same problems with\nmulti-process concurrency?\n--\nMichael",
"msg_date": "Fri, 22 Nov 2019 17:00:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 9:00 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Nov 21, 2019 at 08:09:38PM +0100, Juan José Santamaría Flecha\n> wrote:\n> > I think Perl's open() is a bad candidate for an overload, so I will\n> update\n> > the previous patch that only touches slurp_file().\n>\n> FWIW, I don't like much the approach of patching only slurp_file().\n> What gives us the guarantee that we won't have this discussion again\n> in a couple of months or years once a new caller of open() is added\n> for some new TAP tests, and that it has the same problems with\n> multi-process concurrency?\n>\n>\nI agree on that, from a technical stand point, overloading open() is\nprobably the best solution for the reasons above mentioned. My doubts come\nfrom the effort such a solution will take and its maintainability, also\ntaking into account that there are not that many calls to open() in\n\"src/test/perl\".\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Fri, 22 Nov 2019 09:55:46 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "\nOn 11/22/19 3:55 AM, Juan José Santamaría Flecha wrote:\n>\n> On Fri, Nov 22, 2019 at 9:00 AM Michael Paquier <michael@paquier.xyz\n> <mailto:michael@paquier.xyz>> wrote:\n>\n> On Thu, Nov 21, 2019 at 08:09:38PM +0100, Juan José Santamaría\n> Flecha wrote:\n> > I think Perl's open() is a bad candidate for an overload, so I\n> will update\n> > the previous patch that only touches slurp_file().\n>\n> FWIW, I don't like much the approach of patching only slurp_file().\n> What gives us the guarantee that we won't have this discussion again\n> in a couple of months or years once a new caller of open() is added\n> for some new TAP tests, and that it has the same problems with\n> multi-process concurrency?\n>\n>\n> I agree on that, from a technical stand point, overloading open() is\n> probably the best solution for the reasons above mentioned. My doubts\n> come from the effort such a solution will take and its\n> maintainability, also taking into account that there are not that many\n> calls to open() in \"src/test/perl\".\n>\n>\n\n\nI think the best course is for us to give your latest patch an outing on\nthe buildfarm and verify that the issues seen with slurp_file disappear.\nThat shouldn't take us more than a week or two to see - drongo has had 6\nsuch failures in the last 11 days on master. After that we can discuss\nhow much further we might want to take it.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 22 Nov 2019 08:22:04 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> I think the best course is for us to give your latest patch an outing on\n> the buildfarm and verify that the issues seen with slurp_file disappear.\n> That shouldn't take us more than a week or two to see - drongo has had 6\n> such failures in the last 11 days on master. After that we can discuss\n> how much further we might want to take it.\n\nSounds sensible to me. We don't yet have verification that this is\neven where the problem is ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Nov 2019 15:46:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "\nOn 11/22/19 3:46 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> I think the best course is for us to give your latest patch an outing on\n>> the buildfarm and verify that the issues seen with slurp_file disappear.\n>> That shouldn't take us more than a week or two to see - drongo has had 6\n>> such failures in the last 11 days on master. After that we can discuss\n>> how much further we might want to take it.\n> Sounds sensible to me. We don't yet have verification that this is\n> even where the problem is ...\n> \t\t\t\n\n\nDone.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 24 Nov 2019 18:33:25 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 11/22/19 3:46 PM, Tom Lane wrote:\n>> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>> I think the best course is for us to give your latest patch an outing on\n>>> the buildfarm and verify that the issues seen with slurp_file disappear.\n>>> That shouldn't take us more than a week or two to see - drongo has had 6\n>>> such failures in the last 11 days on master. After that we can discuss\n>>> how much further we might want to take it.\n\n>> Sounds sensible to me. We don't yet have verification that this is\n>> even where the problem is ...\n\n> Done.\n\n?? I don't see any commit ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Nov 2019 18:46:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "\nOn 11/24/19 6:46 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 11/22/19 3:46 PM, Tom Lane wrote:\n>>> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>>> I think the best course is for us to give your latest patch an outing on\n>>>> the buildfarm and verify that the issues seen with slurp_file disappear.\n>>>> That shouldn't take us more than a week or two to see - drongo has had 6\n>>>> such failures in the last 11 days on master. After that we can discuss\n>>>> how much further we might want to take it.\n>>> Sounds sensible to me. We don't yet have verification that this is\n>>> even where the problem is ...\n>> Done.\n> ?? I don't see any commit ...\n>\n> \t\t\t\n\n\n\nYeash, forgot to push, sorry.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 24 Nov 2019 19:02:32 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "Hello hackers,\n\nAre there any plans to backport the patch to earlier versions\nof the Postgres?\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=114541d58e5970e51b78b77b65de16210beaab43\n\nWe rarely see the issue with the pg_ctl/004_logrotate test on\nthe REL_12_STABLE branch. On my notebook I can easily reproduce\nthe \"Permission denied at src/test/perl/TestLib.pm line 259\"\nerror with the small change below. But the same test on the\n13th version and the 12th version with the TestLib patch does\nnot fail.\n\ndiff --git a/src/bin/pg_ctl/t/004_logrotate.pl b/src/bin/pg_ctl/t/004_logrotate.pl\nindex bc39abd23e4..e49e159bc84 100644\n--- a/src/bin/pg_ctl/t/004_logrotate.pl\n+++ b/src/bin/pg_ctl/t/004_logrotate.pl\n@@ -72,7 +72,7 @@ for (my $attempts = 0; $attempts < $max_attempts; $attempts++)\n {\n \t$new_current_logfiles = slurp_file($node->data_dir . '/current_logfiles');\n \tlast if $new_current_logfiles ne $current_logfiles;\n-\tusleep(100_000);\n+\tusleep(1);\n }\n\nnote \"now current_logfiles = $new_current_logfiles\";\n\n\nOn 2019-11-22 20:22, Andrew Dunstan wrote:\n> On 11/22/19 3:55 AM, Juan José Santamaría Flecha wrote:\n>> \n>> On Fri, Nov 22, 2019 at 9:00 AM Michael Paquier <michael@paquier.xyz\n>> <mailto:michael@paquier.xyz>> wrote:\n>> \n>> On Thu, Nov 21, 2019 at 08:09:38PM +0100, Juan José Santamaría\n>> Flecha wrote:\n>> > I think Perl's open() is a bad candidate for an overload, so I\n>> will update\n>> > the previous patch that only touches slurp_file().\n>> \n>> FWIW, I don't like much the approach of patching only \n>> slurp_file().\n>> What gives us the guarantee that we won't have this discussion \n>> again\n>> in a couple of months or years once a new caller of open() is \n>> added\n>> for some new TAP tests, and that it has the same problems with\n>> multi-process concurrency?\n>> \n>> \n>> I agree on that, from a technical stand point, 
overloading open() is\n>> probably the best solution for the reasons above mentioned. My doubts\n>> come from the effort such a solution will take and its\n>> maintainability, also taking into account that there are not that many\n>> calls to open() in \"src/test/perl\".\n>> \n>> \n> \n> \n> I think the best course is for us to give your latest patch an outing \n> on\n> the buildfarm and verify that the issues seen with slurp_file \n> disappear.\n> That shouldn't take us more than a week or two to see - drongo has had \n> 6\n> such failures in the last 11 days on master. After that we can discuss\n> how much further we might want to take it.\n> \n> \n> cheers\n> \n> \n> andrew\n\n--\nregards,\n\nRoman Zharkov\n\n\n",
"msg_date": "Tue, 15 Dec 2020 12:05:58 +0700",
"msg_from": "r.zharkov@postgrespro.ru",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
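The 004_logrotate change in the messages above boils down to a generic pattern: poll a file until its contents change, sleeping between attempts, and the shorter the sleep the more often a reader races a concurrent writer. A minimal sketch of that polling pattern (Python here purely for illustration; the real test is Perl, and `read_fn` is an invented stand-in for `slurp_file`):

```python
import time

def wait_for_change(read_fn, old, max_attempts=180, delay=0.1):
    """Poll read_fn() until it returns something different from `old`.

    Returns the new value, or None if it never changes within
    max_attempts reads (mirrors the retry loop in 004_logrotate.pl).
    """
    for _ in range(max_attempts):
        new = read_fn()
        if new != old:
            return new
        time.sleep(delay)
    return None

# Toy stand-in for slurp_file('current_logfiles'): the content
# "rotates" on the third read.
reads = iter(["a.log", "a.log", "b.log"])
print(wait_for_change(lambda: next(reads), "a.log", delay=0))  # -> b.log
```

Shrinking the delay (the `usleep(1)` in the quoted diff) simply makes the race window easier to hit; the actual fix discussed in this thread is in how `slurp_file` opens the file on Windows.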
{
"msg_contents": "\nOn 12/15/20 12:05 AM, r.zharkov@postgrespro.ru wrote:\n> Hello hackers,\n>\n> Are there any plans to backport the patch to earlier versions\n> of the Postgres?\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=114541d58e5970e51b78b77b65de16210beaab43\n>\n>\n> We rarely see the issue with the pg_ctl/004_logrotate test on\n> the REL_12_STABLE branch. On my notebook I can easily reproduce\n> the \"Permission denied at src/test/perl/TestLib.pm line 259\"\n> error with the small change below. But the same test on the\n> 13th version and the 12th version with the TestLib patch does\n> not fail.\n>\n> diff --git a/src/bin/pg_ctl/t/004_logrotate.pl b/src/bin/pg_ctl/t/004_logrotate.pl\n> index bc39abd23e4..e49e159bc84 100644\n> --- a/src/bin/pg_ctl/t/004_logrotate.pl\n> +++ b/src/bin/pg_ctl/t/004_logrotate.pl\n> @@ -72,7 +72,7 @@ for (my $attempts = 0; $attempts < $max_attempts; $attempts++)\n> {\n> \t$new_current_logfiles = slurp_file($node->data_dir . '/current_logfiles');\n> \tlast if $new_current_logfiles ne $current_logfiles;\n> -\tusleep(100_000);\n> +\tusleep(1);\n> }\n>\n> note \"now current_logfiles = $new_current_logfiles\";\n>\n>\n\n\nOops, looks like that slipped off my radar somehow, I'll see about\nbackpatching it right away.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 15 Dec 2020 08:47:15 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
},
{
"msg_contents": "Thank you very much!\n\nOn 2020-12-15 20:47, Andrew Dunstan wrote:\n> On 12/15/20 12:05 AM, r.zharkov@postgrespro.ru wrote:\n>> Are there any plans to backport the patch to earlier versions\n>> of the Postgres?\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=114541d58e5970e51b78b77b65de16210beaab43\n> \n> \n> Oops, looks like that slipped off my radar somehow, I'll see about\n> backpatching it right away.\n> \n> \n> cheers\n> \n> \n> andrew\n> \n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n\n-- \nregards,\n\nRoman Zharkov\n\n\n",
"msg_date": "Wed, 16 Dec 2020 11:22:58 +0700",
"msg_from": "r.zharkov@postgrespro.ru",
"msg_from_op": false,
"msg_subject": "Re: TAP tests aren't using the magic words for Windows file access"
}
] |
[
{
"msg_contents": "For parallel vacuum [1], we were discussing what is the best way to\ndivide the cost among parallel workers but we didn't get many inputs\napart from people who are very actively involved in patch development.\nI feel that we need some more inputs before we finalize anything, so\nstarting a new thread.\n\nThe initial version of the patch has a very rudimentary way of doing\nit which means each parallel vacuum worker operates independently\nw.r.t vacuum delay and cost. This will lead to more I/O in the system\nthan the user has intended to do. Assume that the overall I/O allowed\nfor vacuum operation is X after which it will sleep for some time,\nreset the balance and continue. In the patch, each worker will be\nallowed to perform X before which it can sleep and also there is no\ncoordination for the same with master backend which would have done\nsome I/O for the heap. So, in the worst-case scenario, there can be n\ntimes more I/O where n is the number of workers doing the parallel\noperation. This is somewhat similar to a memory usage problem with a\nparallel query where each worker is allowed to use up to work_mem of\nmemory. We can say that the users using parallel operation can expect\nmore system resources to be used as they want to get the operation\ndone faster, so we are fine with this. However, I am not sure if that\nis the right thing, so we should try to come up with some solution for\nit and if the solution is too complex, then probably we can think of\ndocumenting such behavior.\n\nThe two approaches to solve this problem being discussed in that\nthread [1] are as follows:\n(a) Allow the parallel workers and master backend to have a shared\nview of vacuum cost related parameters (mainly VacuumCostBalance) and\nallow each worker to update it and then based on that decide whether\nit needs to sleep. Sawada-San has done the POC for this approach.\nSee v32-0004-PoC-shared-vacuum-cost-balance in email [2]. 
One\ndrawback of this approach could be that we allow the worker to sleep\neven though the I/O has been performed by some other worker.\n\n(b) The other idea could be that we split the I/O among workers\nsomething similar to what we do for auto vacuum workers (see\nautovac_balance_cost). The basic idea would be that before launching\nworkers, we need to compute the remaining I/O (heap operation would\nhave used something) after which we need to sleep and split it equally\nacross workers. Here, we are primarily thinking of dividing\nVacuumCostBalance and VacuumCostLimit parameters. Once the workers\nare finished, they need to let master backend know how much I/O they\nhave consumed and then master backend can add it to its current I/O\nconsumed. I think we also need to rebalance the cost of remaining\nworkers once some of the workers exit. Dilip has prepared a POC\npatch for this, see 0002-POC-divide-vacuum-cost-limit in email [3].\n\nI think approach-2 is better in throttling the system as it doesn't\nhave the drawback of the first approach, but it might be a bit tricky\nto implement.\n\nAs of now, the POC for both the approaches has been developed and we\nsee similar results for both approaches, but we have tested simpler\ncases where each worker has similar amount of I/O to perform.\n\nThoughts?\n\n\n[1] - https://commitfest.postgresql.org/25/1774/\n[2] - https://www.postgresql.org/message-id/CAD21AoAqT17QwKJ_sWOqRxNvg66wMw1oZZzf9Rt-E-zD%2BXOh_Q%40mail.gmail.com\n[3] - https://www.postgresql.org/message-id/CAFiTN-thU-z8f04jO7xGMu5yUUpTpsBTvBrFW6EhRf-jGvEz%3Dg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Nov 2019 12:24:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "cost based vacuum (parallel)"
},
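Approach (a) above can be sketched in a few lines (a toy model in Python, not the actual C POC; the class and names here are invented for illustration): all workers charge their page costs against one shared balance, and whichever worker tips it over the limit sleeps and resets it.

```python
import threading

class SharedCostBalance:
    """Toy model of approach (a): one cost balance shared by all workers.

    Whichever worker pushes the shared balance over the limit "sleeps"
    (here we only count the sleeps) and resets the balance, so the total
    I/O across all workers stays near the configured limit.
    """
    def __init__(self, cost_limit):
        self.cost_limit = cost_limit
        self.balance = 0
        self.sleeps = 0
        self._lock = threading.Lock()

    def charge(self, page_cost):
        with self._lock:
            self.balance += page_cost
            if self.balance >= self.cost_limit:
                self.sleeps += 1      # real code: pg_usleep(vacuum_cost_delay)
                self.balance = 0

def worker(shared, pages, page_cost):
    for _ in range(pages):
        shared.charge(page_cost)

shared = SharedCostBalance(cost_limit=200)
threads = [threading.Thread(target=worker, args=(shared, 100, 10)) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
# 3 workers * 100 pages * cost 10 = 3000 total cost; limit 200 => 15 sleeps
print(shared.sleeps)
```

The drawback noted above falls straight out of the model: the worker that happens to make the tipping charge is the one that sleeps, even if most of the balance was accumulated by other workers.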
{
"msg_contents": ">\n>\n> This is somewhat similar to a memory usage problem with a\n> parallel query where each worker is allowed to use up to work_mem of\n> memory. We can say that the users using parallel operation can expect\n> more system resources to be used as they want to get the operation\n> done faster, so we are fine with this. However, I am not sure if that\n> is the right thing, so we should try to come up with some solution for\n> it and if the solution is too complex, then probably we can think of\n> documenting such behavior.\n>\n\nIn cloud environments (Amazon + gp2) there's a budget on input/output\noperations. If you cross it for a long time, everything starts looking like\nyou work with a floppy disk.\n\nFor the ease of configuration, I would need a \"max_vacuum_disk_iops\" that\nwould limit number of input-output operations by all of the vacuums in the\nsystem. If I set it to less than the value of budget refill, I can be sure\nthat no vacuum runs too fast to impact any sibling query.\n\nThere's also value in non-throttled VACUUM for smaller tables. On gp2 such\nthings will be consumed out of surge budget, and its size is known to\nsysadmin. Let's call it \"max_vacuum_disk_surge_iops\" - if a relation has\nless blocks than this value and it's a blocking in any way situation\n(antiwraparound, interactive console, ...) - go on and run without\nthrottling.\n\nFor how to balance the cost: if we know a number of vacuum processes that\nwere running in the previous second, we can just divide a slot for this\niteration by that previous number.\n\nTo correct for overshots, we can subtract the previous second's overshot\nfrom next one's. 
That would also allow to account for surge budget usage\nand let it refill, pausing all autovacuum after a manual one for some time.\n\nPrecision of accounting limiting count of operations more than once a\nsecond isn't beneficial for this use case.\n\nPlease don't forget that processing one page can become several iops (read,\nwrite, wal).\n\nDoes this make sense? :)",
"msg_date": "Mon, 4 Nov 2019 10:33:18 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
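The budget-refill behaviour described in the message above is essentially a token bucket: a steady refill rate capped at a surge capacity. A hedged sketch of such a limiter (illustrative Python; "max_vacuum_disk_iops" and the surge capacity are the proposal's hypothetical knobs, not existing GUCs):

```python
class IopsBudget:
    """Toy token bucket: refill_rate tokens/sec, capped at `capacity`."""
    def __init__(self, refill_rate, capacity):
        self.refill_rate = refill_rate   # analogous to the gp2 baseline refill
        self.capacity = capacity         # the surge budget
        self.tokens = capacity           # start with a full surge budget

    def tick(self, seconds=1):
        # Refill once per accounting interval; excess tokens are discarded.
        self.tokens = min(self.capacity, self.tokens + self.refill_rate * seconds)

    def consume(self, iops):
        """Return how many I/Os are actually allowed this interval."""
        allowed = min(iops, int(self.tokens))
        self.tokens -= allowed
        return allowed

budget = IopsBudget(refill_rate=100, capacity=300)
print(budget.consume(250))  # 250: a surge, paid out of the stored capacity
budget.tick()
print(budget.consume(250))  # 150: only 50 remaining + 100 refilled
```

Note that, as the message says, one page processed by vacuum can cost several tokens here (read, write, WAL), so the charge per page would be more than 1.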
{
"msg_contents": "On Mon, Nov 4, 2019 at 3:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I think approach-2 is better in throttling the system as it doesn't\n> have the drawback of the first approach, but it might be a bit tricky\n> to implement.\n\nI might be missing something but I think that there could be the\ndrawback of the approach-1 even on approach-2 depending on index pages\nloaded on the shared buffer and the vacuum delay setting. Is it right?\n\nRegards,\n\n---\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 4 Nov 2019 17:21:15 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Mon, Nov 4, 2019 at 1:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Nov 4, 2019 at 3:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I think approach-2 is better in throttling the system as it doesn't\n> > have the drawback of the first approach, but it might be a bit tricky\n> > to implement.\n>\n> I might be missing something but I think that there could be the\n> drawback of the approach-1 even on approach-2 depending on index pages\n> loaded on the shared buffer and the vacuum delay setting.\n>\n\nCan you be a bit more specific about this?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Nov 2019 15:56:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Mon, Nov 4, 2019 at 1:03 PM Darafei \"Komяpa\" Praliaskouski\n<me@komzpa.net> wrote:\n>>\n>>\n>> This is somewhat similar to a memory usage problem with a\n>> parallel query where each worker is allowed to use up to work_mem of\n>> memory. We can say that the users using parallel operation can expect\n>> more system resources to be used as they want to get the operation\n>> done faster, so we are fine with this. However, I am not sure if that\n>> is the right thing, so we should try to come up with some solution for\n>> it and if the solution is too complex, then probably we can think of\n>> documenting such behavior.\n>\n>\n> In cloud environments (Amazon + gp2) there's a budget on input/output operations. If you cross it for long time, everything starts looking like you work with a floppy disk.\n>\n> For the ease of configuration, I would need a \"max_vacuum_disk_iops\" that would limit number of input-output operations by all of the vacuums in the system. If I set it to less than value of budget refill, I can be sure than that no vacuum runs too fast to impact any sibling query.\n>\n> There's also value in non-throttled VACUUM for smaller tables. On gp2 such things will be consumed out of surge budget, and its size is known to sysadmin. Let's call it \"max_vacuum_disk_surge_iops\" - if a relation has less blocks than this value and it's a blocking in any way situation (antiwraparound, interactive console, ...) - go on and run without throttling.\n>\n\nI think the need for these things can be addressed by current\ncost-based-vacuum parameters. See docs [1]. 
For example, if you set\nvacuum_cost_delay as zero, it will allow the operation to be performed\nwithout throttling.\n\n> For how to balance the cost: if we know a number of vacuum processes that were running in the previous second, we can just divide a slot for this iteration by that previous number.\n>\n> To correct for overshots, we can subtract the previous second's overshot from next one's. That would also allow to account for surge budget usage and let it refill, pausing all autovacuum after a manual one for some time.\n>\n> Precision of accounting limiting count of operations more than once a second isn't beneficial for this use case.\n>\n\nI think it is better if we find a way to rebalance the cost on some\nworker exit rather than every second as anyway it won't change unless\nany worker exits.\n\n[1] - https://www.postgresql.org/docs/devel/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-VACUUM-COST\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Nov 2019 16:05:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cost based vacuum (parallel)"
},
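The split-and-rebalance idea from approach (b) can be illustrated with a small helper (illustrative Python in the spirit of autovac_balance_cost(), not the actual code): the overall cost limit is divided evenly across the active workers and re-divided whenever one of them exits.

```python
def rebalance(cost_limit, active_workers):
    """Split the overall cost limit evenly across active workers,
    mirroring the spirit of autovac_balance_cost()."""
    if active_workers == 0:
        return []
    share, remainder = divmod(cost_limit, active_workers)
    # Hand the remainder to the first workers so the shares sum exactly.
    return [share + (1 if i < remainder else 0) for i in range(active_workers)]

limits = rebalance(200, 3)
print(limits, sum(limits))   # [67, 67, 66] 200
# One worker finishes its index: re-split among the remaining two.
limits = rebalance(200, 2)
print(limits, sum(limits))   # [100, 100] 200
```

This keeps the aggregate limit intact, but as pointed out later in the thread, a worker with more I/O than its even share can end up sleeping while the overall budget is still unspent.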
{
"msg_contents": "On Mon, 4 Nov 2019 at 19:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 4, 2019 at 1:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Nov 4, 2019 at 3:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I think approach-2 is better in throttling the system as it doesn't\n> > > have the drawback of the first approach, but it might be a bit tricky\n> > > to implement.\n> >\n> > I might be missing something but I think that there could be the\n> > drawback of the approach-1 even on approach-2 depending on index pages\n> > loaded on the shared buffer and the vacuum delay setting.\n> >\n>\n> Can you be a bit more specific about this?\n\nSuppose there are two indexes: one index is loaded at all while\nanother index isn't. One vacuum worker who processes the former index\nhits all pages on the shared buffer but another worker who processes\nthe latter index read all pages from either OS page cache or disk.\nEven if both the cost limit and the cost balance are split evenly\namong workers because the cost of page hits and page misses are\ndifferent it's possible that one vacuum worker sleeps while other\nworkers doing I/O.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 4 Nov 2019 23:56:53 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Mon, Nov 4, 2019 at 1:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> For parallel vacuum [1], we were discussing what is the best way to\n> divide the cost among parallel workers but we didn't get many inputs\n> apart from people who are very actively involved in patch development.\n> I feel that we need some more inputs before we finalize anything, so\n> starting a new thread.\n>\n\nMaybe a I just don't have experience in the type of system that parallel\nvacuum is needed for, but if there is any meaningful IO throttling which is\nactive, then what is the point of doing the vacuum in parallel in the first\nplace?\n\nCheers,\n\nJeff",
"msg_date": "Mon, 4 Nov 2019 12:59:02 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-04 12:24:35 +0530, Amit Kapila wrote:\n> For parallel vacuum [1], we were discussing what is the best way to\n> divide the cost among parallel workers but we didn't get many inputs\n> apart from people who are very actively involved in patch development.\n> I feel that we need some more inputs before we finalize anything, so\n> starting a new thread.\n> \n> The initial version of the patch has a very rudimentary way of doing\n> it which means each parallel vacuum worker operates independently\n> w.r.t vacuum delay and cost.\n\nYea, that seems not ok for cases where vacuum delay is active.\n\nThere's also the question of when/why it is beneficial to use\nparallelism when you're going to encounter IO limits in all likelihood.\n\n\n> This will lead to more I/O in the system\n> than the user has intended to do. Assume that the overall I/O allowed\n> for vacuum operation is X after which it will sleep for some time,\n> reset the balance and continue. In the patch, each worker will be\n> allowed to perform X before which it can sleep and also there is no\n> coordination for the same with master backend which would have done\n> some I/O for the heap. So, in the worst-case scenario, there can be n\n> times more I/O where n is the number of workers doing the parallel\n> operation. This is somewhat similar to a memory usage problem with a\n> parallel query where each worker is allowed to use up to work_mem of\n> memory. We can say that the users using parallel operation can expect\n> more system resources to be used as they want to get the operation\n> done faster, so we are fine with this. However, I am not sure if that\n> is the right thing, so we should try to come up with some solution for\n> it and if the solution is too complex, then probably we can think of\n> documenting such behavior.\n\nI mean for parallel query the problem wasn't really introduced in\nparallel query, it existed before - and does still - for non-parallel\nqueries. 
And there's a complex underlying planning issue. I don't think\nthis is a good excuse for VACUUM, where none of the complex \"number of\npaths considered\" issues etc apply.\n\n\n> The two approaches to solve this problem being discussed in that\n> thread [1] are as follows:\n> (a) Allow the parallel workers and master backend to have a shared\n> view of vacuum cost related parameters (mainly VacuumCostBalance) and\n> allow each worker to update it and then based on that decide whether\n> it needs to sleep. Sawada-San has done the POC for this approach.\n> See v32-0004-PoC-shared-vacuum-cost-balance in email [2]. One\n> drawback of this approach could be that we allow the worker to sleep\n> even though the I/O has been performed by some other worker.\n\nI don't understand this drawback.\n\n\n> (b) The other idea could be that we split the I/O among workers\n> something similar to what we do for auto vacuum workers (see\n> autovac_balance_cost). The basic idea would be that before launching\n> workers, we need to compute the remaining I/O (heap operation would\n> have used something) after which we need to sleep and split it equally\n> across workers. Here, we are primarily thinking of dividing\n> VacuumCostBalance and VacuumCostLimit parameters. Once the workers\n> are finished, they need to let master backend know how much I/O they\n> have consumed and then master backend can add it to it's current I/O\n> consumed. I think we also need to rebalance the cost of remaining\n> workers once some of the worker's exit. Dilip has prepared a POC\n> patch for this, see 0002-POC-divide-vacuum-cost-limit in email [3].\n\n(b) doesn't strike me as advantageous. It seems quite possible that you\nend up with one worker that has a lot more IO than others, leading to\nunnecessary sleeps, even though the actually available IO budget has not\nbeen used up. 
Quite easy to see how that'd lead to parallel VACUUM\nhaving a lower throughput than a single threaded one.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 4 Nov 2019 10:11:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-04 12:59:02 -0500, Jeff Janes wrote:\n> On Mon, Nov 4, 2019 at 1:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > For parallel vacuum [1], we were discussing what is the best way to\n> > divide the cost among parallel workers but we didn't get many inputs\n> > apart from people who are very actively involved in patch development.\n> > I feel that we need some more inputs before we finalize anything, so\n> > starting a new thread.\n> >\n>\n> Maybe a I just don't have experience in the type of system that parallel\n> vacuum is needed for, but if there is any meaningful IO throttling which is\n> active, then what is the point of doing the vacuum in parallel in the first\n> place?\n\nI am wondering the same - but to be fair, it's pretty easy to run into\ncases where VACUUM is CPU bound. E.g. because most pages are in\nshared_buffers, and compared to the size of the indexes number of tids\nthat need to be pruned is fairly small (also [1]). That means a lot of\npages need to be scanned, without a whole lot of IO going on. The\nproblem with that is just that the defaults for vacuum throttling will\nalso apply here, I've never seen anybody tune vacuum_cost_page_hit = 0,\nvacuum_cost_page_dirty=0 or such (in contrast, the latter is the highest\ncost currently). Nor do we reduce the cost of vacuum_cost_page_dirty\nfor unlogged tables.\n\nSo while it doesn't seem unreasonable to want to use cost limiting to\nprotect against vacuum unexpectedly causing too much, especially read,\nIO, I'm doubtful it has current practical relevance.\n\nI'm wondering how much of the benefit of parallel vacuum really is just\nto work around vacuum ringbuffers often massively hurting performance\n(see e.g. [2]). 
Surely not all, but I'd be very unsurprised if it were a\nlarge fraction.\n\nGreetings,\n\nAndres Freund\n\n[1] I don't think the patch addresses this, IIUC it's only running index\n vacuums in parallel, but it's very easy to run into being CPU\n bottlenecked when vacuuming a busily updated table. heap_hot_prune\n can be really expensive, especially with longer update chains (I\n think it may have an O(n^2) worst case even).\n[2] https://www.postgresql.org/message-id/20160406105716.fhk2eparljthpzp6%40alap3.anarazel.de\n\n\n",
"msg_date": "Mon, 4 Nov 2019 10:28:29 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "Greetings,\n\n* Jeff Janes (jeff.janes@gmail.com) wrote:\n> On Mon, Nov 4, 2019 at 1:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > For parallel vacuum [1], we were discussing what is the best way to\n> > divide the cost among parallel workers but we didn't get many inputs\n> > apart from people who are very actively involved in patch development.\n> > I feel that we need some more inputs before we finalize anything, so\n> > starting a new thread.\n> \n> Maybe a I just don't have experience in the type of system that parallel\n> vacuum is needed for, but if there is any meaningful IO throttling which is\n> active, then what is the point of doing the vacuum in parallel in the first\n> place?\n\nWith parallelization across indexes, you could have a situation where\nthe individual indexes are on different tablespaces with independent\ni/o, therefore the parallelization ends up giving you an increase in i/o\nthroughput, not just additional CPU time.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 4 Nov 2019 14:06:19 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-04 14:06:19 -0500, Stephen Frost wrote:\n> * Jeff Janes (jeff.janes@gmail.com) wrote:\n> > On Mon, Nov 4, 2019 at 1:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > For parallel vacuum [1], we were discussing what is the best way to\n> > > divide the cost among parallel workers but we didn't get many inputs\n> > > apart from people who are very actively involved in patch development.\n> > > I feel that we need some more inputs before we finalize anything, so\n> > > starting a new thread.\n> > \n> > Maybe a I just don't have experience in the type of system that parallel\n> > vacuum is needed for, but if there is any meaningful IO throttling which is\n> > active, then what is the point of doing the vacuum in parallel in the first\n> > place?\n> \n> With parallelization across indexes, you could have a situation where\n> the individual indexes are on different tablespaces with independent\n> i/o, therefore the parallelization ends up giving you an increase in i/o\n> throughput, not just additional CPU time.\n\nHow's that related to IO throttling being active or not?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 4 Nov 2019 11:08:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-11-04 14:06:19 -0500, Stephen Frost wrote:\n> > * Jeff Janes (jeff.janes@gmail.com) wrote:\n> > > On Mon, Nov 4, 2019 at 1:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > For parallel vacuum [1], we were discussing what is the best way to\n> > > > divide the cost among parallel workers but we didn't get many inputs\n> > > > apart from people who are very actively involved in patch development.\n> > > > I feel that we need some more inputs before we finalize anything, so\n> > > > starting a new thread.\n> > > \n> > > Maybe a I just don't have experience in the type of system that parallel\n> > > vacuum is needed for, but if there is any meaningful IO throttling which is\n> > > active, then what is the point of doing the vacuum in parallel in the first\n> > > place?\n> > \n> > With parallelization across indexes, you could have a situation where\n> > the individual indexes are on different tablespaces with independent\n> > i/o, therefore the parallelization ends up giving you an increase in i/o\n> > throughput, not just additional CPU time.\n> \n> How's that related to IO throttling being active or not?\n\nYou might find that you have to throttle the IO down when operating\nexclusively against one IO channel, but if you have multiple IO channels\nthen the acceptable IO utilization could be higher as it would be \nspread across the different IO channels.\n\nIn other words, the overall i/o allowance for a given operation might be\nable to be higher if it's spread across multiple i/o channels, as it\nwouldn't completely consume the i/o resources of any of them, whereas\nwith a higher allowance and a single i/o channel, there would likely be\nan impact to other operations.\n\nAs for if this is really relevant only when it comes to parallel\noperations is a bit of an interesting question- these considerations\nmight not require actual parallel operations as a single process 
might\nbe able to go through multiple indexes concurrently and still hit the\ni/o limit that was set for it overall across the tablespaces. I don't\nknow that it would actually be interesting or useful to spend the effort\nto make that work though, so, from a practical perspective, it's\nprobably only interesting to think about this when talking about\nparallel vacuum.\n\nI've been wondering if the accounting system should consider the cost\nper tablespace when there's multiple tablespaces involved, instead of\nthrottling the overall process without consideration for the\nper-tablespace utilization.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 4 Nov 2019 14:33:41 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-04 14:33:41 -0500, Stephen Frost wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n> > On 2019-11-04 14:06:19 -0500, Stephen Frost wrote:\n> > > With parallelization across indexes, you could have a situation where\n> > > the individual indexes are on different tablespaces with independent\n> > > i/o, therefore the parallelization ends up giving you an increase in i/o\n> > > throughput, not just additional CPU time.\n> > \n> > How's that related to IO throttling being active or not?\n> \n> You might find that you have to throttle the IO down when operating\n> exclusively against one IO channel, but if you have multiple IO channels\n> then the acceptable IO utilization could be higher as it would be \n> spread across the different IO channels.\n> \n> In other words, the overall i/o allowance for a given operation might be\n> able to be higher if it's spread across multiple i/o channels, as it\n> wouldn't completely consume the i/o resources of any of them, whereas\n> with a higher allowance and a single i/o channel, there would likely be\n> an impact to other operations.\n> \n> As for if this is really relevant only when it comes to parallel\n> operations is a bit of an interesting question- these considerations\n> might not require actual parallel operations as a single process might\n> be able to go through multiple indexes concurrently and still hit the\n> i/o limit that was set for it overall across the tablespaces. I don't\n> know that it would actually be interesting or useful to spend the effort\n> to make that work though, so, from a practical perspective, it's\n> probably only interesting to think about this when talking about\n> parallel vacuum.\n\nBut you could just apply different budgets for different tablespaces?\nThat's quite doable independent of parallelism, as we don't have tables\nor indexes spanning more than one tablespace. 
True, you could then make\nthe processing of an individual vacuum faster by allowing to utilize\nmultiple tablespace budgets at the same time.\n\n\n> I've been wondering if the accounting system should consider the cost\n> per tablespace when there's multiple tablespaces involved, instead of\n> throttling the overall process without consideration for the\n> per-tablespace utilization.\n\nThis all seems like a feature proposal, or two, independent of the\npatch/question at hand. I think there's a good argument to be had that\nwe should severely overhaul the current vacuum cost limiting - it's way\nway too hard to understand the bandwidth that it's allowed to\nconsume. But unless one of the proposals makes that measurably harder or\neasier, I think we don't gain anything by entangling an already complex\npatchset with something new.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 4 Nov 2019 11:42:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-11-04 14:33:41 -0500, Stephen Frost wrote:\n> > * Andres Freund (andres@anarazel.de) wrote:\n> > > On 2019-11-04 14:06:19 -0500, Stephen Frost wrote:\n> > > > With parallelization across indexes, you could have a situation where\n> > > > the individual indexes are on different tablespaces with independent\n> > > > i/o, therefore the parallelization ends up giving you an increase in i/o\n> > > > throughput, not just additional CPU time.\n> > > \n> > > How's that related to IO throttling being active or not?\n> > \n> > You might find that you have to throttle the IO down when operating\n> > exclusively against one IO channel, but if you have multiple IO channels\n> > then the acceptable IO utilization could be higher as it would be \n> > spread across the different IO channels.\n> > \n> > In other words, the overall i/o allowance for a given operation might be\n> > able to be higher if it's spread across multiple i/o channels, as it\n> > wouldn't completely consume the i/o resources of any of them, whereas\n> > with a higher allowance and a single i/o channel, there would likely be\n> > an impact to other operations.\n> > \n> > As for if this is really relevant only when it comes to parallel\n> > operations is a bit of an interesting question- these considerations\n> > might not require actual parallel operations as a single process might\n> > be able to go through multiple indexes concurrently and still hit the\n> > i/o limit that was set for it overall across the tablespaces. 
I don't\n> > know that it would actually be interesting or useful to spend the effort\n> > to make that work though, so, from a practical perspective, it's\n> > probably only interesting to think about this when talking about\n> > parallel vacuum.\n> \n> But you could just apply different budgets for different tablespaces?\n\nYes, that would be one approach to addressing this, though it would\nchange the existing meaning of those cost parameters. I'm not sure if\nwe think that's an issue or not- if we only have this in the case of a\nparallel vacuum then it's probably fine, I'm less sure if it'd be\nalright to change that on an upgrade.\n\n> That's quite doable independent of parallelism, as we don't have tables\n> or indexes spanning more than one tablespace. True, you could then make\n> the processing of an individual vacuum faster by allowing to utilize\n> multiple tablespace budgets at the same time.\n\nYes, it's possible to do independent of parallelism, but what I was\ntrying to get at above is that it might not be worth the effort. When\nit comes to parallel vacuum though, I'm not sure that you can just punt\non this question since you'll naturally end up spanning multiple\ntablespaces concurrently, at least if the heap+indexes are spread across\nmultiple tablespaces and you're operating against more than one of those\nrelations at a time (which, I admit, I'm not 100% sure is actually\nhappening with this proposed patch set- if it isn't, then this isn't\nreally an issue, though that would be pretty unfortunate as then you\ncan't leverage multiple i/o channels concurrently and therefore Jeff's\nquestion about why you'd be doing parallel vacuum with IO throttling is\na pretty good one).\n\nThanks,\n\nStephen",
"msg_date": "Mon, 4 Nov 2019 15:12:05 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Mon, Nov 4, 2019 at 11:42 PM Andres Freund <andres@anarazel.de> wrote:\n>\n>\n> > The two approaches to solve this problem being discussed in that\n> > thread [1] are as follows:\n> > (a) Allow the parallel workers and master backend to have a shared\n> > view of vacuum cost related parameters (mainly VacuumCostBalance) and\n> > allow each worker to update it and then based on that decide whether\n> > it needs to sleep. Sawada-San has done the POC for this approach.\n> > See v32-0004-PoC-shared-vacuum-cost-balance in email [2]. One\n> > drawback of this approach could be that we allow the worker to sleep\n> > even though the I/O has been performed by some other worker.\n>\n> I don't understand this drawback.\n>\n\nI think the problem could be that the system is not properly throttled\nwhen it is supposed to be. Let me try by a simple example, say we\nhave two workers w-1 and w-2. The w-2 is primarily doing the I/O and\nw-1 is doing very less I/O but unfortunately whenever w-1 checks it\nfinds that cost_limit has exceeded and it goes for sleep, but w-1\nstill continues. Now in such a situation even though we have made one\nof the workers slept for a required time but ideally the worker which\nwas doing I/O should have slept. The aim is to make the system stop\ndoing I/O whenever the limit has exceeded, so that might not work in\nthe above situation.\n\n>\n> > (b) The other idea could be that we split the I/O among workers\n> > something similar to what we do for auto vacuum workers (see\n> > autovac_balance_cost). The basic idea would be that before launching\n> > workers, we need to compute the remaining I/O (heap operation would\n> > have used something) after which we need to sleep and split it equally\n> > across workers. Here, we are primarily thinking of dividing\n> > VacuumCostBalance and VacuumCostLimit parameters. 
Once the workers\n> > are finished, they need to let master backend know how much I/O they\n> > have consumed and then master backend can add it to it's current I/O\n> > consumed. I think we also need to rebalance the cost of remaining\n> > workers once some of the worker's exit. Dilip has prepared a POC\n> > patch for this, see 0002-POC-divide-vacuum-cost-limit in email [3].\n>\n> (b) doesn't strike me as advantageous. It seems quite possible that you\n> end up with one worker that has a lot more IO than others, leading to\n> unnecessary sleeps, even though the actually available IO budget has not\n> been used up.\n>\n\nYeah, this is possible, but to an extent, this is possible in the\ncurrent design as well where we balance the cost among autovacuum\nworkers. Now, it is quite possible that the current design itself is\nnot good and we don't want to do the same thing at another place, but\nat least we will be consistent and can explain the overall behavior.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Nov 2019 11:28:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Mon, Nov 4, 2019 at 11:58 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-11-04 12:59:02 -0500, Jeff Janes wrote:\n> > On Mon, Nov 4, 2019 at 1:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > For parallel vacuum [1], we were discussing what is the best way to\n> > > divide the cost among parallel workers but we didn't get many inputs\n> > > apart from people who are very actively involved in patch development.\n> > > I feel that we need some more inputs before we finalize anything, so\n> > > starting a new thread.\n> > >\n> >\n> > Maybe a I just don't have experience in the type of system that parallel\n> > vacuum is needed for, but if there is any meaningful IO throttling which is\n> > active, then what is the point of doing the vacuum in parallel in the first\n> > place?\n>\n> I am wondering the same - but to be fair, it's pretty easy to run into\n> cases where VACUUM is CPU bound. E.g. because most pages are in\n> shared_buffers, and compared to the size of the indexes number of tids\n> that need to be pruned is fairly small (also [1]). That means a lot of\n> pages need to be scanned, without a whole lot of IO going on. The\n> problem with that is just that the defaults for vacuum throttling will\n> also apply here, I've never seen anybody tune vacuum_cost_page_hit = 0,\n> vacuum_cost_page_dirty=0 or such (in contrast, the latter is the highest\n> cost currently). 
Nor do we reduce the cost of vacuum_cost_page_dirty\n> for unlogged tables.\n>\n> So while it doesn't seem unreasonable to want to use cost limiting to\n> protect against vacuum unexpectedly causing too much, especially read,\n> IO, I'm doubtful it has current practical relevance.\n>\n\nIIUC, you mean to say that it is not of much practical use to do\nparallel vacuum if I/O throttling is enabled for an operation, is that\nright?\n\n\n> I'm wondering how much of the benefit of parallel vacuum really is just\n> to work around vacuum ringbuffers often massively hurting performance\n> (see e.g. [2]).\n>\n\nYeah, it is a good thing to check, but if anything, I think a parallel\nvacuum will further improve the performance with larger ring buffers\nas it will make it more CPU bound.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Nov 2019 14:40:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Tue, Nov 5, 2019 at 1:12 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-11-04 14:33:41 -0500, Stephen Frost wrote:\n>\n> > I've been wondering if the accounting system should consider the cost\n> > per tablespace when there's multiple tablespaces involved, instead of\n> > throttling the overall process without consideration for the\n> > per-tablespace utilization.\n>\n> This all seems like a feature proposal, or two, independent of the\n> patch/question at hand. I think there's a good argument to be had that\n> we should severely overhaul the current vacuum cost limiting - it's way\n> way too hard to understand the bandwidth that it's allowed to\n> consume. But unless one of the proposals makes that measurably harder or\n> easier, I think we don't gain anything by entangling an already complex\n> patchset with something new.\n>\n\n+1. I think even if we want something related to per-tablespace\ncosting for vacuum (parallel), it should be done as a separate patch.\nIt is a whole new area where we need to define what is the appropriate\nway to achieve. It is going to change the current vacuum costing\nsystem in a big way which I don't think is reasonable to do as part of\na parallel vacuum patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Nov 2019 14:52:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Tue, Nov 5, 2019 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 4, 2019 at 11:58 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2019-11-04 12:59:02 -0500, Jeff Janes wrote:\n> > > On Mon, Nov 4, 2019 at 1:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > For parallel vacuum [1], we were discussing what is the best way to\n> > > > divide the cost among parallel workers but we didn't get many inputs\n> > > > apart from people who are very actively involved in patch development.\n> > > > I feel that we need some more inputs before we finalize anything, so\n> > > > starting a new thread.\n> > > >\n> > >\n> > > Maybe a I just don't have experience in the type of system that parallel\n> > > vacuum is needed for, but if there is any meaningful IO throttling which is\n> > > active, then what is the point of doing the vacuum in parallel in the first\n> > > place?\n> >\n> > I am wondering the same - but to be fair, it's pretty easy to run into\n> > cases where VACUUM is CPU bound. E.g. because most pages are in\n> > shared_buffers, and compared to the size of the indexes number of tids\n> > that need to be pruned is fairly small (also [1]). That means a lot of\n> > pages need to be scanned, without a whole lot of IO going on. The\n> > problem with that is just that the defaults for vacuum throttling will\n> > also apply here, I've never seen anybody tune vacuum_cost_page_hit = 0,\n> > vacuum_cost_page_dirty=0 or such (in contrast, the latter is the highest\n> > cost currently). 
Nor do we reduce the cost of vacuum_cost_page_dirty\n> > for unlogged tables.\n> >\n> > So while it doesn't seem unreasonable to want to use cost limiting to\n> > protect against vacuum unexpectedly causing too much, especially read,\n> > IO, I'm doubtful it has current practical relevance.\n> >\n>\n> IIUC, you mean to say that it is of not much practical use to do\n> parallel vacuum if I/O throttling is enabled for an operation, is that\n> right?\n>\n>\n> > I'm wondering how much of the benefit of parallel vacuum really is just\n> > to work around vacuum ringbuffers often massively hurting performance\n> > (see e.g. [2]).\n> >\n>\n> Yeah, it is a good thing to check, but if anything, I think a parallel\n> vacuum will further improve the performance with larger ring buffers\n> as it will make it more CPU bound.\nI have tested the same and the results prove that increasing the ring\nbuffer size we can see the performance gain. And, the gain is much\nmore with the parallel vacuum.\n\nTest case:\ncreate table test(a int, b int, c int, d int, e int, f int, g int, h int);\ncreate index idx1 on test(a);\ncreate index idx2 on test(b);\ncreate index idx3 on test(c);\ncreate index idx4 on test(d);\ncreate index idx5 on test(e);\ncreate index idx6 on test(f);\ncreate index idx7 on test(g);\ncreate index idx8 on test(h);\ninsert into test select i,i,i,i,i,i,i,i from generate_series(1,1000000) as i;\ndelete from test where a < 300000;\n\n( I have tested the parallel vacuum and non-parallel vacuum with\ndifferent ring buffer size)\n\n8 index\nring buffer size 246kb-> non-parallel: 7.6 seconds parallel (2\nworker): 3.9 seconds\nring buffer size 256mb-> non-parallel: 6.1 seconds parallel (2\nworker): 3.2 seconds\n\n4 index\nring buffer size 246kb -> non-parallel: 4.8 seconds parallel (2\nworker): 3.2 seconds\nring buffer size 256mb -> non-parallel: 3.8 seconds parallel (2\nworker): 2.6 seconds\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Nov 2019 20:46:41 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "Hi, \n\nOn November 5, 2019 7:16:41 AM PST, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>On Tue, Nov 5, 2019 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com>\n>wrote:\n>>\n>> On Mon, Nov 4, 2019 at 11:58 PM Andres Freund <andres@anarazel.de>\n>wrote:\n>> >\n>> > Hi,\n>> >\n>> > On 2019-11-04 12:59:02 -0500, Jeff Janes wrote:\n>> > > On Mon, Nov 4, 2019 at 1:54 AM Amit Kapila\n><amit.kapila16@gmail.com> wrote:\n>> > >\n>> > > > For parallel vacuum [1], we were discussing what is the best\n>way to\n>> > > > divide the cost among parallel workers but we didn't get many\n>inputs\n>> > > > apart from people who are very actively involved in patch\n>development.\n>> > > > I feel that we need some more inputs before we finalize\n>anything, so\n>> > > > starting a new thread.\n>> > > >\n>> > >\n>> > > Maybe a I just don't have experience in the type of system that\n>parallel\n>> > > vacuum is needed for, but if there is any meaningful IO\n>throttling which is\n>> > > active, then what is the point of doing the vacuum in parallel in\n>the first\n>> > > place?\n>> >\n>> > I am wondering the same - but to be fair, it's pretty easy to run\n>into\n>> > cases where VACUUM is CPU bound. E.g. because most pages are in\n>> > shared_buffers, and compared to the size of the indexes number of\n>tids\n>> > that need to be pruned is fairly small (also [1]). That means a lot\n>of\n>> > pages need to be scanned, without a whole lot of IO going on. The\n>> > problem with that is just that the defaults for vacuum throttling\n>will\n>> > also apply here, I've never seen anybody tune vacuum_cost_page_hit\n>= 0,\n>> > vacuum_cost_page_dirty=0 or such (in contrast, the latter is the\n>highest\n>> > cost currently). 
Nor do we reduce the cost of\n>vacuum_cost_page_dirty\n>> > for unlogged tables.\n>> >\n>> > So while it doesn't seem unreasonable to want to use cost limiting\n>to\n>> > protect against vacuum unexpectedly causing too much, especially\n>read,\n>> > IO, I'm doubtful it has current practical relevance.\n>> >\n>>\n>> IIUC, you mean to say that it is of not much practical use to do\n>> parallel vacuum if I/O throttling is enabled for an operation, is\n>that\n>> right?\n>>\n>>\n>> > I'm wondering how much of the benefit of parallel vacuum really is\n>just\n>> > to work around vacuum ringbuffers often massively hurting\n>performance\n>> > (see e.g. [2]).\n>> >\n>>\n>> Yeah, it is a good thing to check, but if anything, I think a\n>parallel\n>> vacuum will further improve the performance with larger ring buffers\n>> as it will make it more CPU bound.\n>I have tested the same and the results prove that increasing the ring\n>buffer size we can see the performance gain. And, the gain is much\n>more with the parallel vacuum.\n>\n>Test case:\n>create table test(a int, b int, c int, d int, e int, f int, g int, h\n>int);\n>create index idx1 on test(a);\n>create index idx2 on test(b);\n>create index idx3 on test(c);\n>create index idx4 on test(d);\n>create index idx5 on test(e);\n>create index idx6 on test(f);\n>create index idx7 on test(g);\n>create index idx8 on test(h);\n>insert into test select i,i,i,i,i,i,i,i from generate_series(1,1000000)\n>as i;\n>delete from test where a < 300000;\n>\n>( I have tested the parallel vacuum and non-parallel vacuum with\n>different ring buffer size)\n\nThanks!\n\n>8 index\n>ring buffer size 246kb-> non-parallel: 7.6 seconds parallel (2\n>worker): 3.9 seconds\n>ring buffer size 256mb-> non-parallel: 6.1 seconds parallel (2\n>worker): 3.2 seconds\n>\n>4 index\n>ring buffer size 246kb -> non-parallel: 4.8 seconds parallel (2\n>worker): 3.2 seconds\n>ring buffer size 256mb -> non-parallel: 3.8 seconds parallel (2\n>worker): 2.6 seconds\n\nWhat 
about the case of just disabling the ring buffer logic?\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Tue, 05 Nov 2019 07:19:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Tue, Nov 5, 2019 at 8:49 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On November 5, 2019 7:16:41 AM PST, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >On Tue, Nov 5, 2019 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com>\n> >wrote:\n> >>\n> >> On Mon, Nov 4, 2019 at 11:58 PM Andres Freund <andres@anarazel.de>\n> >wrote:\n> >> >\n> >> > Hi,\n> >> >\n> >> > On 2019-11-04 12:59:02 -0500, Jeff Janes wrote:\n> >> > > On Mon, Nov 4, 2019 at 1:54 AM Amit Kapila\n> ><amit.kapila16@gmail.com> wrote:\n> >> > >\n> >> > > > For parallel vacuum [1], we were discussing what is the best\n> >way to\n> >> > > > divide the cost among parallel workers but we didn't get many\n> >inputs\n> >> > > > apart from people who are very actively involved in patch\n> >development.\n> >> > > > I feel that we need some more inputs before we finalize\n> >anything, so\n> >> > > > starting a new thread.\n> >> > > >\n> >> > >\n> >> > > Maybe a I just don't have experience in the type of system that\n> >parallel\n> >> > > vacuum is needed for, but if there is any meaningful IO\n> >throttling which is\n> >> > > active, then what is the point of doing the vacuum in parallel in\n> >the first\n> >> > > place?\n> >> >\n> >> > I am wondering the same - but to be fair, it's pretty easy to run\n> >into\n> >> > cases where VACUUM is CPU bound. E.g. because most pages are in\n> >> > shared_buffers, and compared to the size of the indexes number of\n> >tids\n> >> > that need to be pruned is fairly small (also [1]). That means a lot\n> >of\n> >> > pages need to be scanned, without a whole lot of IO going on. The\n> >> > problem with that is just that the defaults for vacuum throttling\n> >will\n> >> > also apply here, I've never seen anybody tune vacuum_cost_page_hit\n> >= 0,\n> >> > vacuum_cost_page_dirty=0 or such (in contrast, the latter is the\n> >highest\n> >> > cost currently). 
Nor do we reduce the cost of\n> >vacuum_cost_page_dirty\n> >> > for unlogged tables.\n> >> >\n> >> > So while it doesn't seem unreasonable to want to use cost limiting\n> >to\n> >> > protect against vacuum unexpectedly causing too much, especially\n> >read,\n> >> > IO, I'm doubtful it has current practical relevance.\n> >> >\n> >>\n> >> IIUC, you mean to say that it is of not much practical use to do\n> >> parallel vacuum if I/O throttling is enabled for an operation, is\n> >that\n> >> right?\n> >>\n> >>\n> >> > I'm wondering how much of the benefit of parallel vacuum really is\n> >just\n> >> > to work around vacuum ringbuffers often massively hurting\n> >performance\n> >> > (see e.g. [2]).\n> >> >\n> >>\n> >> Yeah, it is a good thing to check, but if anything, I think a\n> >parallel\n> >> vacuum will further improve the performance with larger ring buffers\n> >> as it will make it more CPU bound.\n> >I have tested the same and the results prove that increasing the ring\n> >buffer size we can see the performance gain. 
And, the gain is much\n> >more with the parallel vacuum.\n> >\n> >Test case:\n> >create table test(a int, b int, c int, d int, e int, f int, g int, h\n> >int);\n> >create index idx1 on test(a);\n> >create index idx2 on test(b);\n> >create index idx3 on test(c);\n> >create index idx4 on test(d);\n> >create index idx5 on test(e);\n> >create index idx6 on test(f);\n> >create index idx7 on test(g);\n> >create index idx8 on test(h);\n> >insert into test select i,i,i,i,i,i,i,i from generate_series(1,1000000)\n> >as i;\n> >delete from test where a < 300000;\n> >\n> >( I have tested the parallel vacuum and non-parallel vacuum with\n> >different ring buffer size)\n>\n> Thanks!\n>\n> >8 index\n> >ring buffer size 246kb-> non-parallel: 7.6 seconds parallel (2\n> >worker): 3.9 seconds\n> >ring buffer size 256mb-> non-parallel: 6.1 seconds parallel (2\n> >worker): 3.2 seconds\n> >\n> >4 index\n> >ring buffer size 246kb -> non-parallel: 4.8 seconds parallel (2\n> >worker): 3.2 seconds\n> >ring buffer size 256mb -> non-parallel: 3.8 seconds parallel (2\n> >worker): 2.6 seconds\n>\n> What about the case of just disabling the ring buffer logic?\n>\nRepeated the same test by disabling the ring buffer logic. Results\nare almost same as increasing the ring buffer size.\n\nTested with 4GB shared buffers:\n\n8 index\nuse shared buffers -> non-parallel: 6.2seconds parallel (2 worker): 3.3seconds\n\n4 index\nuse shared buffer -> non-parallel: 3.8seconds parallel (2 worker): 2.7seconds\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Nov 2019 21:20:15 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Tue, Nov 5, 2019 at 1:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n>\n> > That's quite doable independent of parallelism, as we don't have tables\n> > or indexes spanning more than one tablespace. True, you could then make\n> > the processing of an individual vacuum faster by allowing to utilize\n> > multiple tablespace budgets at the same time.\n>\n> Yes, it's possible to do independent of parallelism, but what I was\n> trying to get at above is that it might not be worth the effort. When\n> it comes to parallel vacuum though, I'm not sure that you can just punt\n> on this question since you'll naturally end up spanning multiple\n> tablespaces concurrently, at least if the heap+indexes are spread across\n> multiple tablespaces and you're operating against more than one of those\n> relations at a time\n>\n\nEach parallel worker operates on a separate index. It might be worth\nexploring per-tablespace vacuum throttling, but that should not be a\nrequirement for the currently proposed patch.\n\nAs per feedback in this thread, it seems that for now, it is better,\nif we can allow a parallel vacuum only when I/O throttling is not\nenabled. We can later extend it based on feedback from the field once\nthe feature starts getting used.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Nov 2019 07:53:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-06 07:53:09 +0530, Amit Kapila wrote:\n> As per feedback in this thread, it seems that for now, it is better,\n> if we can allow a parallel vacuum only when I/O throttling is not\n> enabled. We can later extend it based on feedback from the field once\n> the feature starts getting used.\n\nThat's not my read on this thread. I don't think we should introduce\nthis feature without a solution for the throttling.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 5 Nov 2019 18:25:50 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Wed, Nov 6, 2019 at 7:55 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-11-06 07:53:09 +0530, Amit Kapila wrote:\n> > As per feedback in this thread, it seems that for now, it is better,\n> > if we can allow a parallel vacuum only when I/O throttling is not\n> > enabled. We can later extend it based on feedback from the field once\n> > the feature starts getting used.\n>\n> That's not my read on this thread. I don't think we should introduce\n> this feature without a solution for the throttling.\n>\n\nOkay, then I misunderstood your response to Jeff's email [1]. Anyway,\nwe have already explored two different approaches as mentioned in the\ninitial email which has somewhat similar results on initial tests.\nSo, we can explore more on those lines. Do you any preference or any\nother idea?\n\n\n[1] - https://www.postgresql.org/message-id/20191104182829.57bkz64qn5k3uwc3%40alap3.anarazel.de\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Nov 2019 08:09:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "Greetings,\n\n* Amit Kapila (amit.kapila16@gmail.com) wrote:\n> On Tue, Nov 5, 2019 at 1:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Andres Freund (andres@anarazel.de) wrote:\n> > > That's quite doable independent of parallelism, as we don't have tables\n> > > or indexes spanning more than one tablespace. True, you could then make\n> > > the processing of an individual vacuum faster by allowing to utilize\n> > > multiple tablespace budgets at the same time.\n> >\n> > Yes, it's possible to do independent of parallelism, but what I was\n> > trying to get at above is that it might not be worth the effort. When\n> > it comes to parallel vacuum though, I'm not sure that you can just punt\n> > on this question since you'll naturally end up spanning multiple\n> > tablespaces concurrently, at least if the heap+indexes are spread across\n> > multiple tablespaces and you're operating against more than one of those\n> > relations at a time\n> \n> Each parallel worker operates on a separate index. 
It might be worth\n> exploring per-tablespace vacuum throttling, but that should not be a\n> requirement for the currently proposed patch.\n\nRight, that each operates on a separate index in parallel is what I had\nfigured was probably happening, and that's why I brought up the question\nof \"well, what does IO throttling mean when you've got multiple\ntablespaces involved with presumably independent IO channels...?\" (or,\nat least, that's what I was trying to go for).\n\nThis isn't a question with the current system and way the code works\nwithin a single vacuum operation, as we're never operating on more than\none relation concurrently in that case.\n\nOf course, we don't currently do anything to manage IO utilization\nacross tablespaces when there are multiple autovacuum workers running\nconcurrently, which I suppose goes to Andres' point that we aren't\nreally doing anything to deal with this today and therefore this is\nperhaps not all that new of an issue just with the addition of\nparallel vacuum. I'd still argue that it becomes a lot more apparent\nwhen you're talking about one parallel vacuum, but ultimately we should\nprobably be thinking about how to manage the resources across all the\nvacuums and tablespaces and queries and such.\n\nIn an ideal world, we'd track the i/o from front-end queries, have some\nidea of the total i/o possible for each IO channel, and allow vacuum and\nwhatever other background processes need to run to scale up and down,\nwith enough buffer to avoid ever being maxed out on i/o, but keeping up\na consistent rate of i/o that lets everything finish as quickly as\npossible.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 5 Nov 2019 22:51:28 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Tue, Nov 5, 2019 at 11:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 4, 2019 at 11:42 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> >\n> > > The two approaches to solve this problem being discussed in that\n> > > thread [1] are as follows:\n> > > (a) Allow the parallel workers and master backend to have a shared\n> > > view of vacuum cost related parameters (mainly VacuumCostBalance) and\n> > > allow each worker to update it and then based on that decide whether\n> > > it needs to sleep. Sawada-San has done the POC for this approach.\n> > > See v32-0004-PoC-shared-vacuum-cost-balance in email [2]. One\n> > > drawback of this approach could be that we allow the worker to sleep\n> > > even though the I/O has been performed by some other worker.\n> >\n> > I don't understand this drawback.\n> >\n>\n> I think the problem could be that the system is not properly throttled\n> when it is supposed to be. Let me try by a simple example, say we\n> have two workers w-1 and w-2. The w-2 is primarily doing the I/O and\n> w-1 is doing very less I/O but unfortunately whenever w-1 checks it\n> finds that cost_limit has exceeded and it goes for sleep, but w-1\n> still continues.\n>\n\nTypo in the above sentence. /but w-1 still continues/but w-2 still continues.\n\n> Now in such a situation even though we have made one\n> of the workers slept for a required time but ideally the worker which\n> was doing I/O should have slept. The aim is to make the system stop\n> doing I/O whenever the limit has exceeded, so that might not work in\n> the above situation.\n>\n\nOne idea to fix this drawback is that if we somehow avoid letting the\nworkers sleep which has done less or no I/O as compared to other\nworkers, then we can to a good extent ensure that workers which are\ndoing more I/O will be throttled more. 
What we can do is to allow any\nworker to sleep only if it has performed I/O above a certain\nthreshold and the overall balance is more than the cost_limit set by\nthe system. Then we will allow the worker to sleep in proportion to\nthe work done by it and reduce VacuumSharedCostBalance by the\namount consumed by the current worker. Something like:\n\nIf ( VacuumSharedCostBalance >= VacuumCostLimit &&\n MyCostBalance > (threshold) VacuumCostLimit / workers)\n{\nVacuumSharedCostBalance -= MyCostBalance;\nSleep (delay * MyCostBalance/VacuumSharedCostBalance)\n}\n\nAssume the threshold is 0.5: that means if a worker has done more\nthan 50% of the work expected from it and the overall shared\ncost balance is exceeded, then we will consider putting this worker to sleep.\n\nWhat do you guys think?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Nov 2019 12:14:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Wed, Nov 6, 2019 at 9:21 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Amit Kapila (amit.kapila16@gmail.com) wrote:\n> > On Tue, Nov 5, 2019 at 1:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > * Andres Freund (andres@anarazel.de) wrote:\n> > > > That's quite doable independent of parallelism, as we don't have tables\n> > > > or indexes spanning more than one tablespace. True, you could then make\n> > > > the processing of an individual vacuum faster by allowing to utilize\n> > > > multiple tablespace budgets at the same time.\n> > >\n> > > Yes, it's possible to do independent of parallelism, but what I was\n> > > trying to get at above is that it might not be worth the effort. When\n> > > it comes to parallel vacuum though, I'm not sure that you can just punt\n> > > on this question since you'll naturally end up spanning multiple\n> > > tablespaces concurrently, at least if the heap+indexes are spread across\n> > > multiple tablespaces and you're operating against more than one of those\n> > > relations at a time\n> >\n> > Each parallel worker operates on a separate index. 
It might be worth\n> > exploring per-tablespace vacuum throttling, but that should not be a\n> > requirement for the currently proposed patch.\n>\n> Right, that each operates on a separate index in parallel is what I had\n> figured was probably happening, and that's why I brought up the question\n> of \"well, what does IO throttling mean when you've got multiple\n> tablespaces involved with presumably independent IO channels...?\" (or,\n> at least, that's what I was trying to go for).\n>\n> This isn't a question with the current system and way the code works\n> within a single vacuum operation, as we're never operating on more than\n> one relation concurrently in that case.\n>\n> Of course, we don't currently do anything to manage IO utilization\n> across tablespaces when there are multiple autovacuum workers running\n> concurrently, which I suppose goes to Andres' point that we aren't\n> really doing anything to deal with this today and therefore this is\n> perhaps not all that new of an issue just with the addition of\n> parallel vacuum. I'd still argue that it becomes a lot more apparent\n> when you're talking about one parallel vacuum, but ultimately we should\n> probably be thinking about how to manage the resources across all the\n> vacuums and tablespaces and queries and such.\n>\n> In an ideal world, we'd track the i/o from front-end queries, have some\n> idea of the total i/o possible for each IO channel, and allow vacuum and\n> whatever other background processes need to run to scale up and down,\n> with enough buffer to avoid ever being maxed out on i/o, but keeping up\n> a consistent rate of i/o that lets everything finish as quickly as\n> possible.\n\nIMHO, in future suppose we improve the I/O throttling for each\ntablespace, maybe by maintaining the independent balance for relation\nand each index of the relation or may be combined balance for the\nindexes which are on the same tablespace. 
And the balance can be\nchecked against its tablespace's I/O limit. So if we get such a\nmechanism in the future, it seems it would be easily\nextendable to parallel vacuum, wouldn't it? Because across workers\nwe can also track a tablespace-wise shared balance (if we go with the\nshared costing approach, for example).\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Nov 2019 14:02:47 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Wed, 6 Nov 2019 at 15:45, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 5, 2019 at 11:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Nov 4, 2019 at 11:42 PM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > >\n> > > > The two approaches to solve this problem being discussed in that\n> > > > thread [1] are as follows:\n> > > > (a) Allow the parallel workers and master backend to have a shared\n> > > > view of vacuum cost related parameters (mainly VacuumCostBalance) and\n> > > > allow each worker to update it and then based on that decide whether\n> > > > it needs to sleep. Sawada-San has done the POC for this approach.\n> > > > See v32-0004-PoC-shared-vacuum-cost-balance in email [2]. One\n> > > > drawback of this approach could be that we allow the worker to sleep\n> > > > even though the I/O has been performed by some other worker.\n> > >\n> > > I don't understand this drawback.\n> > >\n> >\n> > I think the problem could be that the system is not properly throttled\n> > when it is supposed to be. Let me try by a simple example, say we\n> > have two workers w-1 and w-2. The w-2 is primarily doing the I/O and\n> > w-1 is doing very less I/O but unfortunately whenever w-1 checks it\n> > finds that cost_limit has exceeded and it goes for sleep, but w-1\n> > still continues.\n> >\n>\n> Typo in the above sentence. /but w-1 still continues/but w-2 still continues.\n>\n> > Now in such a situation even though we have made one\n> > of the workers slept for a required time but ideally the worker which\n> > was doing I/O should have slept. 
The aim is to make the system stop\n> > doing I/O whenever the limit has exceeded, so that might not work in\n> > the above situation.\n> >\n>\n> One idea to fix this drawback is that if we somehow avoid letting the\n> workers sleep which has done less or no I/O as compared to other\n> workers, then we can to a good extent ensure that workers which are\n> doing more I/O will be throttled more. What we can do is to allow any\n> worker sleep only if it has performed the I/O above a certain\n> threshold and the overall balance is more than the cost_limit set by\n> the system. Then we will allow the worker to sleep proportional to\n> the work done by it and reduce the VacuumSharedCostBalance by the\n> amount which is consumed by the current worker. Something like:\n>\n> If ( VacuumSharedCostBalance >= VacuumCostLimit &&\n> MyCostBalance > (threshold) VacuumCostLimit / workers)\n> {\n> VacuumSharedCostBalance -= MyCostBalance;\n> Sleep (delay * MyCostBalance/VacuumSharedCostBalance)\n> }\n>\n> Assume threshold be 0.5, what that means is, if it has done work more\n> than 50% of what is expected from this worker and the overall share\n> cost balance is exceeded, then we will consider this worker to sleep.\n>\n> What do you guys think?\n\nI think the idea that the more consuming I/O they sleep more longer\ntime seems good. There seems not to be the drawback of approach(b)\nthat is to unnecessarily delay vacuum if some indexes are very small\nor bulk-deletions of indexes does almost nothing such as brin. But on\nthe other hand it's possible that workers don't sleep even if shared\ncost balance already exceeds the limit because it's necessary for\nsleeping that local balance exceeds the worker's limit divided by the\nnumber of workers. For example, a worker is scheduled doing I/O and\nexceeds the limit substantially while other 2 workers do less I/O. And\nthen the 2 workers are scheduled and consume I/O. 
The total cost\nbalance already exceeds the limit but the workers will not sleep as\nlong as the local balance is less than (limit / # of workers).\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 8 Nov 2019 11:48:16 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 8:18 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 6 Nov 2019 at 15:45, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Nov 5, 2019 at 11:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 4, 2019 at 11:42 PM Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > >\n> > > > > The two approaches to solve this problem being discussed in that\n> > > > > thread [1] are as follows:\n> > > > > (a) Allow the parallel workers and master backend to have a shared\n> > > > > view of vacuum cost related parameters (mainly VacuumCostBalance) and\n> > > > > allow each worker to update it and then based on that decide whether\n> > > > > it needs to sleep. Sawada-San has done the POC for this approach.\n> > > > > See v32-0004-PoC-shared-vacuum-cost-balance in email [2]. One\n> > > > > drawback of this approach could be that we allow the worker to sleep\n> > > > > even though the I/O has been performed by some other worker.\n> > > >\n> > > > I don't understand this drawback.\n> > > >\n> > >\n> > > I think the problem could be that the system is not properly throttled\n> > > when it is supposed to be. Let me try by a simple example, say we\n> > > have two workers w-1 and w-2. The w-2 is primarily doing the I/O and\n> > > w-1 is doing very less I/O but unfortunately whenever w-1 checks it\n> > > finds that cost_limit has exceeded and it goes for sleep, but w-1\n> > > still continues.\n> > >\n> >\n> > Typo in the above sentence. /but w-1 still continues/but w-2 still continues.\n> >\n> > > Now in such a situation even though we have made one\n> > > of the workers slept for a required time but ideally the worker which\n> > > was doing I/O should have slept. 
The aim is to make the system stop\n> > > doing I/O whenever the limit has exceeded, so that might not work in\n> > > the above situation.\n> > >\n> >\n> > One idea to fix this drawback is that if we somehow avoid letting the\n> > workers sleep which has done less or no I/O as compared to other\n> > workers, then we can to a good extent ensure that workers which are\n> > doing more I/O will be throttled more. What we can do is to allow any\n> > worker sleep only if it has performed the I/O above a certain\n> > threshold and the overall balance is more than the cost_limit set by\n> > the system. Then we will allow the worker to sleep proportional to\n> > the work done by it and reduce the VacuumSharedCostBalance by the\n> > amount which is consumed by the current worker. Something like:\n> >\n> > If ( VacuumSharedCostBalance >= VacuumCostLimit &&\n> > MyCostBalance > (threshold) VacuumCostLimit / workers)\n> > {\n> > VacuumSharedCostBalance -= MyCostBalance;\n> > Sleep (delay * MyCostBalance/VacuumSharedCostBalance)\n> > }\n> >\n> > Assume threshold be 0.5, what that means is, if it has done work more\n> > than 50% of what is expected from this worker and the overall share\n> > cost balance is exceeded, then we will consider this worker to sleep.\n> >\n> > What do you guys think?\n>\n> I think the idea that the more consuming I/O they sleep more longer\n> time seems good. There seems not to be the drawback of approach(b)\n> that is to unnecessarily delay vacuum if some indexes are very small\n> or bulk-deletions of indexes does almost nothing such as brin. But on\n> the other hand it's possible that workers don't sleep even if shared\n> cost balance already exceeds the limit because it's necessary for\n> sleeping that local balance exceeds the worker's limit divided by the\n> number of workers. For example, a worker is scheduled doing I/O and\n> exceeds the limit substantially while other 2 workers do less I/O. 
And\n> then the 2 workers are scheduled and consume I/O. The total cost\n> balance already exceeds the limit but the workers will not sleep as\n> long as the local balance is less than (limit / # of workers).\n>\n\nRight, this is the reason I suggested keeping some threshold for the local\nbalance (say 50% of (limit / # of workers)). I think we need to do\nsome experiments to see what is the best thing to do.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 8 Nov 2019 08:37:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 8:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> On Fri, Nov 8, 2019 at 8:18 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Wed, 6 Nov 2019 at 15:45, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 5, 2019 at 11:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 4, 2019 at 11:42 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > >\n> > > > >\n> > > > > > The two approaches to solve this problem being discussed in that\n> > > > > > thread [1] are as follows:\n> > > > > > (a) Allow the parallel workers and master backend to have a shared\n> > > > > > view of vacuum cost related parameters (mainly VacuumCostBalance) and\n> > > > > > allow each worker to update it and then based on that decide whether\n> > > > > > it needs to sleep. Sawada-San has done the POC for this approach.\n> > > > > > See v32-0004-PoC-shared-vacuum-cost-balance in email [2]. One\n> > > > > > drawback of this approach could be that we allow the worker to sleep\n> > > > > > even though the I/O has been performed by some other worker.\n> > > > >\n> > > > > I don't understand this drawback.\n> > > > >\n> > > >\n> > > > I think the problem could be that the system is not properly throttled\n> > > > when it is supposed to be. Let me try by a simple example, say we\n> > > > have two workers w-1 and w-2. The w-2 is primarily doing the I/O and\n> > > > w-1 is doing very less I/O but unfortunately whenever w-1 checks it\n> > > > finds that cost_limit has exceeded and it goes for sleep, but w-1\n> > > > still continues.\n> > > >\n> > >\n> > > Typo in the above sentence. /but w-1 still continues/but w-2 still continues.\n> > >\n> > > > Now in such a situation even though we have made one\n> > > > of the workers slept for a required time but ideally the worker which\n> > > > was doing I/O should have slept. 
The aim is to make the system stop\n> > > > doing I/O whenever the limit has exceeded, so that might not work in\n> > > > the above situation.\n> > > >\n> > >\n> > > One idea to fix this drawback is that if we somehow avoid letting the\n> > > workers sleep which has done less or no I/O as compared to other\n> > > workers, then we can to a good extent ensure that workers which are\n> > > doing more I/O will be throttled more. What we can do is to allow any\n> > > worker sleep only if it has performed the I/O above a certain\n> > > threshold and the overall balance is more than the cost_limit set by\n> > > the system. Then we will allow the worker to sleep proportional to\n> > > the work done by it and reduce the VacuumSharedCostBalance by the\n> > > amount which is consumed by the current worker. Something like:\n> > >\n> > > If ( VacuumSharedCostBalance >= VacuumCostLimit &&\n> > > MyCostBalance > (threshold) VacuumCostLimit / workers)\n> > > {\n> > > VacuumSharedCostBalance -= MyCostBalance;\n> > > Sleep (delay * MyCostBalance/VacuumSharedCostBalance)\n> > > }\n> > >\n> > > Assume threshold be 0.5, what that means is, if it has done work more\n> > > than 50% of what is expected from this worker and the overall share\n> > > cost balance is exceeded, then we will consider this worker to sleep.\n> > >\n> > > What do you guys think?\n> >\n> > I think the idea that the more consuming I/O they sleep more longer\n> > time seems good. There seems not to be the drawback of approach(b)\n> > that is to unnecessarily delay vacuum if some indexes are very small\n> > or bulk-deletions of indexes does almost nothing such as brin. But on\n> > the other hand it's possible that workers don't sleep even if shared\n> > cost balance already exceeds the limit because it's necessary for\n> > sleeping that local balance exceeds the worker's limit divided by the\n> > number of workers. 
For example, a worker is scheduled doing I/O and\n> > exceeds the limit substantially while other 2 workers do less I/O. And\n> > then the 2 workers are scheduled and consume I/O. The total cost\n> > balance already exceeds the limit but the workers will not sleep as\n> > long as the local balance is less than (limit / # of workers).\n> >\n>\n> Right, this is the reason I told to keep some threshold for local\n> balance(say 50% of (limit / # of workers)). I think we need to do\n> some experiments to see what is the best thing to do.\n>\nI have done some experiments on this line. I have first produced a\ncase that shows the problem with the existing shared costing\npatch (a worker which is doing less I/O might pay the penalty on behalf\nof the worker who is doing more I/O). I have also hacked Sawada-san's\nshared costing patch so that a worker only goes to sleep if the\nshared balance has crossed the limit and its local balance has\ncrossed some threshold[1].\n\nTest setup: I have created 4 indexes on the table, of which 3\nindexes have a lot of pages to process but need to dirty only a few\npages, whereas the 4th index has very few pages to process but\nneeds to dirty all of them. I have attached the test script\nwith this mail. I have shown the delay time each worker\nhas incurred. 
Also shown are the total I/O[1] of each worker and the page hit,\npage miss and page dirty counts.\n[1] total I/O = _nhit * VacuumCostPageHit + _nmiss *\nVacuumCostPageMiss + _ndirty * VacuumCostPageDirty\n\npatch 1: Shared costing patch: (delay condition ->\nVacuumSharedCostBalance > VacuumCostLimit)\nworker 0 delay=80.00 total I/O=17931 hit=17891 miss=0 dirty=2\nworker 1 delay=40.00 total I/O=17931 hit=17891 miss=0 dirty=2\nworker 2 delay=110.00 total I/O=17931 hit=17891 miss=0 dirty=2\nworker 3 delay=120.98 total I/O=16378 hit=4318 miss=0 dirty=603\n\nObservation1: I think it is clearly visible here that worker 3 is\ndoing the least total I/O but delaying for the maximum amount of time.\nOTOH, worker 1 is delaying for very little time compared to how much\nI/O it is doing. To solve this problem, I have added a small\ntweak to the patch, wherein a worker will only sleep if its local\nbalance has crossed some threshold. And we can see that with that\nchange the problem is solved to quite an extent.\n\npatch 2: Shared costing patch: (delay condition ->\nVacuumSharedCostBalance > VacuumCostLimit && VacuumLocalBalance >\nVacuumCostLimit/number of workers)\nworker 0 delay=100.12 total I/O=17931 hit=17891 miss=0 dirty=2\nworker 1 delay=90.00 total I/O=17931 hit=17891 miss=0 dirty=2\nworker 2 delay=80.06 total I/O=17931 hit=17891 miss=0 dirty=2\nworker 3 delay=80.72 total I/O=16378 hit=4318 miss=0 dirty=603\n\nObservation2: This patch solves the problem discussed with patch1, but\nin some extreme cases there is a possibility that the shared balance\ncan become twice as much as the limit and still no worker goes for the\ndelay. To solve that there could be multiple ideas: a) Set a max\nlimit on the shared balance, e.g. 
1.5 * VacuumCostLimit; after that we will\nimpose the delay on whoever tries to do the I/O, irrespective of its local\nbalance.\nb) Set a somewhat lower value for the local threshold, e.g. 50% of the local limit.\n\nHere I have changed patch2 as per (b): if the local balance reaches\n50% of the local limit and the shared balance hits the vacuum cost limit,\nthen go for the delay.\n\npatch 3: Shared costing patch: (delay condition ->\nVacuumSharedCostBalance > VacuumCostLimit && VacuumLocalBalance > 0.5\n* VacuumCostLimit/number of workers)\nworker 0 delay=70.03 total I/O=17931 hit=17891 miss=0 dirty=2\nworker 1 delay=100.14 total I/O=17931 hit=17891 miss=0 dirty=2\nworker 2 delay=80.01 total I/O=17931 hit=17891 miss=0 dirty=2\nworker 3 delay=101.03 total I/O=16378 hit=4318 miss=0 dirty=603\n\nObservation3: I think patch3 doesn't completely solve the issue\ndiscussed in patch1, but it's far better than patch1. But patch 2\nmight have another problem, as discussed in observation2.\n\nI think I need to do some more analysis and experiments before we can\nreach a conclusion. But one point is clear: we need to do\nsomething to solve the problem observed with patch1 if we are going\nwith the shared costing approach.\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 8 Nov 2019 09:39:31 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 9:39 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I have done some experiments on this line. I have first produced a\n> case where we can show the problem with the existing shared costing\n> patch (worker which is doing less I/O might pay the penalty on behalf\n> of the worker who is doing more I/O). I have also hacked the shared\n> costing patch of Swada-san so that worker only go for sleep if the\n> shared balance has crossed the limit and it's local balance has\n> crossed some threadshold[1].\n>\n> Test setup: I have created 4 indexes on the table. Out of which 3\n> indexes will have a lot of pages to process but need to dirty a few\n> pages whereas the 4th index will have to process a very less number of\n> pages but need to dirty all of them. I have attached the test script\n> along with the mail. I have shown what is the delay time each worker\n> have done. What is total I/O[1] each worker and what is the page hit,\n> page miss and page dirty count?\n> [1] total I/O = _nhit * VacuumCostPageHit + _nmiss *\n> VacuumCostPageMiss + _ndirty * VacuumCostPageDirty\n>\n> patch 1: Shared costing patch: (delay condition ->\n> VacuumSharedCostBalance > VacuumCostLimit)\n> worker 0 delay=80.00 total I/O=17931 hit=17891 miss=0 dirty=2\n> worker 1 delay=40.00 total I/O=17931 hit=17891 miss=0 dirty=2\n> worker 2 delay=110.00 total I/O=17931 hit=17891 miss=0 dirty=2\n> worker 3 delay=120.98 total I/O=16378 hit=4318 miss=0 dirty=603\n>\n> Observation1: I think here it's clearly visible that worker 3 is\n> doing the least total I/O but delaying for maximum amount of time.\n> OTOH, worker 1 is delaying for very little time compared to how much\n> I/O it is doing. So for solving this problem, I have add a small\n> tweak to the patch. Wherein the worker will only sleep if its local\n> balance has crossed some threshold. 
And, we can see that with that\n> change the problem is solved up to quite an extent.\n>\n> patch 2: Shared costing patch: (delay condition ->\n> VacuumSharedCostBalance > VacuumCostLimit && VacuumLocalBalance >\n> VacuumCostLimit/number of workers)\n> worker 0 delay=100.12 total I/O=17931 hit=17891 miss=0 dirty=2\n> worker 1 delay=90.00 total I/O=17931 hit=17891 miss=0 dirty=2\n> worker 2 delay=80.06 total I/O=17931 hit=17891 miss=0 dirty=2\n> worker 3 delay=80.72 total I/O=16378 hit=4318 miss=0 dirty=603\n>\n> Observation2: This patch solves the problem discussed with patch1 but\n> in some extreme cases there is a possibility that the shared limit can\n> become twice as much as local limit and still no worker goes for the\n> delay. For solving that there could be multiple ideas a) Set the max\n> limit on shared balance e.g. 1.5 * VacuumCostLimit after that we will\n> give the delay whoever tries to do the I/O irrespective of its local\n> balance.\n> b) Set a little lower value for the local threshold e.g 50% of the local limit\n>\n> Here I have changed the patch2 as per (b) If local balance reaches to\n> 50% of the local limit and shared balance hit the vacuum cost limit\n> then go for the delay.\n>\n> patch 3: Shared costing patch: (delay condition ->\n> VacuumSharedCostBalance > VacuumCostLimit && VacuumLocalBalance > 0.5\n> * VacuumCostLimit/number of workers)\n> worker 0 delay=70.03 total I/O=17931 hit=17891 miss=0 dirty=2\n> worker 1 delay=100.14 total I/O=17931 hit=17891 miss=0 dirty=2\n> worker 2 delay=80.01 total I/O=17931 hit=17891 miss=0 dirty=2\n> worker 3 delay=101.03 total I/O=16378 hit=4318 miss=0 dirty=603\n>\n> Observation3: I think patch3 doesn't completely solve the issue\n> discussed in patch1 but its far better than patch1.\n>\n\nYeah, I think it is difficult to get the exact balance, but we can try\nto be as close as possible. 
We can try playing with the threshold, and\nanother possibility is to sleep in proportion to the amount of\nI/O done by the worker.\n\nThanks for doing these experiments, but I think it is better if you\ncan share the modified patches so that others can also reproduce what\nyou are seeing. There is no need to post the entire parallel vacuum\npatch-set, but the costing-related patch can be posted with a\nreference to which patches are required from the parallel vacuum\nthread. Another option is to move this discussion to the parallel\nvacuum thread, but I think it is better to decide the costing model\nhere.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 8 Nov 2019 11:49:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 11:49 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Nov 8, 2019 at 9:39 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I have done some experiments on this line. I have first produced a\n> > case where we can show the problem with the existing shared costing\n> > patch (worker which is doing less I/O might pay the penalty on behalf\n> > of the worker who is doing more I/O). I have also hacked the shared\n> > costing patch of Swada-san so that worker only go for sleep if the\n> > shared balance has crossed the limit and it's local balance has\n> > crossed some threadshold[1].\n> >\n> > Test setup: I have created 4 indexes on the table. Out of which 3\n> > indexes will have a lot of pages to process but need to dirty a few\n> > pages whereas the 4th index will have to process a very less number of\n> > pages but need to dirty all of them. I have attached the test script\n> > along with the mail. I have shown what is the delay time each worker\n> > have done. What is total I/O[1] each worker and what is the page hit,\n> > page miss and page dirty count?\n> > [1] total I/O = _nhit * VacuumCostPageHit + _nmiss *\n> > VacuumCostPageMiss + _ndirty * VacuumCostPageDirty\n> >\n> > patch 1: Shared costing patch: (delay condition ->\n> > VacuumSharedCostBalance > VacuumCostLimit)\n> > worker 0 delay=80.00 total I/O=17931 hit=17891 miss=0 dirty=2\n> > worker 1 delay=40.00 total I/O=17931 hit=17891 miss=0 dirty=2\n> > worker 2 delay=110.00 total I/O=17931 hit=17891 miss=0 dirty=2\n> > worker 3 delay=120.98 total I/O=16378 hit=4318 miss=0 dirty=603\n> >\n> > Observation1: I think here it's clearly visible that worker 3 is\n> > doing the least total I/O but delaying for maximum amount of time.\n> > OTOH, worker 1 is delaying for very little time compared to how much\n> > I/O it is doing. So for solving this problem, I have add a small\n> > tweak to the patch. 
Wherein the worker will only sleep if its local\n> > balance has crossed some threshold. And, we can see that with that\n> > change the problem is solved up to quite an extent.\n> >\n> > patch 2: Shared costing patch: (delay condition ->\n> > VacuumSharedCostBalance > VacuumCostLimit && VacuumLocalBalance >\n> > VacuumCostLimit/number of workers)\n> > worker 0 delay=100.12 total I/O=17931 hit=17891 miss=0 dirty=2\n> > worker 1 delay=90.00 total I/O=17931 hit=17891 miss=0 dirty=2\n> > worker 2 delay=80.06 total I/O=17931 hit=17891 miss=0 dirty=2\n> > worker 3 delay=80.72 total I/O=16378 hit=4318 miss=0 dirty=603\n> >\n> > Observation2: This patch solves the problem discussed with patch1 but\n> > in some extreme cases there is a possibility that the shared limit can\n> > become twice as much as local limit and still no worker goes for the\n> > delay. For solving that there could be multiple ideas a) Set the max\n> > limit on shared balance e.g. 1.5 * VacuumCostLimit after that we will\n> > give the delay whoever tries to do the I/O irrespective of its local\n> > balance.\n> > b) Set a little lower value for the local threshold e.g 50% of the local limit\n> >\n> > Here I have changed the patch2 as per (b) If local balance reaches to\n> > 50% of the local limit and shared balance hit the vacuum cost limit\n> > then go for the delay.\n> >\n> > patch 3: Shared costing patch: (delay condition ->\n> > VacuumSharedCostBalance > VacuumCostLimit && VacuumLocalBalance > 0.5\n> > * VacuumCostLimit/number of workers)\n> > worker 0 delay=70.03 total I/O=17931 hit=17891 miss=0 dirty=2\n> > worker 1 delay=100.14 total I/O=17931 hit=17891 miss=0 dirty=2\n> > worker 2 delay=80.01 total I/O=17931 hit=17891 miss=0 dirty=2\n> > worker 3 delay=101.03 total I/O=16378 hit=4318 miss=0 dirty=603\n> >\n> > Observation3: I think patch3 doesn't completely solve the issue\n> > discussed in patch1 but its far better than patch1.\n> >\n>\n> Yeah, I think it is difficult to get the exact balance, 
but we can try\n> to be as close as possible. We can try to play with the threshold and\n> another possibility is to try to sleep in proportion to the amount of\n> I/O done by the worker.\nI have done another experiment where I have done another 2 changes on\ntop of patch3\na) Only reduce the local balance from the total shared balance\nwhenever it's applying delay\nb) Compute the delay based on the local balance.\n\npatch4:\nworker 0 delay=84.130000 total I/O=17931 hit=17891 miss=0 dirty=2\nworker 1 delay=89.230000 total I/O=17931 hit=17891 miss=0 dirty=2\nworker 2 delay=88.680000 total I/O=17931 hit=17891 miss=0 dirty=2\nworker 3 delay=80.790000 total I/O=16378 hit=4318 miss=0 dirty=603\n\nI think with this approach the delay is divided among the workers quite\nwell compared to the other approaches\n\n>\n> Thanks for doing these experiments, but I think it is better if you\n> can share the modified patches so that others can also reproduce what\n> you are seeing. There is no need to post the entire parallel vacuum\n> patch-set, but the costing related patch can be posted with a\n> reference to what all patches are required from parallel vacuum\n> thread. Another option is to move this discussion to the parallel\n> vacuum thread, but I think it is better to decide the costing model\n> here.\n\nI have attached the POC patches I have for testing. Steps for testing\n1. First, apply the parallel vacuum base patch and the shared costing patch[1].\n2. Apply 0001-vacuum_costing_test.patch attached in the mail\n3. Run the script shared in previous mail [2]. --> this will give the\nresults for patch 1 shared upthread[2]\n4. 
Apply patch shared_costing_plus_patch[2] or [3] or [4] to see the\nresults with different approaches explained in the mail.\n\n\n[1] https://www.postgresql.org/message-id/CAD21AoAqT17QwKJ_sWOqRxNvg66wMw1oZZzf9Rt-E-zD%2BXOh_Q%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAFiTN-tFLN%3Dvdu5Ra-23E9_7Z1JXkk5MkRY3Bkj2zAoWK7fULA%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 11 Nov 2019 09:43:40 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Mon, Nov 11, 2019 at 9:43 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Nov 8, 2019 at 11:49 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Nov 8, 2019 at 9:39 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > I have done some experiments on this line. I have first produced a\n> > > case where we can show the problem with the existing shared costing\n> > > patch (worker which is doing less I/O might pay the penalty on behalf\n> > > of the worker who is doing more I/O). I have also hacked the shared\n> > > costing patch of Swada-san so that worker only go for sleep if the\n> > > shared balance has crossed the limit and it's local balance has\n> > > crossed some threadshold[1].\n> > >\n> > > Test setup: I have created 4 indexes on the table. Out of which 3\n> > > indexes will have a lot of pages to process but need to dirty a few\n> > > pages whereas the 4th index will have to process a very less number of\n> > > pages but need to dirty all of them. I have attached the test script\n> > > along with the mail. I have shown what is the delay time each worker\n> > > have done. 
What is total I/O[1] each worker and what is the page hit,\n> > > page miss and page dirty count?\n> > > [1] total I/O = _nhit * VacuumCostPageHit + _nmiss *\n> > > VacuumCostPageMiss + _ndirty * VacuumCostPageDirty\n> > >\n> > > patch 1: Shared costing patch: (delay condition ->\n> > > VacuumSharedCostBalance > VacuumCostLimit)\n> > > worker 0 delay=80.00 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > worker 1 delay=40.00 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > worker 2 delay=110.00 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > worker 3 delay=120.98 total I/O=16378 hit=4318 miss=0 dirty=603\n> > >\n> > > Observation1: I think here it's clearly visible that worker 3 is\n> > > doing the least total I/O but delaying for maximum amount of time.\n> > > OTOH, worker 1 is delaying for very little time compared to how much\n> > > I/O it is doing. So for solving this problem, I have add a small\n> > > tweak to the patch. Wherein the worker will only sleep if its local\n> > > balance has crossed some threshold. And, we can see that with that\n> > > change the problem is solved up to quite an extent.\n> > >\n> > > patch 2: Shared costing patch: (delay condition ->\n> > > VacuumSharedCostBalance > VacuumCostLimit && VacuumLocalBalance >\n> > > VacuumCostLimit/number of workers)\n> > > worker 0 delay=100.12 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > worker 1 delay=90.00 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > worker 2 delay=80.06 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > worker 3 delay=80.72 total I/O=16378 hit=4318 miss=0 dirty=603\n> > >\n> > > Observation2: This patch solves the problem discussed with patch1 but\n> > > in some extreme cases there is a possibility that the shared limit can\n> > > become twice as much as local limit and still no worker goes for the\n> > > delay. For solving that there could be multiple ideas a) Set the max\n> > > limit on shared balance e.g. 
1.5 * VacuumCostLimit after that we will\n> > > give the delay whoever tries to do the I/O irrespective of its local\n> > > balance.\n> > > b) Set a little lower value for the local threshold e.g 50% of the local limit\n> > >\n> > > Here I have changed the patch2 as per (b) If local balance reaches to\n> > > 50% of the local limit and shared balance hit the vacuum cost limit\n> > > then go for the delay.\n> > >\n> > > patch 3: Shared costing patch: (delay condition ->\n> > > VacuumSharedCostBalance > VacuumCostLimit && VacuumLocalBalance > 0.5\n> > > * VacuumCostLimit/number of workers)\n> > > worker 0 delay=70.03 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > worker 1 delay=100.14 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > worker 2 delay=80.01 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > worker 3 delay=101.03 total I/O=16378 hit=4318 miss=0 dirty=603\n> > >\n> > > Observation3: I think patch3 doesn't completely solve the issue\n> > > discussed in patch1 but its far better than patch1.\n> > >\n> >\n> > Yeah, I think it is difficult to get the exact balance, but we can try\n> > to be as close as possible. 
We can try to play with the threshold and\n> > another possibility is to try to sleep in proportion to the amount of\n> > I/O done by the worker.\n> I have done another experiment where I have done another 2 changes on\n> top op patch3\n> a) Only reduce the local balance from the total shared balance\n> whenever it's applying delay\n> b) Compute the delay based on the local balance.\n>\n> patch4:\n> worker 0 delay=84.130000 total I/O=17931 hit=17891 miss=0 dirty=2\n> worker 1 delay=89.230000 total I/O=17931 hit=17891 miss=0 dirty=2\n> worker 2 delay=88.680000 total I/O=17931 hit=17891 miss=0 dirty=2\n> worker 3 delay=80.790000 total I/O=16378 hit=4318 miss=0 dirty=603\n>\n> I think with this approach the delay is divided among the worker quite\n> well compared to other approaches\n>\n> >\n> > Thanks for doing these experiments, but I think it is better if you\n> > can share the modified patches so that others can also reproduce what\n> > you are seeing. There is no need to post the entire parallel vacuum\n> > patch-set, but the costing related patch can be posted with a\n> > reference to what all patches are required from parallel vacuum\n> > thread. Another option is to move this discussion to the parallel\n> > vacuum thread, but I think it is better to decide the costing model\n> > here.\n>\n> I have attached the POC patches I have for testing. Step for testing\n> 1. First, apply the parallel vacuum base patch and the shared costing patch[1].\n> 2. Apply 0001-vacuum_costing_test.patch attached in the mail\n> 3. Run the script shared in previous mail [2]. --> this will give the\n> results for patch 1 shared upthread[2]\n> 4. 
Apply patch shared_costing_plus_patch[2] or [3] or [4] to see the\n> results with different approaches explained in the mail.\n>\n>\n> [1] https://www.postgresql.org/message-id/CAD21AoAqT17QwKJ_sWOqRxNvg66wMw1oZZzf9Rt-E-zD%2BXOh_Q%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/CAFiTN-tFLN%3Dvdu5Ra-23E9_7Z1JXkk5MkRY3Bkj2zAoWK7fULA%40mail.gmail.com\n>\nI have tested the same with another workload (test file attached).\nI can see the same behaviour with this workload as well: with\npatch 4 the distribution of the delay is better compared to the other\npatches, i.e. workers with more I/O have more delay and workers with equal I/O\nhave almost equal delay. The only difference is that the total delay with\npatch 4 is slightly less compared to the other patches.\n\npatch1:\n worker 0 delay=120.000000 total io=35828 hit=35788 miss=0 dirty=2\n worker 1 delay=170.000000 total io=35828 hit=35788 miss=0 dirty=2\n worker 2 delay=210.000000 total io=35828 hit=35788 miss=0 dirty=2\n worker 3 delay=263.400000 total io=44322 hit=8352 miss=1199 dirty=1199\n\npatch2:\n worker 0 delay=190.645000 total io=35828 hit=35788 miss=0 dirty=2\n worker 1 delay=160.090000 total io=35828 hit=35788 miss=0 dirty=2\n worker 2 delay=170.775000 total io=35828 hit=35788 miss=0 dirty=2\n worker 3 delay=243.180000 total io=44322 hit=8352 miss=1199 dirty=1199\n\npatch3:\n worker 0 delay=191.765000 total io=35828 hit=35788 miss=0 dirty=2\n worker 1 delay=180.935000 total io=35828 hit=35788 miss=0 dirty=2\n worker 2 delay=201.305000 total io=35828 hit=35788 miss=0 dirty=2\n worker 3 delay=192.770000 total io=44322 hit=8352 miss=1199 dirty=1199\n\npatch4:\n worker 0 delay=175.290000 total io=35828 hit=35788 miss=0 dirty=2\n worker 1 delay=174.135000 total io=35828 hit=35788 miss=0 dirty=2\n worker 2 delay=175.560000 total io=35828 hit=35788 miss=0 dirty=2\n worker 3 delay=212.100000 total io=44322 hit=8352 miss=1199 dirty=1199\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 11 Nov 2019 12:59:09 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Mon, Nov 11, 2019 at 12:59 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 11, 2019 at 9:43 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Fri, Nov 8, 2019 at 11:49 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Yeah, I think it is difficult to get the exact balance, but we can try\n> > > to be as close as possible. We can try to play with the threshold and\n> > > another possibility is to try to sleep in proportion to the amount of\n> > > I/O done by the worker.\n> > I have done another experiment where I have done another 2 changes on\n> > top op patch3\n> > a) Only reduce the local balance from the total shared balance\n> > whenever it's applying delay\n> > b) Compute the delay based on the local balance.\n> >\n> > patch4:\n> > worker 0 delay=84.130000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > worker 1 delay=89.230000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > worker 2 delay=88.680000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > worker 3 delay=80.790000 total I/O=16378 hit=4318 miss=0 dirty=603\n> >\n> > I think with this approach the delay is divided among the worker quite\n> > well compared to other approaches\n> >\n> > >\n..\n> I have tested the same with some other workload(test file attached).\n> I can see the same behaviour with this workload as well that with the\n> patch 4 the distribution of the delay is better compared to other\n> patches i.e. worker with more I/O have more delay and with equal IO\n> have alsomost equal delay. 
Only thing is that the total delay with\n> the patch 4 is slightly less compared to other pacthes.\n>\n\nI see one problem with the formula you have used in the patch, maybe\nthat is causing the value of total delay to go down.\n\n- if (new_balance >= VacuumCostLimit)\n+ VacuumCostBalanceLocal += VacuumCostBalance;\n+ if ((new_balance >= VacuumCostLimit) &&\n+ (VacuumCostBalanceLocal > VacuumCostLimit/(0.5 * nworker)))\n\nAs per discussion, the second part of the condition should be\n\"VacuumCostBalanceLocal > (0.5) * VacuumCostLimit/nworker\". I think\nyou can once change this and try again. Also, please try with the\ndifferent values of threshold (0.3, 0.5, 0.7, etc.).\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Nov 2019 16:23:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Mon, Nov 11, 2019 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 11, 2019 at 12:59 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Nov 11, 2019 at 9:43 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Fri, Nov 8, 2019 at 11:49 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Yeah, I think it is difficult to get the exact balance, but we can try\n> > > > to be as close as possible. We can try to play with the threshold and\n> > > > another possibility is to try to sleep in proportion to the amount of\n> > > > I/O done by the worker.\n> > > I have done another experiment where I have done another 2 changes on\n> > > top op patch3\n> > > a) Only reduce the local balance from the total shared balance\n> > > whenever it's applying delay\n> > > b) Compute the delay based on the local balance.\n> > >\n> > > patch4:\n> > > worker 0 delay=84.130000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > worker 1 delay=89.230000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > worker 2 delay=88.680000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > worker 3 delay=80.790000 total I/O=16378 hit=4318 miss=0 dirty=603\n> > >\n> > > I think with this approach the delay is divided among the worker quite\n> > > well compared to other approaches\n> > >\n> > > >\n> ..\n> > I have tested the same with some other workload(test file attached).\n> > I can see the same behaviour with this workload as well that with the\n> > patch 4 the distribution of the delay is better compared to other\n> > patches i.e. worker with more I/O have more delay and with equal IO\n> > have alsomost equal delay. 
Only thing is that the total delay with\n> > the patch 4 is slightly less compared to other pacthes.\n> >\n>\n> I see one problem with the formula you have used in the patch, maybe\n> that is causing the value of total delay to go down.\n>\n> - if (new_balance >= VacuumCostLimit)\n> + VacuumCostBalanceLocal += VacuumCostBalance;\n> + if ((new_balance >= VacuumCostLimit) &&\n> + (VacuumCostBalanceLocal > VacuumCostLimit/(0.5 * nworker)))\n>\n> As per discussion, the second part of the condition should be\n> \"VacuumCostBalanceLocal > (0.5) * VacuumCostLimit/nworker\".\nMy bad.\n> I think\n> you can once change this and try again. Also, please try with the\n> different values of threshold (0.3, 0.5, 0.7, etc.).\n>\nOkay, I will retest with both patch3 and patch4 for both scenarios.\nI will also try with different multipliers.\n\n> --\n> With Regards,\n> Amit Kapila.\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Nov 2019 17:14:03 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Mon, Nov 11, 2019 at 5:14 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 11, 2019 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > ..\n> > > I have tested the same with some other workload(test file attached).\n> > > I can see the same behaviour with this workload as well that with the\n> > > patch 4 the distribution of the delay is better compared to other\n> > > patches i.e. worker with more I/O have more delay and with equal IO\n> > > have alsomost equal delay. Only thing is that the total delay with\n> > > the patch 4 is slightly less compared to other pacthes.\n> > >\n> >\n> > I see one problem with the formula you have used in the patch, maybe\n> > that is causing the value of total delay to go down.\n> >\n> > - if (new_balance >= VacuumCostLimit)\n> > + VacuumCostBalanceLocal += VacuumCostBalance;\n> > + if ((new_balance >= VacuumCostLimit) &&\n> > + (VacuumCostBalanceLocal > VacuumCostLimit/(0.5 * nworker)))\n> >\n> > As per discussion, the second part of the condition should be\n> > \"VacuumCostBalanceLocal > (0.5) * VacuumCostLimit/nworker\".\n> My Bad\n> I think\n> > you can once change this and try again. Also, please try with the\n> > different values of threshold (0.3, 0.5, 0.7, etc.).\n> >\n> Okay, I will retest with both patch3 and path4 for both the scenarios.\n> I will also try with different multipliers.\n>\n\nOne more thing, I think we should also test these cases with a varying\nnumber of indexes (say 2,6,8,etc.) and then probably, we should test\nby a varying number of workers where the number of workers are lesser\nthan indexes. You can do these after finishing your previous\nexperiments.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Nov 2019 17:56:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Mon, Nov 11, 2019 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 11, 2019 at 12:59 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Nov 11, 2019 at 9:43 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Fri, Nov 8, 2019 at 11:49 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Yeah, I think it is difficult to get the exact balance, but we can try\n> > > > to be as close as possible. We can try to play with the threshold and\n> > > > another possibility is to try to sleep in proportion to the amount of\n> > > > I/O done by the worker.\n> > > I have done another experiment where I have done another 2 changes on\n> > > top op patch3\n> > > a) Only reduce the local balance from the total shared balance\n> > > whenever it's applying delay\n> > > b) Compute the delay based on the local balance.\n> > >\n> > > patch4:\n> > > worker 0 delay=84.130000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > worker 1 delay=89.230000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > worker 2 delay=88.680000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > worker 3 delay=80.790000 total I/O=16378 hit=4318 miss=0 dirty=603\n> > >\n> > > I think with this approach the delay is divided among the worker quite\n> > > well compared to other approaches\n> > >\n> > > >\n> ..\n> > I have tested the same with some other workload(test file attached).\n> > I can see the same behaviour with this workload as well that with the\n> > patch 4 the distribution of the delay is better compared to other\n> > patches i.e. worker with more I/O have more delay and with equal IO\n> > have alsomost equal delay. 
Only thing is that the total delay with\n> > > the patch 4 is slightly less compared to other pacthes.\n> > >\n> >\n> > I see one problem with the formula you have used in the patch, maybe\n> > that is causing the value of total delay to go down.\n> >\n> > - if (new_balance >= VacuumCostLimit)\n> > + VacuumCostBalanceLocal += VacuumCostBalance;\n> > + if ((new_balance >= VacuumCostLimit) &&\n> > + (VacuumCostBalanceLocal > VacuumCostLimit/(0.5 * nworker)))\n> >\n> > As per discussion, the second part of the condition should be\n> > \"VacuumCostBalanceLocal > (0.5) * VacuumCostLimit/nworker\". I think\n> > you can once change this and try again. Also, please try with the\n> > different values of threshold (0.3, 0.5, 0.7, etc.).\n> >\nI have modified patch4 and ran it with different values. But I\ndon't see much difference in the values with patch4. In fact, I\nremoved the condition for the local balancing check completely and still\nthe delays are the same. I think this is because with patch4 workers\nare only reducing their own balance and also delaying as much as their\nlocal balance. 
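To make the patch-4 accounting concrete, the sleep step can be sketched roughly like this (names and the msec formula are illustrative, not the actual patch code):

```c
/* Sketch of the patch-4 style sleep step described above: a worker that
 * decides to sleep (a) removes only its own local balance from the shared
 * balance and (b) derives the sleep time from that local balance, so each
 * worker pays in proportion to the I/O it did itself.  Illustrative only. */
static double
take_delay(int *shared_balance, int *local_balance,
           int cost_limit, int cost_delay_ms)
{
    double msec = (double) cost_delay_ms * (*local_balance) / cost_limit;

    *shared_balance -= *local_balance;   /* give back only our own share */
    *local_balance = 0;                  /* our balance is now spent */

    if (msec > cost_delay_ms * 4)        /* cap, as vacuum_delay_point() does */
        msec = cost_delay_ms * 4;
    return msec;
}
```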
So maybe the second condition will not have much\nimpact.\n\nPatch4 (test.sh)\n0\n worker 0 delay=82.380000 total io=17931 hit=17891 miss=0 dirty=2\n worker 1 delay=89.370000 total io=17931 hit=17891 miss=0 dirty=2\n worker 2 delay=89.645000 total io=17931 hit=17891 miss=0 dirty=2\n worker 3 delay=79.150000 total io=16378 hit=4318 miss=0 dirty=603\n\n0.1\n worker 0 delay=89.295000 total io=17931 hit=17891 miss=0 dirty=2\n worker 1 delay=89.230000 total io=17931 hit=17891 miss=0 dirty=2\n worker 2 delay=89.675000 total io=17931 hit=17891 miss=0 dirty=2\n worker 3 delay=81.840000 total io=16378 hit=4318 miss=0 dirty=603\n\n0.3\n worker 0 delay=85.915000 total io=17931 hit=17891 miss=0 dirty=2\n worker 1 delay=85.180000 total io=17931 hit=17891 miss=0 dirty=2\n worker 2 delay=88.760000 total io=17931 hit=17891 miss=0 dirty=2\n worker 3 delay=81.975000 total io=16378 hit=4318 miss=0 dirty=603\n\n0.5\n worker 0 delay=81.635000 total io=17931 hit=17891 miss=0 dirty=2\n worker 1 delay=87.490000 total io=17931 hit=17891 miss=0 dirty=2\n worker 2 delay=89.425000 total io=17931 hit=17891 miss=0 dirty=2\n worker 3 delay=82.050000 total io=16378 hit=4318 miss=0 dirty=603\n\n0.7\n worker 0 delay=85.185000 total io=17931 hit=17891 miss=0 dirty=2\n worker 1 delay=88.835000 total io=17931 hit=17891 miss=0 dirty=2\n worker 2 delay=86.005000 total io=17931 hit=17891 miss=0 dirty=2\n worker 3 delay=76.160000 total io=16378 hit=4318 miss=0 dirty=603\n\nPatch4 (test1.sh)\n0\n worker 0 delay=179.005000 total io=35828 hit=35788 miss=0 dirty=2\n worker 1 delay=179.010000 total io=35828 hit=35788 miss=0 dirty=2\n worker 2 delay=179.010000 total io=35828 hit=35788 miss=0 dirty=2\n worker 3 delay=221.900000 total io=44322 hit=8352 miss=1199 dirty=1199\n\n0.1\n worker 0 delay=177.840000 total io=35828 hit=35788 miss=0 dirty=2\n worker 1 delay=179.465000 total io=35828 hit=35788 miss=0 dirty=2\n worker 2 delay=179.255000 total io=35828 hit=35788 miss=0 dirty=2\n worker 3 delay=222.695000 total 
io=44322 hit=8352 miss=1199 dirty=1199\n\n0.3\n worker 0 delay=178.295000 total io=35828 hit=35788 miss=0 dirty=2\n worker 1 delay=178.720000 total io=35828 hit=35788 miss=0 dirty=2\n worker 2 delay=178.270000 total io=35828 hit=35788 miss=0 dirty=2\n worker 3 delay=220.420000 total io=44322 hit=8352 miss=1199 dirty=1199\n\n0.5\n worker 0 delay=178.415000 total io=35828 hit=35788 miss=0 dirty=2\n worker 1 delay=178.385000 total io=35828 hit=35788 miss=0 dirty=2\n worker 2 delay=173.805000 total io=35828 hit=35788 miss=0 dirty=2\n worker 3 delay=221.605000 total io=44322 hit=8352 miss=1199 dirty=1199\n\n0.7\n worker 0 delay=175.330000 total io=35828 hit=35788 miss=0 dirty=2\n worker 1 delay=177.890000 total io=35828 hit=35788 miss=0 dirty=2\n worker 2 delay=167.540000 total io=35828 hit=35788 miss=0 dirty=2\n worker 3 delay=216.725000 total io=44322 hit=8352 miss=1199 dirty=1199\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 12 Nov 2019 10:47:06 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 10:47 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 11, 2019 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Nov 11, 2019 at 12:59 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 11, 2019 at 9:43 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Fri, Nov 8, 2019 at 11:49 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > >\n> > > > > Yeah, I think it is difficult to get the exact balance, but we can try\n> > > > > to be as close as possible. We can try to play with the threshold and\n> > > > > another possibility is to try to sleep in proportion to the amount of\n> > > > > I/O done by the worker.\n> > > > I have done another experiment where I have done another 2 changes on\n> > > > top op patch3\n> > > > a) Only reduce the local balance from the total shared balance\n> > > > whenever it's applying delay\n> > > > b) Compute the delay based on the local balance.\n> > > >\n> > > > patch4:\n> > > > worker 0 delay=84.130000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > > worker 1 delay=89.230000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > > worker 2 delay=88.680000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > > worker 3 delay=80.790000 total I/O=16378 hit=4318 miss=0 dirty=603\n> > > >\n> > > > I think with this approach the delay is divided among the worker quite\n> > > > well compared to other approaches\n> > > >\n> > > > >\n> > ..\n> > > I have tested the same with some other workload(test file attached).\n> > > I can see the same behaviour with this workload as well that with the\n> > > patch 4 the distribution of the delay is better compared to other\n> > > patches i.e. worker with more I/O have more delay and with equal IO\n> > > have alsomost equal delay. 
Only thing is that the total delay with\n> > > the patch 4 is slightly less compared to other pacthes.\n> > >\n> >\n> > I see one problem with the formula you have used in the patch, maybe\n> > that is causing the value of total delay to go down.\n> >\n> > - if (new_balance >= VacuumCostLimit)\n> > + VacuumCostBalanceLocal += VacuumCostBalance;\n> > + if ((new_balance >= VacuumCostLimit) &&\n> > + (VacuumCostBalanceLocal > VacuumCostLimit/(0.5 * nworker)))\n> >\n> > As per discussion, the second part of the condition should be\n> > \"VacuumCostBalanceLocal > (0.5) * VacuumCostLimit/nworker\". I think\n> > you can once change this and try again. Also, please try with the\n> > different values of threshold (0.3, 0.5, 0.7, etc.).\n> >\n> I have modified the patch4 and ran with different values. But, I\n> don't see much difference in the values with the patch4. Infact I\n> removed the condition for the local balancing check completely still\n> the delays are the same, I think this is because with patch4 worker\n> are only reducing their own balance and also delaying as much as their\n> local balance. 
So maybe the second condition will not have much\n> impact.\n>\n> Patch4 (test.sh)\n> 0\n> worker 0 delay=82.380000 total io=17931 hit=17891 miss=0 dirty=2\n> worker 1 delay=89.370000 total io=17931 hit=17891 miss=0 dirty=2\n> worker 2 delay=89.645000 total io=17931 hit=17891 miss=0 dirty=2\n> worker 3 delay=79.150000 total io=16378 hit=4318 miss=0 dirty=603\n>\n> 0.1\n> worker 0 delay=89.295000 total io=17931 hit=17891 miss=0 dirty=2\n> worker 1 delay=89.230000 total io=17931 hit=17891 miss=0 dirty=2\n> worker 2 delay=89.675000 total io=17931 hit=17891 miss=0 dirty=2\n> worker 3 delay=81.840000 total io=16378 hit=4318 miss=0 dirty=603\n>\n> 0.3\n> worker 0 delay=85.915000 total io=17931 hit=17891 miss=0 dirty=2\n> worker 1 delay=85.180000 total io=17931 hit=17891 miss=0 dirty=2\n> worker 2 delay=88.760000 total io=17931 hit=17891 miss=0 dirty=2\n> worker 3 delay=81.975000 total io=16378 hit=4318 miss=0 dirty=603\n>\n> 0.5\n> worker 0 delay=81.635000 total io=17931 hit=17891 miss=0 dirty=2\n> worker 1 delay=87.490000 total io=17931 hit=17891 miss=0 dirty=2\n> worker 2 delay=89.425000 total io=17931 hit=17891 miss=0 dirty=2\n> worker 3 delay=82.050000 total io=16378 hit=4318 miss=0 dirty=603\n>\n> 0.7\n> worker 0 delay=85.185000 total io=17931 hit=17891 miss=0 dirty=2\n> worker 1 delay=88.835000 total io=17931 hit=17891 miss=0 dirty=2\n> worker 2 delay=86.005000 total io=17931 hit=17891 miss=0 dirty=2\n> worker 3 delay=76.160000 total io=16378 hit=4318 miss=0 dirty=603\n>\n> Patch4 (test1.sh)\n> 0\n> worker 0 delay=179.005000 total io=35828 hit=35788 miss=0 dirty=2\n> worker 1 delay=179.010000 total io=35828 hit=35788 miss=0 dirty=2\n> worker 2 delay=179.010000 total io=35828 hit=35788 miss=0 dirty=2\n> worker 3 delay=221.900000 total io=44322 hit=8352 miss=1199 dirty=1199\n>\n> 0.1\n> worker 0 delay=177.840000 total io=35828 hit=35788 miss=0 dirty=2\n> worker 1 delay=179.465000 total io=35828 hit=35788 miss=0 dirty=2\n> worker 2 delay=179.255000 total io=35828 
hit=35788 miss=0 dirty=2\n> worker 3 delay=222.695000 total io=44322 hit=8352 miss=1199 dirty=1199\n>\n> 0.3\n> worker 0 delay=178.295000 total io=35828 hit=35788 miss=0 dirty=2\n> worker 1 delay=178.720000 total io=35828 hit=35788 miss=0 dirty=2\n> worker 2 delay=178.270000 total io=35828 hit=35788 miss=0 dirty=2\n> worker 3 delay=220.420000 total io=44322 hit=8352 miss=1199 dirty=1199\n>\n> 0.5\n> worker 0 delay=178.415000 total io=35828 hit=35788 miss=0 dirty=2\n> worker 1 delay=178.385000 total io=35828 hit=35788 miss=0 dirty=2\n> worker 2 delay=173.805000 total io=35828 hit=35788 miss=0 dirty=2\n> worker 3 delay=221.605000 total io=44322 hit=8352 miss=1199 dirty=1199\n>\n> 0.7\n> worker 0 delay=175.330000 total io=35828 hit=35788 miss=0 dirty=2\n> worker 1 delay=177.890000 total io=35828 hit=35788 miss=0 dirty=2\n> worker 2 delay=167.540000 total io=35828 hit=35788 miss=0 dirty=2\n> worker 3 delay=216.725000 total io=44322 hit=8352 miss=1199 dirty=1199\n>\nI have revised patch4 so that it doesn't depend upon a fixed\nnumber of workers; instead, the worker count is updated dynamically.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 12 Nov 2019 15:03:26 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 3:03 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Nov 12, 2019 at 10:47 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Nov 11, 2019 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 11, 2019 at 12:59 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 11, 2019 at 9:43 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > On Fri, Nov 8, 2019 at 11:49 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > >\n> > > > > > Yeah, I think it is difficult to get the exact balance, but we can try\n> > > > > > to be as close as possible. We can try to play with the threshold and\n> > > > > > another possibility is to try to sleep in proportion to the amount of\n> > > > > > I/O done by the worker.\n> > > > > I have done another experiment where I have done another 2 changes on\n> > > > > top op patch3\n> > > > > a) Only reduce the local balance from the total shared balance\n> > > > > whenever it's applying delay\n> > > > > b) Compute the delay based on the local balance.\n> > > > >\n> > > > > patch4:\n> > > > > worker 0 delay=84.130000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > > > worker 1 delay=89.230000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > > > worker 2 delay=88.680000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > > > worker 3 delay=80.790000 total I/O=16378 hit=4318 miss=0 dirty=603\n> > > > >\n> > > > > I think with this approach the delay is divided among the worker quite\n> > > > > well compared to other approaches\n> > > > >\n> > > > > >\n> > > ..\n> > > > I have tested the same with some other workload(test file attached).\n> > > > I can see the same behaviour with this workload as well that with the\n> > > > patch 4 the distribution of the delay is better compared to other\n> > > > patches i.e. worker with more I/O have more delay and with equal IO\n> > > > have alsomost equal delay. 
Only thing is that the total delay with\n> > > > the patch 4 is slightly less compared to other pacthes.\n> > > >\n> > >\n> > > I see one problem with the formula you have used in the patch, maybe\n> > > that is causing the value of total delay to go down.\n> > >\n> > > - if (new_balance >= VacuumCostLimit)\n> > > + VacuumCostBalanceLocal += VacuumCostBalance;\n> > > + if ((new_balance >= VacuumCostLimit) &&\n> > > + (VacuumCostBalanceLocal > VacuumCostLimit/(0.5 * nworker)))\n> > >\n> > > As per discussion, the second part of the condition should be\n> > > \"VacuumCostBalanceLocal > (0.5) * VacuumCostLimit/nworker\". I think\n> > > you can once change this and try again. Also, please try with the\n> > > different values of threshold (0.3, 0.5, 0.7, etc.).\n> > >\n> > I have modified the patch4 and ran with different values. But, I\n> > don't see much difference in the values with the patch4. Infact I\n> > removed the condition for the local balancing check completely still\n> > the delays are the same, I think this is because with patch4 worker\n> > are only reducing their own balance and also delaying as much as their\n> > local balance. So maybe the second condition will not have much\n> > impact.\n> >\n\nYeah, but I suspect the condition you mentioned (apply the delay only\nwhen the local balance exceeds a certain threshold) can have an impact\nin some other scenarios, so it is better to retain it. I feel the\noverall results look sane and the approach seems reasonable to me.\n\n> >\n> I have revised the patch4 so that it doesn't depent upon the fix\n> number of workers, instead I have dynamically updated the worker\n> count.\n>\n\nThanks. 
"msg_date": "Tue, 12 Nov 2019 15:37:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Tue, 12 Nov 2019 at 19:08, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 12, 2019 at 3:03 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Nov 12, 2019 at 10:47 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 11, 2019 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 11, 2019 at 12:59 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Nov 11, 2019 at 9:43 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > >\n> > > > > > On Fri, Nov 8, 2019 at 11:49 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > >\n> > > > > > >\n> > > > > > > Yeah, I think it is difficult to get the exact balance, but we can try\n> > > > > > > to be as close as possible. We can try to play with the threshold and\n> > > > > > > another possibility is to try to sleep in proportion to the amount of\n> > > > > > > I/O done by the worker.\n> > > > > > I have done another experiment where I have done another 2 changes on\n> > > > > > top op patch3\n> > > > > > a) Only reduce the local balance from the total shared balance\n> > > > > > whenever it's applying delay\n> > > > > > b) Compute the delay based on the local balance.\n> > > > > >\n> > > > > > patch4:\n> > > > > > worker 0 delay=84.130000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > > > > worker 1 delay=89.230000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > > > > worker 2 delay=88.680000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > > > > worker 3 delay=80.790000 total I/O=16378 hit=4318 miss=0 dirty=603\n> > > > > >\n> > > > > > I think with this approach the delay is divided among the worker quite\n> > > > > > well compared to other approaches\n> > > > > >\n> > > > > > >\n> > > > ..\n> > > > > I have tested the same with some other workload(test file attached).\n> > > > > I can see the same behaviour with this workload as well that with the\n> > > > > patch 4 the distribution of the 
delay is better compared to other\n> > > > > patches i.e. worker with more I/O have more delay and with equal IO\n> > > > > have alsomost equal delay. Only thing is that the total delay with\n> > > > > the patch 4 is slightly less compared to other pacthes.\n> > > > >\n> > > >\n> > > > I see one problem with the formula you have used in the patch, maybe\n> > > > that is causing the value of total delay to go down.\n> > > >\n> > > > - if (new_balance >= VacuumCostLimit)\n> > > > + VacuumCostBalanceLocal += VacuumCostBalance;\n> > > > + if ((new_balance >= VacuumCostLimit) &&\n> > > > + (VacuumCostBalanceLocal > VacuumCostLimit/(0.5 * nworker)))\n> > > >\n> > > > As per discussion, the second part of the condition should be\n> > > > \"VacuumCostBalanceLocal > (0.5) * VacuumCostLimit/nworker\". I think\n> > > > you can once change this and try again. Also, please try with the\n> > > > different values of threshold (0.3, 0.5, 0.7, etc.).\n> > > >\n> > > I have modified the patch4 and ran with different values. But, I\n> > > don't see much difference in the values with the patch4. Infact I\n> > > removed the condition for the local balancing check completely still\n> > > the delays are the same, I think this is because with patch4 worker\n> > > are only reducing their own balance and also delaying as much as their\n> > > local balance. So maybe the second condition will not have much\n> > > impact.\n> > >\n>\n> Yeah, but I suspect the condition (when the local balance exceeds a\n> certain threshold, then only try to perform delay) you mentioned can\n> have an impact in some other scenarios. So, it is better to retain\n> the same. I feel the overall results look sane and the approach seems\n> reasonable to me.\n>\n> > >\n> > I have revised the patch4 so that it doesn't depent upon the fix\n> > number of workers, instead I have dynamically updated the worker\n> > count.\n> >\n>\n> Thanks. 
Sawada-San, by any chance, can you try some of the tests done\n> by Dilip or some similar tests just to rule out any sort of\n> machine-specific dependency?\n\nSure. I'll try it tomorrow.\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 12 Nov 2019 20:22:58 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Tue, 12 Nov 2019 at 20:22, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 12 Nov 2019 at 19:08, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Nov 12, 2019 at 3:03 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 12, 2019 at 10:47 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 11, 2019 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Nov 11, 2019 at 12:59 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > >\n> > > > > > On Mon, Nov 11, 2019 at 9:43 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Fri, Nov 8, 2019 at 11:49 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > > >\n> > > > > > > >\n> > > > > > > > Yeah, I think it is difficult to get the exact balance, but we can try\n> > > > > > > > to be as close as possible. We can try to play with the threshold and\n> > > > > > > > another possibility is to try to sleep in proportion to the amount of\n> > > > > > > > I/O done by the worker.\n> > > > > > > I have done another experiment where I have done another 2 changes on\n> > > > > > > top op patch3\n> > > > > > > a) Only reduce the local balance from the total shared balance\n> > > > > > > whenever it's applying delay\n> > > > > > > b) Compute the delay based on the local balance.\n> > > > > > >\n> > > > > > > patch4:\n> > > > > > > worker 0 delay=84.130000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > > > > > worker 1 delay=89.230000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > > > > > worker 2 delay=88.680000 total I/O=17931 hit=17891 miss=0 dirty=2\n> > > > > > > worker 3 delay=80.790000 total I/O=16378 hit=4318 miss=0 dirty=603\n> > > > > > >\n> > > > > > > I think with this approach the delay is divided among the worker quite\n> > > > > > > well compared to other approaches\n> > > > > > >\n> > > > > > > >\n> > > > > ..\n> > > > > > I have tested the same with 
some other workload(test file attached).\n> > > > > > I can see the same behaviour with this workload as well that with the\n> > > > > > patch 4 the distribution of the delay is better compared to other\n> > > > > > patches i.e. worker with more I/O have more delay and with equal IO\n> > > > > > have alsomost equal delay. Only thing is that the total delay with\n> > > > > > the patch 4 is slightly less compared to other pacthes.\n> > > > > >\n> > > > >\n> > > > > I see one problem with the formula you have used in the patch, maybe\n> > > > > that is causing the value of total delay to go down.\n> > > > >\n> > > > > - if (new_balance >= VacuumCostLimit)\n> > > > > + VacuumCostBalanceLocal += VacuumCostBalance;\n> > > > > + if ((new_balance >= VacuumCostLimit) &&\n> > > > > + (VacuumCostBalanceLocal > VacuumCostLimit/(0.5 * nworker)))\n> > > > >\n> > > > > As per discussion, the second part of the condition should be\n> > > > > \"VacuumCostBalanceLocal > (0.5) * VacuumCostLimit/nworker\". I think\n> > > > > you can once change this and try again. Also, please try with the\n> > > > > different values of threshold (0.3, 0.5, 0.7, etc.).\n> > > > >\n> > > > I have modified the patch4 and ran with different values. But, I\n> > > > don't see much difference in the values with the patch4. Infact I\n> > > > removed the condition for the local balancing check completely still\n> > > > the delays are the same, I think this is because with patch4 worker\n> > > > are only reducing their own balance and also delaying as much as their\n> > > > local balance. So maybe the second condition will not have much\n> > > > impact.\n> > > >\n> >\n> > Yeah, but I suspect the condition (when the local balance exceeds a\n> > certain threshold, then only try to perform delay) you mentioned can\n> > have an impact in some other scenarios. So, it is better to retain\n> > the same. 
I feel the overall results look sane and the approach seems\n> > reasonable to me.\n> >\n> > > >\n> > > I have revised the patch4 so that it doesn't depent upon the fix\n> > > number of workers, instead I have dynamically updated the worker\n> > > count.\n> > >\n> >\n> > Thanks. Sawada-San, by any chance, can you try some of the tests done\n> > by Dilip or some similar tests just to rule out any sort of\n> > machine-specific dependency?\n>\n> Sure. I'll try it tomorrow.\n\nI've done some tests while varying the shared buffer size, the delay,\nand the number of workers. The overall results have a similar tendency\nto the results shared by Dilip and look reasonable to me.\n\n* test.sh\n\nshared_buffers = '4GB';\nmax_parallel_maintenance_workers = 6;\nvacuum_cost_delay = 1;\nworker 0 delay=89.315000 total io=17931 hit=17891 miss=0 dirty=2\nworker 1 delay=88.860000 total io=17931 hit=17891 miss=0 dirty=2\nworker 2 delay=89.290000 total io=17931 hit=17891 miss=0 dirty=2\nworker 3 delay=81.805000 total io=16378 hit=4318 miss=0 dirty=603\n\nshared_buffers = '1GB';\nmax_parallel_maintenance_workers = 6;\nvacuum_cost_delay = 1;\nworker 0 delay=89.210000 total io=17931 hit=17891 miss=0 dirty=2\nworker 1 delay=89.325000 total io=17931 hit=17891 miss=0 dirty=2\nworker 2 delay=88.870000 total io=17931 hit=17891 miss=0 dirty=2\nworker 3 delay=81.735000 total io=16378 hit=4318 miss=0 dirty=603\n\nshared_buffers = '512MB';\nmax_parallel_maintenance_workers = 6;\nvacuum_cost_delay = 1;\nworker 0 delay=88.480000 total io=17931 hit=17891 miss=0 dirty=2\nworker 1 delay=88.635000 total io=17931 hit=17891 miss=0 dirty=2\nworker 2 delay=88.600000 total io=17931 hit=17891 miss=0 dirty=2\nworker 3 delay=81.660000 total io=16378 hit=4318 miss=0 dirty=603\n\nshared_buffers = '512MB';\nmax_parallel_maintenance_workers = 6;\nvacuum_cost_delay = 5;\nworker 0 delay=447.725000 total io=17931 hit=17891 miss=0 dirty=2\nworker 1 delay=445.850000 total io=17931 hit=17891 miss=0 dirty=2\nworker 2 
delay=445.125000 total io=17931 hit=17891 miss=0 dirty=2\nworker 3 delay=409.025000 total io=16378 hit=4318 miss=0 dirty=603\n\nshared_buffers = '512MB';\nmax_parallel_maintenance_workers = 2;\nvacuum_cost_delay = 5;\nworker 0 delay=854.750000 total io=34309 hit=22209 miss=0 dirty=605\nworker 1 delay=446.500000 total io=17931 hit=17891 miss=0 dirty=2\nworker 2 delay=444.175000 total io=17931 hit=17891 miss=0 dirty=2\n\n---\n* test1.sh\n\nshared_buffers = '4GB';\nmax_parallel_maintenance_workers = 6;\nvacuum_cost_delay = 1;\nworker 0 delay=178.205000 total io=35828 hit=35788 miss=0 dirty=2\nworker 1 delay=178.550000 total io=35828 hit=35788 miss=0 dirty=2\nworker 2 delay=178.660000 total io=35828 hit=35788 miss=0 dirty=2\nworker 3 delay=221.280000 total io=44322 hit=8352 miss=1199 dirty=1199\n\nshared_buffers = '1GB';\nmax_parallel_maintenance_workers = 6;\nvacuum_cost_delay = 1;\nworker 0 delay=178.035000 total io=35828 hit=35788 miss=0 dirty=2\nworker 1 delay=178.535000 total io=35828 hit=35788 miss=0 dirty=2\nworker 2 delay=178.585000 total io=35828 hit=35788 miss=0 dirty=2\nworker 3 delay=221.465000 total io=44322 hit=8352 miss=1199 dirty=1199\n\nshared_buffers = '512MB';\nmax_parallel_maintenance_workers = 6;\nvacuum_cost_delay = 1;\nworker 0 delay=1795.900000 total io=357911 hit=1 miss=35787 dirty=2\nworker 1 delay=1790.700000 total io=357911 hit=1 miss=35787 dirty=2\nworker 2 delay=179.000000 total io=35828 hit=35788 miss=0 dirty=2\nworker 3 delay=221.355000 total io=44322 hit=8352 miss=1199 dirty=1199\n\nshared_buffers = '512MB';\nmax_parallel_maintenance_workers = 6;\nvacuum_cost_delay = 5;\nworker 0 delay=8958.500000 total io=357911 hit=1 miss=35787 dirty=2\nworker 1 delay=8950.000000 total io=357911 hit=1 miss=35787 dirty=2\nworker 2 delay=894.150000 total io=35828 hit=35788 miss=0 dirty=2\nworker 3 delay=1106.400000 total io=44322 hit=8352 miss=1199 dirty=1199\n\nshared_buffers = '512MB';\nmax_parallel_maintenance_workers = 2;\nvacuum_cost_delay = 
5;\nworker 0 delay=8956.500000 total io=357911 hit=1 miss=35787 dirty=2\nworker 1 delay=8955.050000 total io=357893 hit=3 miss=35785 dirty=2\nworker 2 delay=2002.825000 total io=80150 hit=44140 miss=1199 dirty=1201\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 13 Nov 2019 13:32:13 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Mon, 11 Nov 2019 at 17:56, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 11, 2019 at 5:14 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Nov 11, 2019 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n> > >\n> > > ..\n> > > > I have tested the same with some other workload(test file attached).\n> > > > I can see the same behaviour with this workload as well that with\nthe\n> > > > patch 4 the distribution of the delay is better compared to other\n> > > > patches i.e. worker with more I/O have more delay and with equal IO\n> > > > have alsomost equal delay. Only thing is that the total delay with\n> > > > the patch 4 is slightly less compared to other pacthes.\n> > > >\n> > >\n> > > I see one problem with the formula you have used in the patch, maybe\n> > > that is causing the value of total delay to go down.\n> > >\n> > > - if (new_balance >= VacuumCostLimit)\n> > > + VacuumCostBalanceLocal += VacuumCostBalance;\n> > > + if ((new_balance >= VacuumCostLimit) &&\n> > > + (VacuumCostBalanceLocal > VacuumCostLimit/(0.5 * nworker)))\n> > >\n> > > As per discussion, the second part of the condition should be\n> > > \"VacuumCostBalanceLocal > (0.5) * VacuumCostLimit/nworker\".\n> > My Bad\n> > I think\n> > > you can once change this and try again. Also, please try with the\n> > > different values of threshold (0.3, 0.5, 0.7, etc.).\n> > >\n> > Okay, I will retest with both patch3 and path4 for both the scenarios.\n> > I will also try with different multipliers.\n> >\n>\n> One more thing, I think we should also test these cases with a varying\n> number of indexes (say 2,6,8,etc.) and then probably, we should test\n> by a varying number of workers where the number of workers are lesser\n> than indexes. You can do these after finishing your previous\n> experiments.\n\nOn the top of parallel vacuum patch, I applied Dilip's\npatch(0001-vacuum_costing_test.patch). 
I have tested by varying the\nnumber of indexes and the number of workers. I compared the shared\ncosting base patch (0001-vacuum_costing_test.patch) vs the latest shared\ncosting patch (shared_costing_plus_patch4_v1.patch).\nWith the shared costing base patch, I can see that the delay is not in\nsync with the I/O, which is resolved by applying\nshared_costing_plus_patch4_v1.patch. I have also observed that the total\ndelay is slightly reduced with shared_costing_plus_patch4_v1.patch.\n\nBelow is the full testing summary:\n*Test setup:*\nstep1) Apply the parallel vacuum patch\nstep2) Apply the 0001-vacuum_costing_test.patch patch (on top of this\npatch, the delay is not in sync with the I/O)\nstep3) Apply shared_costing_plus_patch4_v1.patch (the delay is in sync\nwith the I/O)\n\n*Configuration settings:*\nautovacuum = off\nmax_parallel_workers = 30\nshared_buffers = 2GB\nmax_parallel_maintenance_workers = 20\nvacuum_cost_limit = 2000\nvacuum_cost_delay = 10\n\n*Test 1: Vary indexes (2,4,6,8) with parallel workers fixed at 4:*\n\nCase 1) When indexes are 2:\n*Without shared_costing_plus_patch4_v1.patch:*\nWARNING: worker 0 delay=120.000000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 1 delay=60.000000 total io=17931 hit=17891 miss=0 dirty=2\n\n*With shared_costing_plus_patch4_v1.patch:*\nWARNING: worker 0 delay=87.780000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 1 delay=87.995000 total io=17931 hit=17891 miss=0 dirty=2\n\nCase 2) When indexes are 4:\n*Without shared_costing_plus_patch4_v1.patch:*\nWARNING: worker 0 delay=120.000000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 1 delay=80.000000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 2 delay=60.000000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 3 delay=100.000000 total io=17931 hit=17891 miss=0 dirty=2\n\n*With shared_costing_plus_patch4_v1.patch:*\nWARNING: worker 0 delay=87.430000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 1 delay=87.175000 total io=17931 
hit=17891 miss=0 dirty=2\nWARNING: worker 2 delay=86.340000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 3 delay=88.020000 total io=17931 hit=17891 miss=0 dirty=2\n\nCase 3) When indexes are 6:\n*Without shared_costing_plus_patch4_v1.patch:*\nWARNING: worker 0 delay=110.000000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 1 delay=100.000000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 2 delay=160.000000 total io=35862 hit=35782 miss=0 dirty=4\nWARNING: worker 3 delay=90.000000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 4 delay=80.000000 total io=17931 hit=17891 miss=0 dirty=2\n\n*With shared_costing_plus_patch4_v1.patch*:\nWARNING: worker 0 delay=173.195000 total io=35862 hit=35782 miss=0 dirty=4\nWARNING: worker 1 delay=88.715000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 2 delay=87.710000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 3 delay=86.460000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 4 delay=89.435000 total io=17931 hit=17891 miss=0 dirty=2\n\nCase 4) When indexes are 8:\n*Without shared_costing_plus_patch4_v1.patch:*\nWARNING: worker 0 delay=170.000000 total io=35862 hit=35782 miss=0 dirty=4\nWARNING: worker 1 delay=120.000000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 2 delay=130.000000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 3 delay=190.000000 total io=35862 hit=35782 miss=0 dirty=4\nWARNING: worker 4 delay=110.000000 total io=35862 hit=35782 miss=0 dirty=4\n\n*With shared_costing_plus_patch4_v1.patch*:\nWARNING: worker 0 delay=174.700000 total io=35862 hit=35782 miss=0 dirty=4\nWARNING: worker 1 delay=177.880000 total io=35862 hit=35782 miss=0 dirty=4\nWARNING: worker 2 delay=89.460000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 3 delay=177.320000 total io=35862 hit=35782 miss=0 dirty=4\nWARNING: worker 4 delay=86.810000 total io=17931 hit=17891 miss=0 dirty=2\n\n*Test 2: Indexes are 16 but parallel workers are 
2,4,8:*\n\nCase 1) When 2 parallel workers:\n*Without shared_costing_plus_patch4_v1.patch:*\nWARNING: worker 0 delay=1513.230000 total io=307197 hit=85167 miss=22179\ndirty=12\nWARNING: worker 1 delay=1543.385000 total io=326553 hit=63133 miss=26322\ndirty=10\nWARNING: worker 2 delay=1633.625000 total io=302199 hit=65839 miss=23616\ndirty=10\n\n*With shared_costing_plus_patch4_v1.patch:*\nWARNING: worker 0 delay=1539.475000 total io=308175 hit=65175 miss=24280\ndirty=10\nWARNING: worker 1 delay=1251.200000 total io=250692 hit=71562 miss=17893\ndirty=10\nWARNING: worker 2 delay=1143.690000 total io=228987 hit=93857 miss=13489\ndirty=12\n\nCase 2) When 4 parallel workers:\n*Without shared_costing_plus_patch4_v1.patch:*\nWARNING: worker 0 delay=1182.430000 total io=213567 hit=16037 miss=19745\ndirty=4\nWARNING: worker 1 delay=1202.710000 total io=178941 hit=1 miss=17890\ndirty=2\nWARNING: worker 2 delay=210.000000 total io=89655 hit=89455 miss=0 dirty=10\nWARNING: worker 3 delay=270.000000 total io=71724 hit=71564 miss=0 dirty=8\nWARNING: worker 4 delay=851.825000 total io=188229 hit=58619 miss=12945\ndirty=8\n\n*With shared_costing_plus_patch4_v1.patch:*\nWARNING: worker 0 delay=1136.875000 total io=227679 hit=14469 miss=21313\ndirty=4\nWARNING: worker 1 delay=973.745000 total io=196881 hit=17891 miss=17891\ndirty=4\nWARNING: worker 2 delay=447.410000 total io=89655 hit=89455 miss=0 dirty=10\nWARNING: worker 3 delay=833.235000 total io=168228 hit=40958 miss=12715\ndirty=6\nWARNING: worker 4 delay=683.200000 total io=136488 hit=64368 miss=7196\ndirty=8\n\nCase 3) When 8 parallel workers:\n*Without shared_costing_plus_patch4_v1.patch:*\nWARNING: worker 0 delay=1022.300000 total io=178941 hit=1 miss=17890\ndirty=2\nWARNING: worker 1 delay=1072.770000 total io=178941 hit=1 miss=17890\ndirty=2\nWARNING: worker 2 delay=170.000000 total io=35862 hit=35782 miss=0 dirty=4\nWARNING: worker 3 delay=170.000000 total io=35862 hit=35782 miss=0 dirty=4\nWARNING: worker 4 
delay=140.035000 total io=35862 hit=35782 miss=0 dirty=4\nWARNING: worker 5 delay=200.000000 total io=53802 hit=53672 miss=1 dirty=6\nWARNING: worker 6 delay=130.000000 total io=35862 hit=35782 miss=0 dirty=4\nWARNING: worker 7 delay=150.000000 total io=53793 hit=53673 miss=0 dirty=6\n\n*With shared_costing_plus_patch4_v1.patch:*\nWARNING: worker 0 delay=872.800000 total io=178941 hit=1 miss=17890 dirty=2\nWARNING: worker 1 delay=885.950000 total io=178941 hit=1 miss=17890 dirty=2\nWARNING: worker 2 delay=175.680000 total io=35862 hit=35782 miss=0 dirty=4\nWARNING: worker 3 delay=259.560000 total io=53793 hit=53673 miss=0 dirty=6\nWARNING: worker 4 delay=169.945000 total io=35862 hit=35782 miss=0 dirty=4\nWARNING: worker 5 delay=613.845000 total io=125100 hit=45750 miss=7923\ndirty=6\nWARNING: worker 6 delay=171.895000 total io=35862 hit=35782 miss=0 dirty=4\nWARNING: worker 7 delay=176.505000 total io=35862 hit=35782 miss=0 dirty=4\n\n\nThanks and Regards\nMahendra Thalor\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 14 Nov 2019 17:02:26 +0530",
"msg_from": "Mahendra Singh <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 10:02 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> I've done some tests while changing shared buffer size, delays and\n> number of workers. The overall results has the similar tendency as the\n> result shared by Dilip and looks reasonable to me.\n>\n\nThanks, Sawada-san for repeating the tests. I can see from yours,\nDilip and Mahendra's testing that the delay is distributed depending\nupon the I/O done by a particular worker and the total I/O is also as\nexpected in various kinds of scenarios. So, I think this is a better\napproach. Do you agree or do you think we should still investigate more\non another approach as well?\n\nI would like to summarize this approach. The basic idea for parallel\nvacuum is to allow the parallel workers and master backend to have a\nshared view of vacuum cost related parameters (mainly\nVacuumCostBalance) and allow each worker to update it and then based\non that decide whether it needs to sleep. With this basic idea, we\nfound that in some cases the throttling is not accurate as explained\nwith an example in my email above [1] and then the tests performed by\nDilip and others in the following emails (In short, the workers doing\nmore I/O can be throttled less). Then as discussed in an email later\n[2], we tried a way to avoid letting the workers sleep which have done\nless or no I/O as compared to other workers. This ensured that\nworkers who are doing more I/O got throttled more. The idea is to\nallow any worker to sleep only if it has performed the I/O above a\ncertain threshold and the overall balance is more than the cost_limit\nset by the system. Then we will allow the worker to sleep\nproportional to the work done by it and reduce the\nVacuumSharedCostBalance by the amount which is consumed by the current\nworker. 
This scheme leads to the desired throttling by different\nworkers based on the work done by the individual worker.\n\nWe have tested this idea with various kinds of workloads like by\nvarying shared buffer size, delays and number of workers. Then also,\nwe have tried with a different number of indexes and workers. In all\nthe tests, we found that the workers are throttled proportional to the\nI/O being done by a particular worker.\n\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JvxBTWTPqHGx1X7in7j42ZYwuKOZUySzH3YMwTNRE-2Q%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAA4eK1K9kCqLKbVA9KUuuarjj%2BsNYqrmf6UAFok5VTgZ8evWoA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 15 Nov 2019 08:23:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Thu, Nov 14, 2019 at 5:02 PM Mahendra Singh <mahi6run@gmail.com> wrote:\n>\n> On Mon, 11 Nov 2019 at 17:56, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Nov 11, 2019 at 5:14 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 11, 2019 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > ..\n> > > > > I have tested the same with some other workload(test file attached).\n> > > > > I can see the same behaviour with this workload as well that with the\n> > > > > patch 4 the distribution of the delay is better compared to other\n> > > > > patches i.e. worker with more I/O have more delay and with equal IO\n> > > > > have alsomost equal delay. Only thing is that the total delay with\n> > > > > the patch 4 is slightly less compared to other pacthes.\n> > > > >\n> > > >\n> > > > I see one problem with the formula you have used in the patch, maybe\n> > > > that is causing the value of total delay to go down.\n> > > >\n> > > > - if (new_balance >= VacuumCostLimit)\n> > > > + VacuumCostBalanceLocal += VacuumCostBalance;\n> > > > + if ((new_balance >= VacuumCostLimit) &&\n> > > > + (VacuumCostBalanceLocal > VacuumCostLimit/(0.5 * nworker)))\n> > > >\n> > > > As per discussion, the second part of the condition should be\n> > > > \"VacuumCostBalanceLocal > (0.5) * VacuumCostLimit/nworker\".\n> > > My Bad\n> > > I think\n> > > > you can once change this and try again. Also, please try with the\n> > > > different values of threshold (0.3, 0.5, 0.7, etc.).\n> > > >\n> > > Okay, I will retest with both patch3 and path4 for both the scenarios.\n> > > I will also try with different multipliers.\n> > >\n> >\n> > One more thing, I think we should also test these cases with a varying\n> > number of indexes (say 2,6,8,etc.) and then probably, we should test\n> > by a varying number of workers where the number of workers are lesser\n> > than indexes. 
You can do these after finishing your previous\n> > experiments.\n>\n> On the top of parallel vacuum patch, I applied Dilip's patch(0001-vacuum_costing_test.patch). I have tested by varying number of indexes and number of workers. I compared shared costing(0001-vacuum_costing_test.patch) vs shared costing latest patch(shared_costing_plus_patch4_v1.patch).\n> With shared costing base patch, I can see that delay is not in sync compared to I/O which is resolved by applying patch (shared_costing_plus_patch4_v1.patch). I have also observed that total delay is slightly reduced with shared_costing_plus_patch4_v1.patch patch.\n>\n> Below is the full testing summary:\n> Test setup:\n> step1) Apply parallel vacuum patch\n> step2) Apply 0001-vacuum_costing_test.patch patch (on the top of this patch, delay is not in sync compared to I/O)\n> step3) Apply shared_costing_plus_patch4_v1.patch (delay is in sync compared to I/O)\n>\n> Configuration settings:\n> autovacuum = off\n> max_parallel_workers = 30\n> shared_buffers = 2GB\n> max_parallel_maintenance_workers = 20\n> vacuum_cost_limit = 2000\n> vacuum_cost_delay = 10\n>\n> Test 1: Varry indexes(2,4,6,8) but parallel workers are fixed as 4:\n>\n> Case 1) When indexes are 2:\n> Without shared_costing_plus_patch4_v1.patch:\n> WARNING: worker 0 delay=120.000000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 1 delay=60.000000 total io=17931 hit=17891 miss=0 dirty=2\n>\n> With shared_costing_plus_patch4_v1.patch:\n> WARNING: worker 0 delay=87.780000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 1 delay=87.995000 total io=17931 hit=17891 miss=0 dirty=2\n>\n> Case 2) When indexes are 4:\n> Without shared_costing_plus_patch4_v1.patch:\n> WARNING: worker 0 delay=120.000000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 1 delay=80.000000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 2 delay=60.000000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 3 delay=100.000000 total 
io=17931 hit=17891 miss=0 dirty=2\n>\n> With shared_costing_plus_patch4_v1.patch:\n> WARNING: worker 0 delay=87.430000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 1 delay=87.175000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 2 delay=86.340000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 3 delay=88.020000 total io=17931 hit=17891 miss=0 dirty=2\n>\n> Case 3) When indexes are 6:\n> Without shared_costing_plus_patch4_v1.patch:\n> WARNING: worker 0 delay=110.000000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 1 delay=100.000000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 2 delay=160.000000 total io=35862 hit=35782 miss=0 dirty=4\n> WARNING: worker 3 delay=90.000000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 4 delay=80.000000 total io=17931 hit=17891 miss=0 dirty=2\n>\n> With shared_costing_plus_patch4_v1.patch:\n> WARNING: worker 0 delay=173.195000 total io=35862 hit=35782 miss=0 dirty=4\n> WARNING: worker 1 delay=88.715000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 2 delay=87.710000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 3 delay=86.460000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 4 delay=89.435000 total io=17931 hit=17891 miss=0 dirty=2\n>\n> Case 4) When indexes are 8:\n> Without shared_costing_plus_patch4_v1.patch:\n> WARNING: worker 0 delay=170.000000 total io=35862 hit=35782 miss=0 dirty=4\n> WARNING: worker 1 delay=120.000000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 2 delay=130.000000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 3 delay=190.000000 total io=35862 hit=35782 miss=0 dirty=4\n> WARNING: worker 4 delay=110.000000 total io=35862 hit=35782 miss=0 dirty=4\n>\n> With shared_costing_plus_patch4_v1.patch:\n> WARNING: worker 0 delay=174.700000 total io=35862 hit=35782 miss=0 dirty=4\n> WARNING: worker 1 delay=177.880000 total io=35862 hit=35782 miss=0 dirty=4\n> WARNING: worker 2 
delay=89.460000 total io=17931 hit=17891 miss=0 dirty=2\n> WARNING: worker 3 delay=177.320000 total io=35862 hit=35782 miss=0 dirty=4\n> WARNING: worker 4 delay=86.810000 total io=17931 hit=17891 miss=0 dirty=2\n>\n> Test 2: Indexes are 16 but parallel workers are 2,4,8:\n>\n> Case 1) When 2 parallel workers:\n> Without shared_costing_plus_patch4_v1.patch:\n> WARNING: worker 0 delay=1513.230000 total io=307197 hit=85167 miss=22179 dirty=12\n> WARNING: worker 1 delay=1543.385000 total io=326553 hit=63133 miss=26322 dirty=10\n> WARNING: worker 2 delay=1633.625000 total io=302199 hit=65839 miss=23616 dirty=10\n>\n> With shared_costing_plus_patch4_v1.patch:\n> WARNING: worker 0 delay=1539.475000 total io=308175 hit=65175 miss=24280 dirty=10\n> WARNING: worker 1 delay=1251.200000 total io=250692 hit=71562 miss=17893 dirty=10\n> WARNING: worker 2 delay=1143.690000 total io=228987 hit=93857 miss=13489 dirty=12\n>\n> Case 2) When 4 parallel workers:\n> Without shared_costing_plus_patch4_v1.patch:\n> WARNING: worker 0 delay=1182.430000 total io=213567 hit=16037 miss=19745 dirty=4\n> WARNING: worker 1 delay=1202.710000 total io=178941 hit=1 miss=17890 dirty=2\n> WARNING: worker 2 delay=210.000000 total io=89655 hit=89455 miss=0 dirty=10\n> WARNING: worker 3 delay=270.000000 total io=71724 hit=71564 miss=0 dirty=8\n> WARNING: worker 4 delay=851.825000 total io=188229 hit=58619 miss=12945 dirty=8\n>\n> With shared_costing_plus_patch4_v1.patch:\n> WARNING: worker 0 delay=1136.875000 total io=227679 hit=14469 miss=21313 dirty=4\n> WARNING: worker 1 delay=973.745000 total io=196881 hit=17891 miss=17891 dirty=4\n> WARNING: worker 2 delay=447.410000 total io=89655 hit=89455 miss=0 dirty=10\n> WARNING: worker 3 delay=833.235000 total io=168228 hit=40958 miss=12715 dirty=6\n> WARNING: worker 4 delay=683.200000 total io=136488 hit=64368 miss=7196 dirty=8\n>\n> Case 3) When 8 parallel workers:\n> Without shared_costing_plus_patch4_v1.patch:\n> WARNING: worker 0 delay=1022.300000 total 
io=178941 hit=1 miss=17890 dirty=2\n> WARNING: worker 1 delay=1072.770000 total io=178941 hit=1 miss=17890 dirty=2\n> WARNING: worker 2 delay=170.000000 total io=35862 hit=35782 miss=0 dirty=4\n> WARNING: worker 3 delay=170.000000 total io=35862 hit=35782 miss=0 dirty=4\n> WARNING: worker 4 delay=140.035000 total io=35862 hit=35782 miss=0 dirty=4\n> WARNING: worker 5 delay=200.000000 total io=53802 hit=53672 miss=1 dirty=6\n> WARNING: worker 6 delay=130.000000 total io=35862 hit=35782 miss=0 dirty=4\n> WARNING: worker 7 delay=150.000000 total io=53793 hit=53673 miss=0 dirty=6\n>\n> With shared_costing_plus_patch4_v1.patch:\n> WARNING: worker 0 delay=872.800000 total io=178941 hit=1 miss=17890 dirty=2\n> WARNING: worker 1 delay=885.950000 total io=178941 hit=1 miss=17890 dirty=2\n> WARNING: worker 2 delay=175.680000 total io=35862 hit=35782 miss=0 dirty=4\n> WARNING: worker 3 delay=259.560000 total io=53793 hit=53673 miss=0 dirty=6\n> WARNING: worker 4 delay=169.945000 total io=35862 hit=35782 miss=0 dirty=4\n> WARNING: worker 5 delay=613.845000 total io=125100 hit=45750 miss=7923 dirty=6\n> WARNING: worker 6 delay=171.895000 total io=35862 hit=35782 miss=0 dirty=4\n> WARNING: worker 7 delay=176.505000 total io=35862 hit=35782 miss=0 dirty=4\n\nIt seems that the bigger delay difference (8% - 9 %), which is\nobserved with the higher number of indexes is due to the IO\ndifference, for example in case3, the total page miss without patch is\n35780 whereas with the patch it is 43703. So it seems that with more\nindexes your data is not fitting in the shared buffer so the page\nhits/misses are varying run to run and that will cause variance in the\ntotal delay. Another problem where delay with the patch is 2-3%\nlesser, is basically the problem of the \"0001-vacuum_costing_test\"\npatch because that patch is only displaying the delay during the index\nvacuuming phase, not the total delay. So if we observe the total\ndelay then it should be the same. 
The modified version of\n0001-vacuum_costing_test is attached to print the total delay.\n\nIn my test.sh, I can see the total delay is almost the same.\n\nNon-parallel vacuum\nWARNING: VacuumCostTotalDelay = 11332.170000\n\nParallel vacuum with shared_costing_plus_patch4_v1.patch:\nWARNING: worker 0 delay=89.230000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 1 delay=85.205000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 2 delay=87.290000 total io=17931 hit=17891 miss=0 dirty=2\nWARNING: worker 3 delay=78.365000 total io=16378 hit=4318 miss=0 dirty=603\n\nWARNING: VacuumCostTotalDelay = 11331.690000\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 15 Nov 2019 15:44:18 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
},
{
"msg_contents": "On Fri, 15 Nov 2019 at 11:54, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Nov 13, 2019 at 10:02 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > I've done some tests while changing shared buffer size, delays and\n> > number of workers. The overall results has the similar tendency as the\n> > result shared by Dilip and looks reasonable to me.\n> >\n>\n> Thanks, Sawada-san for repeating the tests. I can see from yours,\n> Dilip and Mahendra's testing that the delay is distributed depending\n> upon the I/O done by a particular worker and the total I/O is also as\n> expected in various kinds of scenarios. So, I think this is a better\n> approach. Do you agree or you think we should still investigate more\n> on another approach as well?\n>\n> I would like to summarize this approach. The basic idea for parallel\n> vacuum is to allow the parallel workers and master backend to have a\n> shared view of vacuum cost related parameters (mainly\n> VacuumCostBalance) and allow each worker to update it and then based\n> on that decide whether it needs to sleep. With this basic idea, we\n> found that in some cases the throttling is not accurate as explained\n> with an example in my email above [1] and then the tests performed by\n> Dilip and others in the following emails (In short, the workers doing\n> more I/O can be throttled less). Then as discussed in an email later\n> [2], we tried a way to avoid letting the workers sleep which has done\n> less or no I/O as compared to other workers. This ensured that\n> workers who are doing more I/O got throttled more. The idea is to\n> allow any worker to sleep only if it has performed the I/O above a\n> certain threshold and the overall balance is more than the cost_limit\n> set by the system. Then we will allow the worker to sleep\n> proportional to the work done by it and reduce the\n> VacuumSharedCostBalance by the amount which is consumed by the current\n> worker. 
This scheme leads to the desired throttling by different\n> workers based on the work done by the individual worker.\n>\n> We have tested this idea with various kinds of workloads like by\n> varying shared buffer size, delays and number of workers. Then also,\n> we have tried with a different number of indexes and workers. In all\n> the tests, we found that the workers are throttled proportional to the\n> I/O being done by a particular worker.\n\nThank you for summarizing!\n\nI agreed to this approach.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 18 Nov 2019 15:40:59 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: cost based vacuum (parallel)"
}
]
[
{
"msg_contents": "Hello, hackers!\n\nI`d like to propose a new argument for recovery_target parameter, which \nwill stand to recovering until all available WAL segments are applied.\n\nCurrent PostgreSQL recovery default behavior(when no recovery target is \nprovided) does exactly that, but there are several shortcomings:\n - without explicit recovery target standing for default behavior, \nrecovery_target_action is not coming to action at the end of recovery\n - with PG12 changes, the life of all backup tools became very hard, \nbecause now recovery parameters can be set outside of single config \nfile(recovery.conf), so it is impossible to ensure, that default \nrecovery behavior, desired in some cases, will not be silently \noverwritten by some recovery parameter forgotten by user.\n\nProposed patch is very simple and solves the aforementioned problems by \nintroducing new argument \"latest\" for recovery_target parameter.\n\nOld recovery behavior is still available if no recovery target is \nprovided. I`m not sure, whether it should be left as it is now, or not.\n\nAnother open question is what to do with recovery_target_inclusive if \nrecovery_target = \"latest\" is used.\n\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 4 Nov 2019 16:03:38 +0300",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "[proposal] recovery_target \"latest\""
},
{
"msg_contents": "Hello.\n\nAt Mon, 4 Nov 2019 16:03:38 +0300, Grigory Smolkin <g.smolkin@postgrespro.ru> wrote in \n> Hello, hackers!\n> \n> I`d like to propose a new argument for recovery_target parameter,\n> which will stand to recovering until all available WAL segments are\n> applied.\n> \n> Current PostgreSQL recovery default behavior(when no recovery target\n> is provided) does exactly that, but there are several shortcomings:\n> - without explicit recovery target standing for default behavior,\n> recovery_target_action is not coming to action at the end of recovery\n> - with PG12 changes, the life of all backup tools became very hard,\n> because now recovery parameters can be set outside of single config\n> file(recovery.conf), so it is impossible to ensure, that default\n> recovery behavior, desired in some cases, will not be silently\n> overwritten by some recovery parameter forgotten by user.\n> \n> Proposed path is very simple and solves the aforementioned problems by\n> introducing new argument \"latest\" for recovery_target parameter.\n\nDoes the tool remove or rename recovery.conf to cancel the settings?\nAnd do you intend that the new option is used to override settings by\nappending it at the end of postgresql.conf? If so, the commit\nf2cbffc7a6 seems to break the assumption. PG12 rejects to start if it\nfinds two different kinds of recovery target settings.\n\n> Old recovery behavior is still available if no recovery target is\n> provided. I`m not sure, whether it should it be left as it is now, or\n> not.\n> \n> Another open question is what to do with recovery_target_inclusive if\n> recovery_target = \"latest\" is used.\n\nAnyway inclusiveness doesn't affect to \"immediate\". If we had the\n\"latest\" option, it would behave the same way.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 05 Nov 2019 16:41:00 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "Thank you for your interest in this topic!\n\nOn 11/5/19 10:41 AM, Kyotaro Horiguchi wrote:\n> Hello.\n>\n> At Mon, 4 Nov 2019 16:03:38 +0300, Grigory Smolkin <g.smolkin@postgrespro.ru> wrote in\n>> Hello, hackers!\n>>\n>> I`d like to propose a new argument for recovery_target parameter,\n>> which will stand to recovering until all available WAL segments are\n>> applied.\n>>\n>> Current PostgreSQL recovery default behavior(when no recovery target\n>> is provided) does exactly that, but there are several shortcomings:\n>> - without explicit recovery target standing for default behavior,\n>> recovery_target_action is not coming to action at the end of recovery\n>> - with PG12 changes, the life of all backup tools became very hard,\n>> because now recovery parameters can be set outside of single config\n>> file(recovery.conf), so it is impossible to ensure, that default\n>> recovery behavior, desired in some cases, will not be silently\n>> overwritten by some recovery parameter forgotten by user.\n>>\n>> Proposed path is very simple and solves the aforementioned problems by\n>> introducing new argument \"latest\" for recovery_target parameter.\n> Does the tool remove or rename recovery.conf to cancel the settings?\n> And do you intend that the new option is used to override settings by\n> appending it at the end of postgresql.conf? If so, the commit\n> f2cbffc7a6 seems to break the assumption. PG12 rejects to start if it\n> finds two different kinds of recovery target settings.\nYes, previously it was possible to remove/rename old recovery.conf, but \nnot anymore.\nMy assumption is exactly that PG should reject to start because of \nmultiple recovery targets.\nFailing to start is infinitely better than recovering to the wrong \nrecovery target.\n>\n>> Old recovery behavior is still available if no recovery target is\n>> provided. 
I`m not sure, whether it should it be left as it is now, or\n>> not.\n>>\n>> Another open question is what to do with recovery_target_inclusive if\n>> recovery_target = \"latest\" is used.\n> Anyway inclusiveness doesn't affect to \"immediate\". If we had the\n> \"latest\" option, it would behave the same way.\nRight, thank you.\n>\n> regards.\n>\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 5 Nov 2019 11:51:26 +0300",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "Attached new version of a patch with TAP test.\n\nOn 11/5/19 11:51 AM, Grigory Smolkin wrote:\n> Thank you for you interest in this topic!\n>\n> On 11/5/19 10:41 AM, Kyotaro Horiguchi wrote:\n>> Hello.\n>>\n>> At Mon, 4 Nov 2019 16:03:38 +0300, Grigory Smolkin \n>> <g.smolkin@postgrespro.ru> wrote in\n>>> Hello, hackers!\n>>>\n>>> I`d like to propose a new argument for recovery_target parameter,\n>>> which will stand to recovering until all available WAL segments are\n>>> applied.\n>>>\n>>> Current PostgreSQL recovery default behavior(when no recovery target\n>>> is provided) does exactly that, but there are several shortcomings:\n>>> - without explicit recovery target standing for default behavior,\n>>> recovery_target_action is not coming to action at the end of recovery\n>>> - with PG12 changes, the life of all backup tools became very hard,\n>>> because now recovery parameters can be set outside of single config\n>>> file(recovery.conf), so it is impossible to ensure, that default\n>>> recovery behavior, desired in some cases, will not be silently\n>>> overwritten by some recovery parameter forgotten by user.\n>>>\n>>> Proposed path is very simple and solves the aforementioned problems by\n>>> introducing new argument \"latest\" for recovery_target parameter.\n>> Does the tool remove or rename recovery.conf to cancel the settings?\n>> And do you intend that the new option is used to override settings by\n>> appending it at the end of postgresql.conf? If so, the commit\n>> f2cbffc7a6 seems to break the assumption. 
PG12 rejects to start if it\n>> finds two different kinds of recovery target settings.\n> Yes, previously it was possible to remove/rename old recovery.conf, \n> but not anymore.\n> My assumption is exactly that PG should reject to start because of \n> multiple recovery targets.\n> Failing to start is infinitely better that recovering to the wrong \n> recovery target.\n>>\n>>> Old recovery behavior is still available if no recovery target is\n>>> provided. I`m not sure, whether it should it be left as it is now, or\n>>> not.\n>>>\n>>> Another open question is what to do with recovery_target_inclusive if\n>>> recovery_target = \"latest\" is used.\n>> Anyway inclusiveness doesn't affect to \"immediate\". If we had the\n>> \"latest\" option, it would behave the same way.\n> Right, thank you.\n>>\n>> regards.\n>>\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 5 Nov 2019 13:39:45 +0300",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "This seems to also be related to this discussion: \n<https://www.postgresql.org/message-id/flat/993736dd3f1713ec1f63fc3b653839f5@lako.no>\n\nI like this idea.\n\nI don't like the name \"latest\". What does that mean? Other \ndocumentation talks about the \"end of the archive\". What does that \nmean? It means until restore_command errors. Let's think of a name \nthat reflects that better. Maybe \"all_archive\" or something like that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 6 Nov 2019 08:39:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "\nOn 11/6/19 10:39 AM, Peter Eisentraut wrote:\n> This seems to also be related to this discussion: \n> <https://www.postgresql.org/message-id/flat/993736dd3f1713ec1f63fc3b653839f5@lako.no>\n\nYes, in a way. Strengthening current lax recovery behavior is a very \ngood idea.\n\n>\n> I like this idea.\n>\n> I don't like the name \"latest\". What does that mean? Other \n> documentation talks about the \"end of the archive\". What does that \n> mean? It means until restore_command errors. Let's think of a name \n> that reflects that better. Maybe \"all_archive\" or something like that.\n\nAs with \"immediate\", \"latest\" reflects the latest possible state this \nPostgreSQL instance can achieve when using PITR. I think it is simple \nand easy to understand for an end user, which sees PITR as a way to go \nfrom one state to another. In my experience, at least, which is, of \ncourse, subjective.\n\nBut if we want an argument name to be technically accurate, then, I \nthink, something like \"end-of-available-WAL\"/\"all-WAL\", \"end-of-WAL\" is \na way to go.\n\n\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 6 Nov 2019 11:33:29 +0200",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "On Wed, Nov 6, 2019 at 6:33 PM Grigory Smolkin <g.smolkin@postgrespro.ru> wrote:\n>\n>\n> On 11/6/19 10:39 AM, Peter Eisentraut wrote:\n> > This seems to also be related to this discussion:\n> > <https://www.postgresql.org/message-id/flat/993736dd3f1713ec1f63fc3b653839f5@lako.no>\n>\n> Yes, in a way. Strengthening current lax recovery behavior is a very\n> good idea.\n>\n> >\n> > I like this idea.\n> >\n> > I don't like the name \"latest\". What does that mean? Other\n> > documentation talks about the \"end of the archive\". What does that\n> > mean? It means until restore_command errors. Let's think of a name\n> > that reflects that better. Maybe \"all_archive\" or something like that.\n>\n> As with \"immediate\", \"latest\" reflects the latest possible state this\n> PostgreSQL instance can achieve when using PITR. I think it is simple\n> and easy to understand for an end user, which sees PITR as a way to go\n> from one state to another. In my experience, at least, which is, of\n> course, subjective.\n>\n> But if we want an argument name to be technically accurate, then, I\n> think, something like \"end-of-available-WAL\"/\"all-WAL\", \"end-of-WAL\" is\n> a way to go.\n\nWhat happens if this parameter is set to latest in the standby mode?\nOr the combination of those settings should be prohibited?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 6 Nov 2019 18:56:28 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "\nOn 11/6/19 12:56 PM, Fujii Masao wrote:\n> On Wed, Nov 6, 2019 at 6:33 PM Grigory Smolkin <g.smolkin@postgrespro.ru> wrote:\n>>\n>> On 11/6/19 10:39 AM, Peter Eisentraut wrote:\n>>> This seems to also be related to this discussion:\n>>> <https://www.postgresql.org/message-id/flat/993736dd3f1713ec1f63fc3b653839f5@lako.no>\n>> Yes, in a way. Strengthening current lax recovery behavior is a very\n>> good idea.\n>>\n>>> I like this idea.\n>>>\n>>> I don't like the name \"latest\". What does that mean? Other\n>>> documentation talks about the \"end of the archive\". What does that\n>>> mean? It means until restore_command errors. Let's think of a name\n>>> that reflects that better. Maybe \"all_archive\" or something like that.\n>> As with \"immediate\", \"latest\" reflects the latest possible state this\n>> PostgreSQL instance can achieve when using PITR. I think it is simple\n>> and easy to understand for an end user, which sees PITR as a way to go\n>> from one state to another. In my experience, at least, which is, of\n>> course, subjective.\n>>\n>> But if we want an argument name to be technically accurate, then, I\n>> think, something like \"end-of-available-WAL\"/\"all-WAL\", \"end-of-WAL\" is\n>> a way to go.\n> What happens if this parameter is set to latest in the standby mode?\n> Or the combination of those settings should be prohibited?\n\n\nCurrently it will behave just like regular standby, so I think, to avoid \nconfusion and keep things simple, the combination of them should be \nprohibited.\nThank you for pointing this out, I will work on it.\n\nThe other way around, as I see it, is to define RECOVERY_TARGET_LATEST \nas something more complex, for example, the latest possible endptr in \nlatest WAL segment. 
But it is tricky, because WAL archive may keep \ngrowing as recovery is progressing or, in case of standby, master keeps \nsending more and more WAL.\n\n>\n> Regards,\n>\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 6 Nov 2019 12:55:18 +0200",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "On 11/6/19 1:55 PM, Grigory Smolkin wrote:\n>\n> On 11/6/19 12:56 PM, Fujii Masao wrote:\n>> On Wed, Nov 6, 2019 at 6:33 PM Grigory Smolkin \n>> <g.smolkin@postgrespro.ru> wrote:\n>>>\n>>> On 11/6/19 10:39 AM, Peter Eisentraut wrote:\n>>>> This seems to also be related to this discussion:\n>>>> <https://www.postgresql.org/message-id/flat/993736dd3f1713ec1f63fc3b653839f5@lako.no> \n>>>>\n>>> Yes, in a way. Strengthening current lax recovery behavior is a very\n>>> good idea.\n>>>\n>>>> I like this idea.\n>>>>\n>>>> I don't like the name \"latest\". What does that mean? Other\n>>>> documentation talks about the \"end of the archive\". What does that\n>>>> mean? It means until restore_command errors. Let's think of a name\n>>>> that reflects that better. Maybe \"all_archive\" or something like \n>>>> that.\n>>> As with \"immediate\", \"latest\" reflects the latest possible state this\n>>> PostgreSQL instance can achieve when using PITR. I think it is simple\n>>> and easy to understand for an end user, which sees PITR as a way to go\n>>> from one state to another. 
In my experience, at least, which is, of\n>>> course, subjective.\n>>>\n>>> But if we want an argument name to be technically accurate, then, I\n>>> think, something like \"end-of-available-WAL\"/\"all-WAL\", \"end-of-WAL\" is\n>>> a way to go.\n>> What happens if this parameter is set to latest in the standby mode?\n>> Or the combination of those settings should be prohibited?\n>\n>\n> Currently it will behave just like regular standby, so I think, to \n> avoid confusion and keep things simple, the combination of them should \n> be prohibited.\n> Thank you for pointing this out, I will work on it.\n\nAttached new patch revision, now it is impossible to use recovery_target \n'latest' in standby mode.\nTAP test is updated to reflect this behavior.\n\n\n>\n> The other way around, as I see it, is to define RECOVERY_TARGET_LATEST \n> as something more complex, for example, the latest possible endptr in \n> latest WAL segment. But it is tricky, because WAL archive may keeps \n> growing as recovery is progressing or, in case of standby, master \n> keeps sending more and more WAL.\n>\n>>\n>> Regards,\n>>\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 7 Nov 2019 02:28:39 +0300",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "At Thu, 7 Nov 2019 02:28:39 +0300, Grigory Smolkin <g.smolkin@postgrespro.ru> wrote in \n> On 11/6/19 1:55 PM, Grigory Smolkin wrote:\n> >\n> > On 11/6/19 12:56 PM, Fujii Masao wrote:\n> >> On Wed, Nov 6, 2019 at 6:33 PM Grigory Smolkin\n> >> <g.smolkin@postgrespro.ru> wrote:\n> >>>\n> >>> On 11/6/19 10:39 AM, Peter Eisentraut wrote:\n> >>>> This seems to also be related to this discussion:\n> >>>> <https://www.postgresql.org/message-id/flat/993736dd3f1713ec1f63fc3b653839f5@lako.no>\n> >>> Yes, in a way. Strengthening current lax recovery behavior is a very\n> >>> good idea.\n> >>>\n> >>>> I like this idea.\n> >>>>\n> >>>> I don't like the name \"latest\". What does that mean? Other\n> >>>> documentation talks about the \"end of the archive\". What does that\n> >>>> mean? It means until restore_command errors. Let's think of a name\n> >>>> that reflects that better. Maybe \"all_archive\" or something like\n> >>>> that.\n> >>> As with \"immediate\", \"latest\" reflects the latest possible state this\n> >>> PostgreSQL instance can achieve when using PITR. I think it is simple\n> >>> and easy to understand for an end user, which sees PITR as a way to go\n> >>> from one state to another. 
In my experience, at least, which is, of\n> >>> course, subjective.\n> >>>\n> >>> But if we want an argument name to be technically accurate, then, I\n> >>> think, something like \"end-of-available-WAL\"/\"all-WAL\", \"end-of-WAL\"\n> >>> is\n> >>> a way to go.\n> >> What happens if this parameter is set to latest in the standby mode?\n> >> Or the combination of those settings should be prohibited?\n> >\n> >\n> > Currently it will behave just like regular standby, so I think, to\n> > avoid confusion and keep things simple, the combination of them should\n> > be prohibited.\n> > Thank you for pointing this out, I will work on it.\n> \n> Attached new patch revision, now it is impossible to use\n> recovery_target 'latest' in standby mode.\n> TAP test is updated to reflect this behavior.\n\nIn the first place, latest (or anything it could be named as) is\ndefined as the explicit label for the default behavior. Thus the latest\nshould work as if nothing is set to recovery_target* following the\ndefinition. That might seem somewhat strange but I think at least it\nis harmless.\n\nrecovery_target=immediate + r_t_action=shutdown for a standby works as\ncommanded. Do we need to inhibit that, too?\n\n> > The other way around, as I see it, is to define RECOVERY_TARGET_LATEST\n> > as something more complex, for example, the latest possible endptr in\n> > latest WAL segment. But it is tricky, because WAL archive may keeps\n> > growing as recovery is progressing or, in case of standby, master\n> > keeps sending more and more WAL.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 07 Nov 2019 14:36:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "\nOn 11/7/19 8:36 AM, Kyotaro Horiguchi wrote:\n> At Thu, 7 Nov 2019 02:28:39 +0300, Grigory Smolkin <g.smolkin@postgrespro.ru> wrote in\n>> On 11/6/19 1:55 PM, Grigory Smolkin wrote:\n>>> On 11/6/19 12:56 PM, Fujii Masao wrote:\n>>>> On Wed, Nov 6, 2019 at 6:33 PM Grigory Smolkin\n>>>> <g.smolkin@postgrespro.ru> wrote:\n>>>>> On 11/6/19 10:39 AM, Peter Eisentraut wrote:\n>>>>>> This seems to also be related to this discussion:\n>>>>>> <https://www.postgresql.org/message-id/flat/993736dd3f1713ec1f63fc3b653839f5@lako.no>\n>>>>> Yes, in a way. Strengthening current lax recovery behavior is a very\n>>>>> good idea.\n>>>>>\n>>>>>> I like this idea.\n>>>>>>\n>>>>>> I don't like the name \"latest\". What does that mean? Other\n>>>>>> documentation talks about the \"end of the archive\". What does that\n>>>>>> mean? It means until restore_command errors. Let's think of a name\n>>>>>> that reflects that better. Maybe \"all_archive\" or something like\n>>>>>> that.\n>>>>> As with \"immediate\", \"latest\" reflects the latest possible state this\n>>>>> PostgreSQL instance can achieve when using PITR. I think it is simple\n>>>>> and easy to understand for an end user, which sees PITR as a way to go\n>>>>> from one state to another. 
In my experience, at least, which is, of\n>>>>> course, subjective.\n>>>>>\n>>>>> But if we want an argument name to be technically accurate, then, I\n>>>>> think, something like \"end-of-available-WAL\"/\"all-WAL\", \"end-of-WAL\"\n>>>>> is\n>>>>> a way to go.\n>>>> What happens if this parameter is set to latest in the standby mode?\n>>>> Or the combination of those settings should be prohibited?\n>>>\n>>> Currently it will behave just like regular standby, so I think, to\n>>> avoid confusion and keep things simple, the combination of them should\n>>> be prohibited.\n>>> Thank you for pointing this out, I will work on it.\n>> Attached new patch revision, now it is impossible to use\n>> recovery_target 'latest' in standby mode.\n>> TAP test is updated to reflect this behavior.\n> In the first place, latest (or anything it could be named as) is\n> defined as the explit label for the default behavior. Thus the latest\n> should work as if nothing is set to recovery_target* following the\n> definition. That might seems somewhat strange but I think at least it\n> is harmless.\n\n\nWell, it was more about getting default behavior by using some explicit \nrecovery_target, not the other way around. Because it will break some \n3rd party backup and replication applications, that may rely on the old \nbehavior of ignoring recovery_target_action when no recovery_target is \nprovided.\nBut if you think that it is worth pursuing, I can do that.\n\n\n> recovery_target=immediate + r_t_action=shutdown for a standby works as\n> commanded. Do we need to inhibit that, too?\n\nWhy should something that works as expected be inhibited?\n\n\n>\n>>> The other way around, as I see it, is to define RECOVERY_TARGET_LATEST\n>>> as something more complex, for example, the latest possible endptr in\n>>> latest WAL segment. 
But it is tricky, because WAL archive may keeps\n>>> growing as recovery is progressing or, in case of standby, master\n>>> keeps sending more and more WAL.\n> regards.\n>\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Thu, 7 Nov 2019 12:22:28 +0300",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "At Thu, 7 Nov 2019 12:22:28 +0300, Grigory Smolkin <g.smolkin@postgrespro.ru> wrote in \n> \n> On 11/7/19 8:36 AM, Kyotaro Horiguchi wrote:\n> > At Thu, 7 Nov 2019 02:28:39 +0300, Grigory Smolkin\n> > <g.smolkin@postgrespro.ru> wrote in\n> >> On 11/6/19 1:55 PM, Grigory Smolkin wrote:\n> >>> On 11/6/19 12:56 PM, Fujii Masao wrote:\n> >>>> What happens if this parameter is set to latest in the standby mode?\n> >>>> Or the combination of those settings should be prohibited?\n> >>>\n> >>> Currently it will behave just like regular standby, so I think, to\n> >>> avoid confusion and keep things simple, the combination of them should\n> >>> be prohibited.\n> >>> Thank you for pointing this out, I will work on it.\n> >> Attached new patch revision, now it is impossible to use\n> >> recovery_target 'latest' in standby mode.\n> >> TAP test is updated to reflect this behavior.\n> > In the first place, latest (or anything it could be named as) is\n> > defined as the explit label for the default behavior. Thus the latest\n> > should work as if nothing is set to recovery_target* following the\n> > definition. That might seems somewhat strange but I think at least it\n> > is harmless.\n> \n> \n> Well, it was more about getting default behavior by using some\n> explicit recovery_target, not the other way around. Because it will\n> break some 3rd party backup and replication applications, that may\n> rely on old behavior of ignoring recovery_target_action when no\n> recovery_target is provided.\n> But you think that it is worth pursuing, I can do that.\n\nAh. Sorry for the misleading statement. What I had in my mind was\nsomewhat the mixture of them. I thought that recovery_target =''\nbehaves the same way as now, r_t_action is ignored. And 'latest' just\nmakes recovery_target_action work as the current non-empty\nrecovery_target's does. 
But I'm not confident that it is a good\ndesign.\n\n> > recovery_target=immediate + r_t_action=shutdown for a standby works as\n> > commanded. Do we need to inhibit that, too?\n> \n> Why something, that work as expected, should be inhibited?\n\nTo make sure, I don't think we should do that. I meant by the above\nthat standby mode is already accepting recovery_target_action so\ninhibiting that only for 'latest' is not orthogonal and could be more\nconfusing for users, and complicating the code. So my opinion is we\nshouldn't inhibit 'latest' unless r_t_action harms.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 07 Nov 2019 18:56:13 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "\nOn 11/7/19 12:56 PM, Kyotaro Horiguchi wrote:\n> At Thu, 7 Nov 2019 12:22:28 +0300, Grigory Smolkin <g.smolkin@postgrespro.ru> wrote in\n>> On 11/7/19 8:36 AM, Kyotaro Horiguchi wrote:\n>>> At Thu, 7 Nov 2019 02:28:39 +0300, Grigory Smolkin\n>>> <g.smolkin@postgrespro.ru> wrote in\n>>>> On 11/6/19 1:55 PM, Grigory Smolkin wrote:\n>>>>> On 11/6/19 12:56 PM, Fujii Masao wrote:\n>>>>>> What happens if this parameter is set to latest in the standby mode?\n>>>>>> Or the combination of those settings should be prohibited?\n>>>>> Currently it will behave just like regular standby, so I think, to\n>>>>> avoid confusion and keep things simple, the combination of them should\n>>>>> be prohibited.\n>>>>> Thank you for pointing this out, I will work on it.\n>>>> Attached new patch revision, now it is impossible to use\n>>>> recovery_target 'latest' in standby mode.\n>>>> TAP test is updated to reflect this behavior.\n>>> In the first place, latest (or anything it could be named as) is\n>>> defined as the explit label for the default behavior. Thus the latest\n>>> should work as if nothing is set to recovery_target* following the\n>>> definition. That might seems somewhat strange but I think at least it\n>>> is harmless.\n>>\n>> Well, it was more about getting default behavior by using some\n>> explicit recovery_target, not the other way around. Because it will\n>> break some 3rd party backup and replication applications, that may\n>> rely on old behavior of ignoring recovery_target_action when no\n>> recovery_target is provided.\n>> But you think that it is worth pursuing, I can do that.\n> Ah. Sorry for the misleading statement. What I had in my mind was\n> somewhat the mixture of them. I thought that recovery_target =''\n> behaves the same way as now, r_t_action is ignored. And 'latest' just\n> makes recovery_target_action work as the current non-empty\n> recovery_target's does. 
But I'm not confident that it is a good\n> design.\n>\n>>> recovery_target=immediate + r_t_action=shutdown for a standby works as\n>>> commanded. Do we need to inhibit that, too?\n>> Why something, that work as expected, should be inhibited?\n> To make sure, I don't think we should do that. I meant by the above\n> that standby mode is already accepting recovery_target_action so\n> inhibiting that only for 'latest' is not orthogonal and could be more\n> confusing for users, and complicatig the code. So my opinion is we\n> shouldn't inhibit 'latest' unless r_t_action harms.\n\nI gave it some thought and now think that prohibiting the combination of \nrecovery_target 'latest' and standby mode was a bad idea.\nAll recovery_targets follow the same pattern of usage, so \nrecovery_target \"latest\" also must be capable of working in standby mode.\nAll recovery_targets have a clear deterministic 'target' where recovery \nshould end.\nIn case of recovery_target \"latest\" this target is the end of available \nWAL, therefore the end of available WAL must be more clearly defined.\nI will work on it.\n\nThank you for the feedback.\n\n\n>\n> regards.\n>\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Thu, 7 Nov 2019 16:36:09 +0300",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "On 11/7/19 4:36 PM, Grigory Smolkin wrote:\n>\n> On 11/7/19 12:56 PM, Kyotaro Horiguchi wrote:\n>> At Thu, 7 Nov 2019 12:22:28 +0300, Grigory Smolkin \n>> <g.smolkin@postgrespro.ru> wrote in\n>>> On 11/7/19 8:36 AM, Kyotaro Horiguchi wrote:\n>>>> At Thu, 7 Nov 2019 02:28:39 +0300, Grigory Smolkin\n>>>> <g.smolkin@postgrespro.ru> wrote in\n>>>>> On 11/6/19 1:55 PM, Grigory Smolkin wrote:\n>>>>>> On 11/6/19 12:56 PM, Fujii Masao wrote:\n>>>>>>> What happens if this parameter is set to latest in the standby \n>>>>>>> mode?\n>>>>>>> Or the combination of those settings should be prohibited?\n>>>>>> Currently it will behave just like regular standby, so I think, to\n>>>>>> avoid confusion and keep things simple, the combination of them \n>>>>>> should\n>>>>>> be prohibited.\n>>>>>> Thank you for pointing this out, I will work on it.\n>>>>> Attached new patch revision, now it is impossible to use\n>>>>> recovery_target 'latest' in standby mode.\n>>>>> TAP test is updated to reflect this behavior.\n>>>> In the first place, latest (or anything it could be named as) is\n>>>> defined as the explit label for the default behavior. Thus the latest\n>>>> should work as if nothing is set to recovery_target* following the\n>>>> definition. That might seems somewhat strange but I think at least it\n>>>> is harmless.\n>>>\n>>> Well, it was more about getting default behavior by using some\n>>> explicit recovery_target, not the other way around. Because it will\n>>> break some 3rd party backup and replication applications, that may\n>>> rely on old behavior of ignoring recovery_target_action when no\n>>> recovery_target is provided.\n>>> But you think that it is worth pursuing, I can do that.\n>> Ah. Sorry for the misleading statement. What I had in my mind was\n>> somewhat the mixture of them. I thought that recovery_target =''\n>> behaves the same way as now, r_t_action is ignored. 
And 'latest' just\n>> makes recovery_target_action work as the current non-empty\n>> recovery_target's does. But I'm not confident that it is a good\n>> design.\n>>\n>>>> recovery_target=immediate + r_t_action=shutdown for a standby works as\n>>>> commanded. Do we need to inhibit that, too?\n>>> Why something, that work as expected, should be inhibited?\n>> To make sure, I don't think we should do that. I meant by the above\n>> that standby mode is already accepting recovery_target_action so\n>> inhibiting that only for 'latest' is not orthogonal and could be more\n>> confusing for users, and complicatig the code. So my opinion is we\n>> shouldn't inhibit 'latest' unless r_t_action harms.\n>\n> I gave it some thought and now think that prohibiting recovery_target \n> 'latest' and standby was a bad idea.\n> All recovery_targets follow the same pattern of usage, so \n> recovery_target \"latest\" also must be capable of working in standby mode.\n> All recovery_targets have a clear deterministic 'target' where recovery \n> should end.\n> In case of recovery_target \"latest\" this target is the end of available \n> WAL, therefore the end of available WAL must be more clearly defined.\n> I will work on it.\n>\n> Thank you for a feedback.\n\n\nAttached new patch revision; now the end of available WAL is defined as the \nfact that required WAL is missing.\nIn case of standby, the end of WAL is defined as 2 consecutive switches \nof WAL source that didn't provide the requested record.\nIn case of streaming standby, each switch of WAL source is forced after \n3 failed attempts to get new data from walreceiver.\n\nAll constants are taken off the top of my head and serve as proof of \nconcept.\nTAP test is updated.\n\n\n>\n>\n>>\n>>\n>>>\n>>> regards.\n>>>\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 8 Nov 2019 07:00:24 +0300",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "On 11/8/19 7:00 AM, Grigory Smolkin wrote:\n>\n> On 11/7/19 4:36 PM, Grigory Smolkin wrote:\n>>\n>> On 11/7/19 12:56 PM, Kyotaro Horiguchi wrote:\n>>> At Thu, 7 Nov 2019 12:22:28 +0300, Grigory Smolkin \n>>> <g.smolkin@postgrespro.ru> wrote in\n>>>> On 11/7/19 8:36 AM, Kyotaro Horiguchi wrote:\n>>>>> At Thu, 7 Nov 2019 02:28:39 +0300, Grigory Smolkin\n>>>>> <g.smolkin@postgrespro.ru> wrote in\n>>>>>> On 11/6/19 1:55 PM, Grigory Smolkin wrote:\n>>>>>>> On 11/6/19 12:56 PM, Fujii Masao wrote:\n>>>>>>>> What happens if this parameter is set to latest in the standby \n>>>>>>>> mode?\n>>>>>>>> Or the combination of those settings should be prohibited?\n>>>>>>> Currently it will behave just like regular standby, so I think, to\n>>>>>>> avoid confusion and keep things simple, the combination of them \n>>>>>>> should\n>>>>>>> be prohibited.\n>>>>>>> Thank you for pointing this out, I will work on it.\n>>>>>> Attached new patch revision, now it is impossible to use\n>>>>>> recovery_target 'latest' in standby mode.\n>>>>>> TAP test is updated to reflect this behavior.\n>>>>> In the first place, latest (or anything it could be named as) is\n>>>>> defined as the explit label for the default behavior. Thus the latest\n>>>>> should work as if nothing is set to recovery_target* following the\n>>>>> definition. That might seems somewhat strange but I think at \n>>>>> least it\n>>>>> is harmless.\n>>>>\n>>>> Well, it was more about getting default behavior by using some\n>>>> explicit recovery_target, not the other way around. Because it will\n>>>> break some 3rd party backup and replication applications, that may\n>>>> rely on old behavior of ignoring recovery_target_action when no\n>>>> recovery_target is provided.\n>>>> But you think that it is worth pursuing, I can do that.\n>>> Ah. Sorry for the misleading statement. What I had in my mind was\n>>> somewhat the mixture of them. 
I thought that recovery_target =''\n>>> behaves the same way as now, r_t_action is ignored. And 'latest' just\n>>> makes recovery_target_action work as the current non-empty\n>>> recovery_target's does. But I'm not confident that it is a good\n>>> design.\n>>>\n>>>>> recovery_target=immediate + r_t_action=shutdown for a standby \n>>>>> works as\n>>>>> commanded. Do we need to inhibit that, too?\n>>>> Why something, that work as expected, should be inhibited?\n>>> To make sure, I don't think we should do that. I meant by the above\n>>> that standby mode is already accepting recovery_target_action so\n>>> inhibiting that only for 'latest' is not orthogonal and could be more\n>>> confusing for users, and complicatig the code. So my opinion is we\n>>> shouldn't inhibit 'latest' unless r_t_action harms.\n>>\n>> I gave it some thought and now think that prohibiting recovery_target \n>> 'latest' and standby was a bad idea.\n>> All recovery_targets follow the same pattern of usage, so \n>> recovery_target \"latest\" also must be capable of working in standby \n>> mode.\n>> All recovery_targets have a clear deterministic 'target' where \n>> recovery should end.\n>> In case of recovery_target \"latest\" this target is the end of \n>> available WAL, therefore the end of available WAL must be more \n>> clearly defined.\n>> I will work on it.\n>>\n>> Thank you for a feedback.\n>\n>\n> Attached new patch revision, now end of available WAL is defined as \n> the fact of missing required WAL.\n> In case of standby, the end of WAL is defined as 2 consecutive \n> switches of WAL source, that didn`t provided requested record.\n> In case of streaming standby, each switch of WAL source is forced \n> after 3 failed attempts to get new data from walreceiver.\n>\n> All constants are taken off the top of my head and serves as proof of \n> concept.\n> TAP test is updated.\n>\nAttached new revision, it contains some minor refactoring.\n\nWhile working on it, I stumbled upon something 
strange:\n\nWhy is DisownLatch(&XLogCtl->recoveryWakeupLatch) called before \nReadRecord(xlogreader, LastRec, PANIC, false)?\n\nCouldn't this latch be accessed in WaitForWALToBecomeAvailable() if a \nstreaming standby gets promoted?\n\n\n>\n>\n>>\n>>\n>>>\n>>> regards.\n>>>\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 8 Nov 2019 16:08:47 +0300",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "At Fri, 8 Nov 2019 16:08:47 +0300, Grigory Smolkin <g.smolkin@postgrespro.ru> wrote in \n> While working on it, I stumbled upon something strange:\n> \n> why DisownLatch(&XLogCtl->recoveryWakeupLatch) is called before\n> ReadRecord(xlogreader, LastRec, PANIC, false) ?\n> Isn`t this latch may be accessed in WaitForWALToBecomeAvailable() if\n> streaming standby gets promoted?\n\nThe DisownLatch is just for the sake of tidiness and can be placed\nanywhere after the ShutdownWalRcv() call but the current place (just\nbefore \"StandbyMode = false\") seems natural. The ReadRecord call\ndoesn't launch another wal receiver because we cleard StandbyMode just\nbefore.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 13 Nov 2019 11:51:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "At Fri, 8 Nov 2019 16:08:47 +0300, Grigory Smolkin <g.smolkin@postgrespro.ru> wrote in \n> \n> On 11/8/19 7:00 AM, Grigory Smolkin wrote:\n> >\n> > On 11/7/19 4:36 PM, Grigory Smolkin wrote:\n> >> I gave it some thought and now think that prohibiting recovery_target\n> >> 'latest' and standby was a bad idea.\n> >> All recovery_targets follow the same pattern of usage, so\n> >> recovery_target \"latest\" also must be capable of working in standby\n> >> mode.\n> >> All recovery_targets have a clear deterministic 'target' where\n> >> recovery should end.\n> >> In case of recovery_target \"latest\" this target is the end of\n> >> available WAL, therefore the end of available WAL must be more clearly\n> >> defined.\n> >> I will work on it.\n> >>\n> >> Thank you for a feedback.\n> >\n> >\n> > Attached new patch revision, now end of available WAL is defined as\n> > the fact of missing required WAL.\n> > In case of standby, the end of WAL is defined as 2 consecutive\n> > switches of WAL source, that didn`t provided requested record.\n> > In case of streaming standby, each switch of WAL source is forced\n> > after 3 failed attempts to get new data from walreceiver.\n> >\n> > All constants are taken off the top of my head and serves as proof of\n> > concept.\n> > TAP test is updated.\n> >\n> Attached new revision, it contains some minor refactoring.\n\nThanks for the new patch. I found that it needs more than I thought,\nbut it seems a bit too complicated and less stable.\n\nAs the patch does, WaitForWALToBecomeAvailable needs to exit when\navaiable sources are exhausted. However, we don't need to count\nfailures to do that. It is enough that the function have two more exit\npoint. When streaming timeout fires, and when we found that streaming\nis not set up when archive/wal failed.\n\nIn my opinion, it is better that we have less dependency to global\nvariables in such deep levels in a call hierachy. 
Such information can\nbe stored in XLogPageReadPrivate.\n\nI think the doc needs to explain the difference between default\nand latest.\n\nPlease find the attached, which illustrates the first two points\nabove.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 13 Nov 2019 16:55:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "On 2019-11-08 05:00, Grigory Smolkin wrote:\n> Attached new patch revision, now end of available WAL is defined as the\n> fact of missing required WAL.\n> In case of standby, the end of WAL is defined as 2 consecutive switches\n> of WAL source, that didn`t provided requested record.\n> In case of streaming standby, each switch of WAL source is forced after\n> 3 failed attempts to get new data from walreceiver.\n> \n> All constants are taken off the top of my head and serves as proof of\n> concept.\n\nWell, this is now changing the meaning of the patch quite a bit. I'm on \nboard with making the existing default behavior explicit. (This is \nsimilar to how we added recovery_target_timeline = 'current' in PG12.) \nStill not a fan of the name yet, but that's trivial.\n\nIf, however, you want to change the default behavior or introduce a new \nbehavior, as you are suggesting here, that should be a separate discussion.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 21 Nov 2019 11:46:57 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "On 11/21/19 1:46 PM, Peter Eisentraut wrote:\n> On 2019-11-08 05:00, Grigory Smolkin wrote:\n>> Attached new patch revision, now end of available WAL is defined as the\n>> fact of missing required WAL.\n>> In case of standby, the end of WAL is defined as 2 consecutive switches\n>> of WAL source, that didn`t provided requested record.\n>> In case of streaming standby, each switch of WAL source is forced after\n>> 3 failed attempts to get new data from walreceiver.\n>>\n>> All constants are taken off the top of my head and serves as proof of\n>> concept.\n>\n> Well, this is now changing the meaning of the patch quite a bit. I'm \n> on board with making the existing default behavior explicit. (This is \n> similar to how we added recovery_target_timeline = 'current' in PG12.) \n> Still not a fan of the name yet, but that's trivial.\n>\n> If, however, you want to change the default behavior or introduce a \n> new behavior, as you are suggesting here, that should be a separate \n> discussion.\n\nNo, default behavior is not to be changed. 
As I previously mentioned, it \nmay break the backward compatibility for 3rd party backup and \nreplication applications.\n\n> At Fri, 8 Nov 2019 16:08:47 +0300, Grigory Smolkin<g.smolkin@postgrespro.ru> wrote in\n>> On 11/8/19 7:00 AM, Grigory Smolkin wrote:\n>>> On 11/7/19 4:36 PM, Grigory Smolkin wrote:\n>>>> I gave it some thought and now think that prohibiting recovery_target\n>>>> 'latest' and standby was a bad idea.\n>>>> All recovery_targets follow the same pattern of usage, so\n>>>> recovery_target \"latest\" also must be capable of working in standby\n>>>> mode.\n>>>> All recovery_targets have a clear deterministic 'target' where\n>>>> recovery should end.\n>>>> In case of recovery_target \"latest\" this target is the end of\n>>>> available WAL, therefore the end of available WAL must be more clearly\n>>>> defined.\n>>>> I will work on it.\n>>>>\n>>>> Thank you for a feedback.\n>>> Attached new patch revision, now end of available WAL is defined as\n>>> the fact of missing required WAL.\n>>> In case of standby, the end of WAL is defined as 2 consecutive\n>>> switches of WAL source, that didn`t provided requested record.\n>>> In case of streaming standby, each switch of WAL source is forced\n>>> after 3 failed attempts to get new data from walreceiver.\n>>>\n>>> All constants are taken off the top of my head and serves as proof of\n>>> concept.\n>>> TAP test is updated.\n>>>\n>> Attached new revision, it contains some minor refactoring.\n> Thanks for the new patch. I found that it needs more than I thought,\n> but it seems a bit too complicated and less stable.\n>\n> As the patch does, WaitForWALToBecomeAvailable needs to exit when\n> avaiable sources are exhausted. However, we don't need to count\n> failures to do that. It is enough that the function have two more exit\n> point. 
When streaming timeout fires, and when we found that streaming\n> is not set up when archive/wal failed.\n>\n> In my opinion, it is better that we have less dependency to global\n> variables in such deep levels in a call hierachy. Such nformation can\n> be stored in XLogPageReadPrivate.\n\nMany thanks!\nIt looks much better than my approach with global variables.\n\nI've tested it and have some thoughts/concerns:\n\n1. Recovery should report the exact reason why it has been forced to \nstop. In case of recovering to the end of WAL, a standby promotion request \nreceived during recovery could be mistaken for reaching the end of WAL \nand reported as such. To avoid this, I think that a reachedEndOfWal \nvariable should be introduced.\n\nIn the attached patch it is added as a global variable, but maybe something \nmore clever may be devised. I was not sure that reachedEndOfWal could be \nplaced in XLogPageReadPrivate, because we need to access it at a \nhigher level than ReadRecord(), and I was under the impression that placing it in \nXLogPageReadPrivate could violate the abstraction level of XLogPageReadPrivate.\n\n2. 
During the testing, I`ve stumbled upon an assertion failure in case of \nrecovering in standby mode to the end of WAL coupled with \nrecovery_target_action as \"promote\", caused by the WAL source in the state \nmachine not being changed after reaching the recovery target (script to \nreproduce is attached):\n\n2019-12-07 13:45:54.468 MSK [23067] LOG: starting PostgreSQL 13devel on \nx86_64-pc-linux-gnu, compiled by gcc (GCC) 9.2.1 20190827 (Red Hat \n9.2.1-1), 64-bit\n2019-12-07 13:45:54.468 MSK [23067] LOG: listening on IPv4 address \n\"127.0.0.1\", port 15433\n2019-12-07 13:45:54.470 MSK [23067] LOG: listening on Unix socket \n\"/tmp/.s.PGSQL.15433\"\n2019-12-07 13:45:54.475 MSK [23068] LOG: database system was \ninterrupted; last known up at 2019-12-07 13:45:49 MSK\ncp: cannot stat '/home/gsmol/task/13_devel/archive/00000002.history': No \nsuch file or directory\n2019-12-07 13:45:54.602 MSK [23068] LOG: entering standby mode\n2019-12-07 13:45:54.614 MSK [23068] LOG: restored log file \n\"000000010000000000000002\" from archive\n2019-12-07 13:45:54.679 MSK [23068] LOG: redo starts at 0/2000028\n2019-12-07 13:45:54.682 MSK [23068] LOG: consistent recovery state \nreached at 0/2000100\n2019-12-07 13:45:54.682 MSK [23067] LOG: database system is ready to \naccept read only connections\n2019-12-07 13:45:54.711 MSK [23068] LOG: restored log file \n\"000000010000000000000003\" from archive\n2019-12-07 13:45:54.891 MSK [23068] LOG: restored log file \n\"000000010000000000000004\" from archive\n2019-12-07 13:45:55.046 MSK [23068] LOG: restored log file \n\"000000010000000000000005\" from archive\n2019-12-07 13:45:55.210 MSK [23068] LOG: restored log file \n\"000000010000000000000006\" from archive\n2019-12-07 13:45:55.377 MSK [23068] LOG: restored log file \n\"000000010000000000000007\" from archive\n2019-12-07 13:45:55.566 MSK [23068] LOG: restored log file \n\"000000010000000000000008\" from archive\n2019-12-07 13:45:55.737 MSK [23068] LOG: restored log file 
\n\"000000010000000000000009\" from archive\ncp: cannot stat \n'/home/gsmol/task/13_devel/archive/00000001000000000000000A': No such \nfile or directory\n2019-12-07 13:45:56.233 MSK [23083] LOG: started streaming WAL from \nprimary at 0/A000000 on timeline 1\n2019-12-07 13:45:56.365 MSK [23068] LOG: recovery stopping after \nreaching the end of available WAL\n2019-12-07 13:45:56.365 MSK [23068] LOG: redo done at 0/9FFC670\n2019-12-07 13:45:56.365 MSK [23068] LOG: last completed transaction was \nat log time 2019-12-07 13:45:53.627746+03\n2019-12-07 13:45:56.365 MSK [23083] FATAL: terminating walreceiver \nprocess due to administrator command\nTRAP: FailedAssertion(\"StandbyMode\", File: \"xlog.c\", Line: 12032)\npostgres: startup waiting for \n00000001000000000000000A(ExceptionalCondition+0xa8)[0xa88b55]\npostgres: startup waiting for 00000001000000000000000A[0x573417]\npostgres: startup waiting for 00000001000000000000000A[0x572b68]\npostgres: startup waiting for 00000001000000000000000A[0x579066]\npostgres: startup waiting for \n00000001000000000000000A(XLogReadRecord+0xe3)[0x5788ac]\npostgres: startup waiting for 00000001000000000000000A[0x5651f8]\npostgres: startup waiting for \n00000001000000000000000A(StartupXLOG+0x23aa)[0x56b26e]\npostgres: startup waiting for \n00000001000000000000000A(StartupProcessMain+0xc7)[0x8642a1]\npostgres: startup waiting for \n00000001000000000000000A(AuxiliaryProcessMain+0x5b8)[0x5802ad]\npostgres: startup waiting for 00000001000000000000000A[0x863175]\npostgres: startup waiting for \n00000001000000000000000A(PostmasterMain+0x1214)[0x85e0a4]\npostgres: startup waiting for 00000001000000000000000A[0x76f247]\n/lib64/libc.so.6(__libc_start_main+0xf3)[0x7f8867958f33]\npostgres: startup waiting for \n00000001000000000000000A(_start+0x2e)[0x47afee]\n\n#0 0x00007f886796ce75 in raise () from /lib64/libc.so.6\n#1 0x00007f8867957895 in abort () from /lib64/libc.so.6\n#2 0x0000000000a88b82 in ExceptionalCondition (conditionName=0xb24acc 
\n\"StandbyMode\", errorType=0xb208a7 \"FailedAssertion\",\n fileName=0xb208a0 \"xlog.c\", lineNumber=12032) at assert.c:67\n#3 0x0000000000573417 in WaitForWALToBecomeAvailable (RecPtr=151003136, \nrandAccess=true, fetching_ckpt=false, tliRecPtr=167757424,\n return_on_eow=true) at xlog.c:12032\n#4 0x0000000000572b68 in XLogPageRead (xlogreader=0xf08ed8, \ntargetPagePtr=150994944, reqLen=8192, targetRecPtr=167757424,\n readBuf=0xf37d50 \"\\002\\321\\005\") at xlog.c:11611\n#5 0x0000000000579066 in ReadPageInternal (state=0xf08ed8, \npageptr=167755776, reqLen=1672) at xlogreader.c:579\n#6 0x00000000005788ac in XLogReadRecord (state=0xf08ed8, \nRecPtr=167757424, errormsg=0x7fff1f5cdeb8) at xlogreader.c:300\n#7 0x00000000005651f8 in ReadRecord (xlogreader=0xf08ed8, \nRecPtr=167757424, emode=22, fetching_ckpt=false) at xlog.c:4271\n#8 0x000000000056b26e in StartupXLOG () at xlog.c:7373\n#9 0x00000000008642a1 in StartupProcessMain () at startup.c:196\n#10 0x00000000005802ad in AuxiliaryProcessMain (argc=2, \nargv=0x7fff1f5cea80) at bootstrap.c:451\n#11 0x0000000000863175 in StartChildProcess (type=StartupProcess) at \npostmaster.c:5461\n#12 0x000000000085e0a4 in PostmasterMain (argc=3, argv=0xf082b0) at \npostmaster.c:1392\n#13 0x000000000076f247 in main (argc=3, argv=0xf082b0) at main.c:210\n(gdb) frame 8\n#8 0x000000000056b26e in StartupXLOG () at xlog.c:7373\n7373 record = ReadRecord(xlogreader, LastRec, PANIC, false);\n(gdb) p StandbyMode\n$1 = false\n(gdb) list\n7368\n7369 /*\n7370 * Re-fetch the last valid or last applied record, so we \ncan identify the\n7371 * exact endpoint of what we consider the valid portion \nof WAL.\n7372 */\n7373 record = ReadRecord(xlogreader, LastRec, PANIC, false);\n7374 EndOfLog = EndRecPtr;\n7375\n7376 /*\n7377 * EndOfLogTLI is the TLI in the filename of the XLOG \nsegment containing\n\n\nBoth issues are fixed in the new patch version.\nAny review and thoughts on the matters would be much appreciated.\n\n\n>\n> I think The doc needs 
to exiplain on the difference between default\n> and latest.\nSure, I will work on it.\n>\n> Please find the attached, which illustrates the first two points of\n> the aboves.\n>\n> regards.\n\n\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 8 Dec 2019 04:03:01 +0300",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "Thanks for the new version.\n\nAt Sun, 8 Dec 2019 04:03:01 +0300, Grigory Smolkin <g.smolkin@postgrespro.ru> wrote in \n> On 11/21/19 1:46 PM, Peter Eisentraut wrote:\n> > On 2019-11-08 05:00, Grigory Smolkin wrote:\n> I`ve tested it and have some thoughts/concerns:\n> \n> 1. Recovery should report the exact reason why it has been forced to\n> stop. In case of recovering to the end of WAL, standby promotion\n> request received during recovery could be mistaken for reaching the\n> end of WAL and reported as such. To avoid this, I think that\n> reachedEndOfWal variable should be introduced.\n>\n> In attached patch it is added as a global variable, but maybe\n> something more clever may be devised. I was not sure that\n> reachedEndOfWal could be placed in XLogPageReadPrivate. Because we\n> need to access it at the higher level than ReadRecord(), and I was\n> under impression placing it in XLogPageReadPrivate could violate\n> abstraction level of XLogPageReadPrivate.\n\nCheckForStandbyTrigger() always returns true once the trigger is\npulled. We don't care whether end-of-WAL is reached if promote is\nalready triggered. Thus, we can tell the promote case by asking\nCheckForStandbyTrigger() when we exit the redo main loop with\nrecoveryTarget = RECOVERY_TARGET_LATEST. Does this work as you expect?\n\n> 2. 
During the testing, I`ve stumbled upon assertion failure in case of\n> recovering in standby mode to the the end of WAL coupled with\n> recovery_target_action as \"promote\", caused by the WAL source in state\n> machine not been changed after reaching the recovery target (script to\n> reproduce is attached):\n...\n> TRAP: FailedAssertion(\"StandbyMode\", File: \"xlog.c\", Line: 12032)\n...\n> #2 0x0000000000a88b82 in ExceptionalCondition (conditionName=0xb24acc\n> #\"StandbyMode\", errorType=0xb208a7 \"FailedAssertion\",\n> fileName=0xb208a0 \"xlog.c\", lineNumber=12032) at assert.c:67\n> #3 0x0000000000573417 in WaitForWALToBecomeAvailable\n> #(RecPtr=151003136, randAccess=true, fetching_ckpt=false,\n> #tliRecPtr=167757424,\n> return_on_eow=true) at xlog.c:12032\n...\n> #7 0x00000000005651f8 in ReadRecord (xlogreader=0xf08ed8,\n> #RecPtr=167757424, emode=22, fetching_ckpt=false) at xlog.c:4271\n..\n\nReadRecord is called with currentSource=STREAM after StandbyMode was\nturned off. I suppose the fix means the \"currentSource =\nXLOG_FROM_PG_WAL\" line, but I don't think that is the right way.\n\nStreaming timeout means failure when return_on_eow is true. Thus the\nright thing to do there is setting lastSourceFailed to true. The first\nhalf of WaitForWALToBecomeAvailable handles failure of the current\nsource, thus source transition happens only there. The second half just\nreports failure to the first half.\n\n> Both issues are fixed in the new patch version.\n> Any review and thoughts on the matters would be much appreciated.\n> \n> \n> >\n> > I think The doc needs to exiplain on the difference between default\n> > and latest.\n> Sure, I will work on it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 10 Dec 2019 15:39:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
},
{
"msg_contents": "Hi,\n\nthis patch was waiting on author without any update/response since early\nDecember, so I've marked it as returned with feedback. Feel free to\nre-submit an updated version to a future CF.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 1 Feb 2020 12:44:11 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [proposal] recovery_target \"latest\""
}
] |
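The end-of-available-WAL heuristic described in the recovery_target "latest" thread above (recovery gives up once two consecutive WAL-source switches fail to produce the requested record) can be sketched roughly as follows. This is a hedged Python illustration of the proof-of-concept logic, not PostgreSQL's C code: the source names, the `fetch` callback, and the switch limit of 2 are taken from the thread's own description, where the author notes the constants were "taken off the top of my head".

```python
# Hedged sketch of the end-of-WAL detection discussed in the thread:
# recovery concludes it has reached the end of available WAL once two
# consecutive WAL-source switches fail to produce the requested record.
# Source names and the switch limit are illustrative assumptions.

WAL_SOURCES = ("archive", "pg_wal", "stream")

def reached_end_of_wal(fetch, max_failed_switches=2):
    """fetch(source) returns a record or None. Return True once
    max_failed_switches consecutive sources produced nothing."""
    failed_switches = 0
    source_idx = 0
    while failed_switches < max_failed_switches:
        source = WAL_SOURCES[source_idx % len(WAL_SOURCES)]
        if fetch(source) is not None:
            return False          # got a record, recovery continues
        failed_switches += 1      # this source had nothing: switch
        source_idx += 1
    return True                   # fallback sources exhausted
```

With a callback that never returns data, the sketch reports end-of-WAL after two sources come up empty; as soon as any source yields a record, the failure counter would reset in a real loop and recovery continues.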
[
{
"msg_contents": "It's time to start the next commitfest. I seem to recall somebody\nsaying back in September that they'd run the next one, but I forget\nwho. Anyway, we need a volunteer to be chief nagger.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Nov 2019 10:54:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Do we have a CF manager for November?"
},
{
"msg_contents": "On Mon, Nov 04, 2019 at 10:54:52AM -0500, Tom Lane wrote:\n> It's time to start the next commitfest. I seem to recall somebody\n> saying back in September that they'd run the next one, but I forget\n> who. Anyway, we need a volunteer to be chief nagger.\n\nThat may have been me. I can take this one if there is nobody else\naround.\n\nNote: I have switched the app as in progress a couple of days ago,\nafter AoE was on the 1st of November of course.\n--\nMichael",
"msg_date": "Tue, 5 Nov 2019 11:18:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Do we have a CF manager for November?"
},
{
"msg_contents": "Hi Michael,\n\n\nOn Tue, Nov 5, 2019 at 7:18 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Nov 04, 2019 at 10:54:52AM -0500, Tom Lane wrote:\n> > It's time to start the next commitfest. I seem to recall somebody\n> > saying back in September that they'd run the next one, but I forget\n> > who. Anyway, we need a volunteer to be chief nagger.\n>\n> That may have been me. I can take this one if there is nobody else\n> around.\n>\n> I am ready to help you with this.\n\n\n> Note: I have switched the app as in progress a couple of days ago,\n> after AoE was on the 1st of November of course.\n> --\n> Michael\n>\n\n\n-- \nIbrar Ahmed\n\nHi Michael,On Tue, Nov 5, 2019 at 7:18 AM Michael Paquier <michael@paquier.xyz> wrote:On Mon, Nov 04, 2019 at 10:54:52AM -0500, Tom Lane wrote:\n> It's time to start the next commitfest. I seem to recall somebody\n> saying back in September that they'd run the next one, but I forget\n> who. Anyway, we need a volunteer to be chief nagger.\n\nThat may have been me. I can take this one if there is nobody else\naround.\nI am ready to help you with this. \nNote: I have switched the app as in progress a couple of days ago,\nafter AoE was on the 1st of November of course.\n--\nMichael\n-- Ibrar Ahmed",
"msg_date": "Tue, 5 Nov 2019 20:50:54 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we have a CF manager for November?"
},
{
"msg_contents": "On Tue, Nov 05, 2019 at 08:50:54PM +0500, Ibrar Ahmed wrote:\n> On Tue, Nov 5, 2019 at 7:18 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> That may have been me. I can take this one if there is nobody else\n>> around.\n\nOkay, so it is. I have begun browsing the patch history, and we have\na loooot of work ahead.\n\n> I am ready to help you with this.\n\nAny help is welcome, thanks.\n--\nMichael",
"msg_date": "Thu, 7 Nov 2019 17:11:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Do we have a CF manager for November?"
},
{
"msg_contents": "On Tue, Nov 05, 2019 at 11:18:00AM +0900, Michael Paquier wrote:\n> That may have been me. I can take this one if there is nobody else\n> around.\n> \n> Note: I have switched the app as in progress a couple of days ago,\n> after AoE was on the 1st of November of course.\n\nSo, we are close to the end of this commit fest, and I have done a\nfirst pass on something like one third of the entries, mainly updating\nincorrect patch status, bumping them into next CF or closing stale\nitems waiting on author. As things stand, the progress is not that\ngood considering the total number of patches:\nNeeds review: 109.\nReady for Committer: 7.\nCommitted: 36.\nMoved to next CF: 40.\nWithdrawn: 4.\nRejected: 6.\nReturned with Feedback: 19.\nTotal: 221. \n\nSo we still have a lot of patches in need of review, and most of them\nwill likely get bumped to the next CF. The not-so-good news is that\nnumbers tend to be comparable with the last CF. The actually bad\nnews is that, based on what I have looked at until now, I have noticed\nthat a certain number of patches had an incorrect status for *weeks*,\nso it is actually necessary to go through each patch to make sure that\nthings are in a correct state in the CF app, which increases the\nclassification work burden quite a bit.\n\nWhen looking at a patch, I usually try to use the following rules for\nclassification in a current CF:\n- If patch is in \"Needs Review\" state, bump it to the next CF.\n- If patch is in \"Waiting on Author\", with a state not updated for at\nleast two weeks, mark it as returned with feedback.\n- If patch is in \"Waiting on Author\", with a state updated recently\n(aka within two weeks), bump it to next CF with the same state. This\nrequires more manual steps than the rest as the CF app does not allow\nmoving a patch to next CF waiting on author, but it's not right\neither to bump out a patch without giving the author time to answer\nback. 
Using half of the CF period as the cutoff seems about right for that.\n\nIf you are registered as a patch author, reviewer or even committer,\nit would be nice to look at each item you are involved in, and then\nmake sure that the patch you are looking at is in a correct state to\nprevent any errors. If you can move it to next CF or mark it as RwF\nby yourself, this also saves cycles for the CFMs. (Thanks, Ibrar, I\nhave noticed your activity!)\n\nThanks,\n--\nMichael",
"msg_date": "Thu, 28 Nov 2019 16:02:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Do we have a CF manager for November?"
},
{
"msg_contents": "On Thu, Nov 28, 2019 at 04:02:55PM +0900, Michael Paquier wrote:\n> So, we are close to the end of this commit fest, and I have done a\n> first pass on something like one third of the entries, mainly updating\n> incorrect patch status, bumping them into next CF or closing stale\n> items waiting on author.\n\nI have worked more on the CF. And here are the final results:\nCommitted: 36.\nMoved to next CF: 146.\nWithdrawn: 4.\nRejected: 6.\nReturned with Feedback: 29.\nTotal: 221. \n\nThe results are very comparable with the last CF, where 39 were marked\nas committed and 28 as returned with feedback. I know that we are\nstill a couple of hours before officially being in December AoE\n(exactly 9), so I am a bit ahead. My apologies about that.\n\nI have been able to go through each patch, and pinged each author\nwhere needed. By the way, the cfbot has been extremely helpful in\nthis exercise: \nhttp://commitfest.cputube.org/\n\nThanks,\n--\nMichael",
"msg_date": "Sun, 1 Dec 2019 12:53:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Do we have a CF manager for November?"
},
{
"msg_contents": "On Sun, Dec 1, 2019 at 9:23 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Nov 28, 2019 at 04:02:55PM +0900, Michael Paquier wrote:\n> > So, we are close to the end of this commit fest, and I have done a\n> > first pass on something like one third of the entries, mainly updating\n> > incorrect patch status, bumping them into next CF or closing stale\n> > items waiting on author.\n>\n> I have worked more on the CF. And here are the final results:\n> Committed: 36.\n> Moved to next CF: 146.\n> Withdrawn: 4.\n> Rejected: 6.\n> Returned with Feedback: 29.\n> Total: 221.\n>\n> The results are very comparable with the last CF, where 39 were marked\n> as committed and 28 as returned with feedback. I know that we are\n> still a couple of hours before officially being in December AoE\n> (exactly 9), so I am a bit ahead. My apologies about that.\n>\nI have been able to go through each patch, and pinged each author\n> where needed.\n\n\nThank you for your efforts.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Sun, Dec 1, 2019 at 9:23 AM Michael Paquier <michael@paquier.xyz> wrote:On Thu, Nov 28, 2019 at 04:02:55PM +0900, Michael Paquier wrote:\n> So, we are close to the end of this commit fest, and I have done a\n> first pass on something like one third of the entries, mainly updating\n> incorrect patch status, bumping them into next CF or closing stale\n> items waiting on author.\n\nI have worked more on the CF. And here are the final results:\nCommitted: 36.\nMoved to next CF: 146.\nWithdrawn: 4.\nRejected: 6.\nReturned with Feedback: 29.\nTotal: 221. \n\nThe results are very comparable with the last CF, where 39 were marked\nas committed and 28 as returned with feedback. I know that we are\nstill a couple of hours before officially being in December AoE\n(exactly 9), so I am a bit ahead. My apologies about that. 
\nI have been able to go through each patch, and pinged each author\nwhere needed.Thank you for your efforts.-- With Regards,Amit Kapila.EnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sun, 1 Dec 2019 09:32:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we have a CF manager for November?"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I have worked more on the CF. And here are the final results:\n> Committed: 36.\n> Moved to next CF: 146.\n> Withdrawn: 4.\n> Rejected: 6.\n> Returned with Feedback: 29.\n> Total: 221. \n\nAs always, many thanks for doing this tedious work!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Dec 2019 10:31:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Do we have a CF manager for November?"
}
] |
[
{
"msg_contents": "Hi!\n\nThread [1] about support for the .datetime() jsonpath method raises a\nquestion about standard-conforming parsing for the Y, YY, YYY and RR\ndatetime template patterns.\n\nAccording to the standard, YYY, YY and Y should get the higher digits from\nthe current year. Our current implementation gets the higher digits so that\nthe result is closest to 2020.\n\nWe currently don't support RR. According to the standard, RR behavior is\nimplementation-defined and should select the matching 4-digit year in the\ninterval [CY - 100; CY + 100], where CY is the current year. So, our\ncurrent implementation of YY is more like RR according to the standard.\n\nThe open questions are:\n1) Do we want to make our datetime parsing depend on the current\ntimestamp? I guess not. But how do we parse a one-digit year then? If we\nhardcode a constant, it would become outdated within a decade. Thankfully,\nno one in their right mind would use the Y pattern, but still.\n2) How do we want to parse RR? The standard leaves us a lot of freedom\nhere. Do we want to parse it as we parse YY now? It looks\nreasonable to select the closest matching year. Since PG 13 is going to\nbe released in 2020, our algorithm would be a perfect fit at release\ntime.\n3) Do we want to change the behavior of to_date()/to_timestamp()? Or just\njsonpath .datetime() and the future CAST(... AS ... FORMAT ...) defined in\nSQL 2016?\n\nThe attached patch resolves the questions above as follows. The YYY, YY and Y\npatterns get the higher digits from 2020, so results for Y would become\ninconsistent from 2030 on. RR selects the matching year closest to 2020, as\nYY does now. It changes the behavior of both\nto_date()/to_timestamp() and jsonpath .datetime().\n\nAny thoughts?\n\nLinks\n1. https://www.postgresql.org/message-id/CAPpHfdsZgYEra_PeCLGNoXOWYx6iU-S3wF8aX0ObQUcZU%2B4XTw%40mail.gmail.com\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 5 Nov 2019 04:45:43 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Standard-conforming datetime years parsing"
},
{
"msg_contents": "On 05/11/2019 02:45, Alexander Korotkov wrote:\n> 3) Do we like to change behavior to_date()/to_timestamp()? Or just\n> jsonpath .datetime() and future CAST(... AS ... FORMAT ...) defined in\n> SQL 2016?\n\n\nI don't want to hijack this thread, but I would like the CAST feature to\ncall to_timestamp() and to_char(), even if they aren't 100% standard\ncompliant today.\n\n\nI see a new column on pg_cast where users can define the function to do\nthe cast with format.\n\n-- \n\nVik\n\n\n\n",
"msg_date": "Tue, 5 Nov 2019 19:32:09 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Standard-conforming datetime years parsing"
}
] |
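The year-completion rule debated in the thread above (completing a Y/YY/YYY or RR field to the 4-digit year closest to 2020) can be sketched as follows. This is a minimal Python illustration of the rule as described in the email, not PostgreSQL's formatting.c code; the pivot value of 2020 comes from the thread, while the tie-breaking toward the lower candidate (e.g. for a YY field of 70) is an assumption of this sketch.

```python
# Hedged sketch of the "closest to 2020" completion rule discussed in
# the thread; the tie-break toward the lower candidate is an assumption,
# not necessarily the patch's behavior.

def complete_year(low_digits, ndigits, pivot=2020):
    """Complete an N-digit year field (Y, YY, YYY, or RR) to the
    4-digit year whose low N digits match low_digits and which lies
    closest to the pivot year."""
    base = 10 ** ndigits
    # Candidate sharing the pivot's higher digits, plus the neighbours
    # one "block" of 10**N years below and above it.
    candidate = pivot - pivot % base + low_digits
    return min((candidate - base, candidate, candidate + base),
               key=lambda year: (abs(year - pivot), year))
```

For example, a two-digit field of 45 completes to 2045 (25 years from 2020) rather than 1945 (75 years away), while 95 completes to 1995, which matches the behavior the message describes for RR and the current YY implementation.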
[
{
"msg_contents": "Dear Hackers.\n\nThe value of ssl_passphrase_command is set so that an external command\nis called to obtain the passphrase for decrypting an SSL file such as a\nprivate key.\nIt can therefore easily be set to something like echo \"passphrase\", or to a\ncall to another passphrase-fetching application.\n\nI think this GUC value doesn't usually contain very sensitive data,\nbut just in case, it is dangerous for it to be visible to all users.\nThese cases may be unlikely, but if echo or another external command is\nused, any user can learn which application is used to get the password,\nand we cannot be sure that such applications are safe from abuse by\nsomeone tracing how they work.\nSo I think only superusers, or users with the default role\npg_read_all_settings, should be able to see this value.\n\nThe patch is very simple.\nWhat do you think about this?\n\nBest regards.\nMoon.",
"msg_date": "Tue, 5 Nov 2019 17:14:41 +0900",
"msg_from": "\"Moon, Insung\" <tsukiwamoon.pgsql@gmail.com>",
"msg_from_op": true,
"msg_subject": "Exposure related to GUC value of ssl_passphrase_command"
},
{
"msg_contents": "Hello.\n\nOn Tue, Nov 5, 2019 at 5:15 PM Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n> Deal Hackers.\n>\n> The value of ssl_passphrase_command is set so that an external command\n> is called when the passphrase for decrypting an SSL file such as a\n> private key is obtained.\n> Therefore, easily set to work with echo \"passphrase\" or call to\n> another get of passphrase application.\n>\n> I think that this GUC value doesn't contain very sensitive data,\n> but just in case, it's dangerous to be visible to all users.\n> I think do not possible these cases, but if a used echo external\n> commands or another external command, know what application used to\n> get the password, maybe we can't be convinced that there's the safety\n> of using abuse by backtracking on applications.\n> So I think to the need only superusers or users with the default role\n> of pg_read_all_settings should see these values.\n>\n> Patch is very simple.\n> How do you think about my thoughts like this?\n\nI'm hardly an expert on this topic, but reading this blog post about\nssl_passphrase_command:\n\nhttps://www.2ndquadrant.com/en/blog/postgresql-passphrase-protected-ssl-keys-systemd/\n\nwhich mentions that some users might go with the very naive\nconfiguration such as:\n\nssl_passphrase_command = 'echo \"secret\"'\n\nmaybe it makes sense to protect its value from everyone but superusers.\n\nSo +1.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 8 Nov 2019 16:24:10 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exposure related to GUC value of ssl_passphrase_command"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 4:24 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hello.\n>\n> On Tue, Nov 5, 2019 at 5:15 PM Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n> > Deal Hackers.\n> >\n> > The value of ssl_passphrase_command is set so that an external command\n> > is called when the passphrase for decrypting an SSL file such as a\n> > private key is obtained.\n> > Therefore, easily set to work with echo \"passphrase\" or call to\n> > another get of passphrase application.\n> >\n> > I think that this GUC value doesn't contain very sensitive data,\n> > but just in case, it's dangerous to be visible to all users.\n> > I think do not possible these cases, but if a used echo external\n> > commands or another external command, know what application used to\n> > get the password, maybe we can't be convinced that there's the safety\n> > of using abuse by backtracking on applications.\n> > So I think to the need only superusers or users with the default role\n> > of pg_read_all_settings should see these values.\n> >\n> > Patch is very simple.\n> > How do you think about my thoughts like this?\n>\n> I'm hardly an expert on this topic, but reading this blog post about\n> ssl_passphrase_command:\n>\n> https://www.2ndquadrant.com/en/blog/postgresql-passphrase-protected-ssl-keys-systemd/\n>\n> which mentions that some users might go with the very naive\n> configuration such as:\n>\n> ssl_passphrase_command = 'echo \"secret\"'\n>\n> maybe it makes sense to protect its value from everyone but superusers.\n>\n> So +1.\n\nSeems this proposal is reasonable.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 13 Feb 2020 02:37:29 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exposure related to GUC value of ssl_passphrase_command"
},
{
"msg_contents": "At Thu, 13 Feb 2020 02:37:29 +0900, Fujii Masao <masao.fujii@gmail.com> wrote in \n> On Fri, Nov 8, 2019 at 4:24 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > Hello.\n> >\n> > On Tue, Nov 5, 2019 at 5:15 PM Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n> > > Deal Hackers.\n> > >\n> > > The value of ssl_passphrase_command is set so that an external command\n> > > is called when the passphrase for decrypting an SSL file such as a\n> > > private key is obtained.\n> > > Therefore, easily set to work with echo \"passphrase\" or call to\n> > > another get of passphrase application.\n> > >\n> > > I think that this GUC value doesn't contain very sensitive data,\n> > > but just in case, it's dangerous to be visible to all users.\n> > > I think do not possible these cases, but if a used echo external\n> > > commands or another external command, know what application used to\n> > > get the password, maybe we can't be convinced that there's the safety\n> > > of using abuse by backtracking on applications.\n> > > So I think to the need only superusers or users with the default role\n> > > of pg_read_all_settings should see these values.\n> > >\n> > > Patch is very simple.\n> > > How do you think about my thoughts like this?\n> >\n> > I'm hardly an expert on this topic, but reading this blog post about\n> > ssl_passphrase_command:\n> >\n> > https://www.2ndquadrant.com/en/blog/postgresql-passphrase-protected-ssl-keys-systemd/\n> >\n> > which mentions that some users might go with the very naive\n> > configuration such as:\n> >\n> > ssl_passphrase_command = 'echo \"secret\"'\n> >\n> > maybe it makes sense to protect its value from everyone but superusers.\n> >\n> > So +1.\n> \n> Seems this proposal is reasonable.\n\nI think it is reasonable.\n\nBy the way, I'm not sure the criteria of setting a GUC variable as\nGUC_SUPERUSER_ONLY, but for example, ssl_max/min_protocol_version,\ndynamic_library_path, log_directory, 
krb_server_keyfile,\ndata_directory and config_file are GUC_SUPERUSER_ONLY. So, it seems to\nme very strange that ssl_*_file are not. Shouldn't we maybe mark them,\nand some of the other ssl_* settings, the same way?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 13 Feb 2020 11:28:05 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exposure related to GUC value of ssl_passphrase_command"
},
{
"msg_contents": "On Thu, Feb 13, 2020 at 11:28:05AM +0900, Kyotaro Horiguchi wrote:\n> I think it is reasonable.\n\nIndeed, that makes sense to me as well. I am adding Peter Eisentraut\nin CC as the author/committer of 8a3d942 to comment on that.\n\n> By the way, I'm not sure the criteria of setting a GUC variable as\n> GUC_SUPERUSER_ONLY, but for example, ssl_max/min_protocol_version,\n> dynamic_library_path, log_directory, krb_server_keyfile,\n> data_directory and config_file are GUC_SUPERUSER_ONLY. So, it seems to\n> me very strange that ssl_*_file are not. Don't we need to mark them\n> maybe and some of the other ssl_* as the same?\n\nThis should be a separate discussion IMO. Perhaps there is a point in\nsoftening or hardening some of them.\n--\nMichael",
"msg_date": "Thu, 13 Feb 2020 12:38:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Exposure related to GUC value of ssl_passphrase_command"
},
{
"msg_contents": "On 2020-02-13 04:38, Michael Paquier wrote:\n> On Thu, Feb 13, 2020 at 11:28:05AM +0900, Kyotaro Horiguchi wrote:\n>> I think it is reasonable.\n> \n> Indeed, that makes sense to me as well. I am adding Peter Eisentraut\n> in CC as the author/committer of 8a3d942 to comment on that.\n\nI'm OK with changing that.\n\n>> By the way, I'm not sure the criteria of setting a GUC variable as\n>> GUC_SUPERUSER_ONLY, but for example, ssl_max/min_protocol_version,\n>> dynamic_library_path, log_directory, krb_server_keyfile,\n>> data_directory and config_file are GUC_SUPERUSER_ONLY. So, it seems to\n>> me very strange that ssl_*_file are not. Don't we need to mark them\n>> maybe and some of the other ssl_* as the same?\n> \n> This should be a separate discussion IMO. Perhaps there is a point in\n> softening or hardening some of them.\n\nI think some of this makes sense, and we should have a discussion about it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 13 Feb 2020 10:11:07 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Exposure related to GUC value of ssl_passphrase_command"
},
{
"msg_contents": "Dear Hackers.\n\nThank you for the response.\nI registered this entry in the commitfest for 2020-03.\n# I registered it in the security section; if that is wrong, I sincerely\napologize.\n\nI'd also like to review the show privileges of the other ssl_* parameters later\nand discuss them in a separate thread.\n\nBest regards.\nMoon.\n\nOn Thu, Feb 13, 2020 at 6:11 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-02-13 04:38, Michael Paquier wrote:\n> > On Thu, Feb 13, 2020 at 11:28:05AM +0900, Kyotaro Horiguchi wrote:\n> >> I think it is reasonable.\n> >\n> > Indeed, that makes sense to me as well. I am adding Peter Eisentraut\n> > in CC as the author/committer of 8a3d942 to comment on that.\n>\n> I'm OK with changing that.\n>\n> >> By the way, I'm not sure the criteria of setting a GUC variable as\n> >> GUC_SUPERUSER_ONLY, but for example, ssl_max/min_protocol_version,\n> >> dynamic_library_path, log_directory, krb_server_keyfile,\n> >> data_directory and config_file are GUC_SUPERUSER_ONLY. So, it seems to\n> >> me very strange that ssl_*_file are not. Don't we need to mark them\n> >> maybe and some of the other ssl_* as the same?\n> >\n> > This should be a separate discussion IMO. Perhaps there is a point in\n> > softening or hardening some of them.\n>\n> I think some of this makes sense, and we should have a discussion about it.\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n\n\n",
"msg_date": "Fri, 14 Feb 2020 10:31:45 +0900",
"msg_from": "\"Moon, Insung\" <tsukiwamoon.pgsql@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposure related to GUC value of ssl_passphrase_command"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nI tested the patch on the master branch (addd034) and it works fine.\n\nI think that a test case checking that a non-superuser can't see this parameter is unnecessary.\nThere is already a similar test for the pg_read_all_settings role.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Fri, 06 Mar 2020 07:20:21 +0000",
"msg_from": "keisuke kuroda <keisuke.kuroda.3862@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exposure related to GUC value of ssl_passphrase_command"
},
{
"msg_contents": "\n\nOn 2020/02/14 10:31, Moon, Insung wrote:\n> Dear Hackers.\n> \n> Thank you for an response.\n> I registered this entry in commifest of 2020-03.\n> # I registered in the security part, but if it is wrong, sincerely\n> apologize for this.\n> \n> And I'd like to review show authority to ssl_ * later and discuss it\n> in a separate thread.\n\nSo, you are planning to start a new discussion about this?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Mon, 9 Mar 2020 11:43:20 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Exposure related to GUC value of ssl_passphrase_command"
},
{
"msg_contents": "\n\nOn 2020/03/06 16:20, keisuke kuroda wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: not tested\n> \n> I tested the patch on the master branch (addd034) and it works fine.\n> \n> I think that test case which non-superuser can't see this parameter is unnecessary.\n> There is a similar test for pg_read_all_settings role.\n> \n> The new status of this patch is: Ready for Committer\n\nPushed! Thanks!\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Mon, 9 Mar 2020 11:43:52 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Exposure related to GUC value of ssl_passphrase_command"
},
{
"msg_contents": "Dear Kuroda-san, Fujii-san,\nThank you for the review and commit!\n# Oops, sorry. This mail thread had been marked as spam in Gmail.\n\nI'll start a new discussion once I have found which cases could leak\nthe GUC parameters related to ssl_*.\nPlease wait a bit.\n\nBest regards.\nMoon.\n\nOn Mon, Mar 9, 2020 at 11:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/02/14 10:31, Moon, Insung wrote:\n> > Dear Hackers.\n> >\n> > Thank you for an response.\n> > I registered this entry in commifest of 2020-03.\n> > # I registered in the security part, but if it is wrong, sincerely\n> > apologize for this.\n> >\n> > And I'd like to review show authority to ssl_ * later and discuss it\n> > in a separate thread.\n>\n> So, you are planning to start new discussion about this?\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> NTT DATA CORPORATION\n> Advanced Platform Technology Group\n> Research and Development Headquarters\n\n\n",
"msg_date": "Mon, 9 Mar 2020 14:23:53 +0900",
"msg_from": "\"Moon, Insung\" <tsukiwamoon.pgsql@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposure related to GUC value of ssl_passphrase_command"
}
] |
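For readers of this archive: a minimal sketch of the behavior the thread converges on, assuming `ssl_passphrase_command` is marked `GUC_SUPERUSER_ONLY` as committed above. The role name is invented and the exact error wording varies across server versions:

```sql
-- As superuser, the setting remains visible:
SHOW ssl_passphrase_command;

-- A plain role can no longer read it:
CREATE ROLE alice LOGIN;          -- hypothetical role
SET ROLE alice;
SHOW ssl_passphrase_command;      -- rejected, e.g. "must be superuser or a
                                  -- member of pg_read_all_settings ..."

-- Membership in the predefined role restores read access:
RESET ROLE;
GRANT pg_read_all_settings TO alice;
```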
[
{
"msg_contents": "While messing around, I noticed that SET CONSTRAINTS ... DEFERRED\ndoes not work with partitioned tables. I had some code to cover this\ncase, but it has a bug that prevents it from working at all: the sanity\ncheck that verifies whether triggers exist fails.\n\nThe attached patch fixes this problem: it merely removes the sanity\ncheck. With that, everything works.\n\n(Another approach I tried was to split out constraints in partitioned\ntables vs. constraints in regular ones. That's indeed workable, but it\nrequires us to do two additional syscache accesses per partition for\nget_rel_relkind, which seems excessive.)\n\nThe UNIQUE DEFERRABLE case works after the patch. (I didn't try without\nthe patch.)\n\n-- \nÁlvaro Herrera Developer, https://www.PostgreSQL.org/",
"msg_date": "Tue, 5 Nov 2019 16:19:48 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "deferrable FK constraints on partitioned rels"
},
{
"msg_contents": "On 2019-Nov-05, Alvaro Herrera wrote:\n\n> While messing around, I noticed that SET CONSTRAINTS ... DEFERRED\n> does not work with partitioned tables. I had some code to cover this\n> case, but it has a bug that prevents it from working at all: the sanity\n> check that verifies whether triggers exist fails.\n> \n> The attached patch fixes this problem: it merely removes the sanity\n> check. With that, everything works.\n> \n> (Another approach I tried was to split out constraints in partitioned\n> tables vs. constraints in regular ones. That's indeed workable, but it\n> requires us to do two additional syscache access per partition for\n> get_rel_relkind, which seems excessive.)\n\nUh, somehow I posted a previous version of the patch that implements my\nrejected approach, instead of the final version I described. Here's the\nreal patch (which also includes tests).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 5 Nov 2019 18:29:16 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: deferrable FK constraints on partitioned rels"
},
{
"msg_contents": "On 2019-Nov-05, Alvaro Herrera wrote:\n\n> Uh, somehow I posted a previous version of the patch that implements my\n> rejected approach, instead of the final version I described. Here's the\n> real patch (which also includes tests).\n\nThis was broken in pg11 also. Pushed to all branches.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 7 Nov 2019 14:29:27 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: deferrable FK constraints on partitioned rels"
}
] |
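A rough sketch of the case fixed in this thread, with invented table names; before the fix, SET CONSTRAINTS ... DEFERRED tripped the trigger sanity check when the foreign key lived on a partitioned table:

```sql
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child (
    id  int,
    pid int REFERENCES parent (id) DEFERRABLE INITIALLY IMMEDIATE
) PARTITION BY RANGE (id);
CREATE TABLE child_1 PARTITION OF child FOR VALUES FROM (0) TO (100);

BEGIN;
SET CONSTRAINTS ALL DEFERRED;      -- previously failed here on partitioned tables
INSERT INTO child VALUES (1, 42);  -- no matching parent row yet
INSERT INTO parent VALUES (42);
COMMIT;                            -- deferred FK check runs here
```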
[
{
"msg_contents": "Hi,\n\nThere's a few errors that we issue that are, often, much less bad than\nthey sound. The most common cases I immediately can recall are:\n\n\n1) Mentioning crash, once for each backend, when shutting down\nimmediately. Currently the log output for that, with just two sessions\nconnected, is the following:\n\n2019-11-05 15:09:52.634 PST [9340][] LOG: 00000: received immediate shutdown request\n2019-11-05 15:09:52.634 PST [9340][] LOCATION: pmdie, postmaster.c:2883\n2019-11-05 15:09:52.634 PST [23199][4/0] WARNING: 57P02: terminating connection because of crash of another server process\n2019-11-05 15:09:52.634 PST [23199][4/0] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n2019-11-05 15:09:52.634 PST [23199][4/0] HINT: In a moment you should be able to reconnect to the database and repeat your command.\n2019-11-05 15:09:52.634 PST [23199][4/0] LOCATION: quickdie, postgres.c:2734\n2019-11-05 15:09:52.634 PST [23187][3/0] WARNING: 57P02: terminating connection because of crash of another server process\n2019-11-05 15:09:52.634 PST [23187][3/0] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n2019-11-05 15:09:52.634 PST [23187][3/0] HINT: In a moment you should be able to reconnect to the database and repeat your command.\n2019-11-05 15:09:52.634 PST [23187][3/0] LOCATION: quickdie, postgres.c:2734\n2019-11-05 15:09:52.634 PST [9345][1/0] WARNING: 57P02: terminating connection because of crash of another server process\n2019-11-05 15:09:52.634 PST [9345][1/0] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n2019-11-05 
15:09:52.634 PST [9345][1/0] HINT: In a moment you should be able to reconnect to the database and repeat your command.\n2019-11-05 15:09:52.634 PST [9345][1/0] LOCATION: quickdie, postgres.c:2734\n2019-11-05 15:09:52.644 PST [9340][] LOG: 00000: database system is shut down\n2019-11-05 15:09:52.644 PST [9340][] LOCATION: UnlinkLockFiles, miscinit.c:859\n\n(23199, 23187 are backends, 9345 is the autovacuum launcher)\n\nI think there's multiple things wrong with this. For one, reading that\nthe server has (no might in there) crashed, is scary, when all that's\nhappening is an intentional shutdown. But also the sheer log volume is\nbad - on a busy server there may be a *lot* of these lines.\n\nI think the log volume is bad even for an actual PANIC. I've spent *way*\ntoo much time scrolling through pages and pages of the above lines, just\nto find the one or two lines indicating an actual error.\n\n\nIt seems like we ought to be able to somehow\n\na) Signal that the server has been shut down in immediate mode, rather\nthan actually crashed, and issue a different log message to the user.\n\nb) Stop issuing, to the server log, the same message over and over. We\ninstead just ought to send the message to the client. We however need to\nbe careful that we don't make it harder to debug a SIGQUIT sent directly\nto backend processes.\n\n\n2) At the end of crash recovery, and often when the startup processes\nswitches between WAL sources, we get scary messages like:\n\n2019-11-05 15:20:21.907 PST [23407][] <> <> LOG: 00000: invalid record length at F/48C0A500: wanted 24, got 0\n2019-11-05 15:20:21.907 PST [23407][] <> <> LOCATION: ReadRecord, xlog.c:4282\nor\n2019-11-05 15:35:03.321 PST [28518][1/0] LOG: invalid resource manager ID 52 at 3/7CD0B1B8\n\nor any of the other types of xlogreader errors.\n\nI've seen countless tickets being raised because PG users looked at\ntheir logs and got scared. 
We want them to look at the logs, so this\nseems counter-productive.\n\nIt seems we, at the very least, could add an error context or something\nexplaining that a LOG message about the log end is to be expected. Or\nperhaps we should reformulate the message to something like\n'ELEVEL: reached end of valid WAL at XX/XX'\n'DETAIL: end determined due to invalid record length: wanted 24, got 0'\n\nperhaps with a HINT in the elevel < ERROR case indicating that this is\nnot a failure.\n\n\n3) When a standby node is shut down in immediate mode, we issue:\n\n2019-11-05 15:45:58.722 PST [30321][] LOG: database system was interrupted while in recovery at log time 2019-11-05 15:37:43 PST\n2019-11-05 15:45:58.722 PST [30321][] HINT: If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.\n\nWhich imo doesn't really make sense for at least standbys, which are\nexpected to be in recovery forever. There's more than enough scenarios\nwhere one would shut down a standby in immediate mode for good reasons,\nwe shouldn't issue such warnings in that case.\n\n\nThe tricky part in improving this would be how to detect when the\nmessage should still be issued for a standby. One idea, which is not\nbullet proof but might be good enough, would be to record in the control\nfile which position recovery was started from last time, and only issue\nthe error when recovery would start from the same point.\n\n\n\nI'm sure there are more types of messages in this category, these are\njust the ones I could immediately recall from memory as having scared\nactual users unnecessarily.\n\n\nI don't plan on fixing these immediately myself, even if we were to\nagree on something, so if anybody is interested in helping...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 5 Nov 2019 15:54:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Should we make scary sounding, but actually routine, errors less\n scary?"
},
{
"msg_contents": "On 11/05/19 18:54, Andres Freund wrote:\n> Hi,\n> \n> There's a few errors that we issue that are, often, much less bad than\n> they sound. The most common cases I immediately can recall are:\n> \n> \n> 1) Mentioning crash, once for each backend, when shutting down\n> immediately. Currently the log output for that, with just two sessions\n> connected, is the following:\n\nWhile on the topic ... this may be more a property of particular\npackagings of the server, to run under systemd etc., but often there\nis a process during startup trying periodically to open a connection\nto the server to confirm that it has successfully started, and the\nresult is a dozen or so log messages that say \"FATAL: the server is\nstarting\" ... which is amusing once you get what it's doing, but a bit\ndisconcerting until then.\n\nNot sure how that could be changed ... maybe a connection-time option\ntrial_connection that would suppress the fatal ereport on rejecting\nthe connection?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Tue, 5 Nov 2019 22:00:58 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Should we make scary sounding, but actually routine, errors less\n scary?"
},
{
"msg_contents": "At Tue, 5 Nov 2019 15:54:22 -0800, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> There's a few errors that we issue that are, often, much less bad than\n> they sound. The most common cases I immediately can recall are:\n> \n> \n> 1) Mentioning crash, once for each backend, when shutting down\n> immediately. Currently the log output for that, with just two sessions\n> connected, is the following:\n> \n> 2019-11-05 15:09:52.634 PST [9340][] LOG: 00000: received immediate shutdown request\n> 2019-11-05 15:09:52.634 PST [9340][] LOCATION: pmdie, postmaster.c:2883\n> 2019-11-05 15:09:52.634 PST [23199][4/0] WARNING: 57P02: terminating connection because of crash of another server process\n> 2019-11-05 15:09:52.634 PST [23199][4/0] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n> 2019-11-05 15:09:52.634 PST [23199][4/0] HINT: In a moment you should be able to reconnect to the database and repeat your command.\n> 2019-11-05 15:09:52.634 PST [23199][4/0] LOCATION: quickdie, postgres.c:2734\n> 2019-11-05 15:09:52.634 PST [23187][3/0] WARNING: 57P02: terminating connection because of crash of another server process\n> 2019-11-05 15:09:52.634 PST [23187][3/0] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n> 2019-11-05 15:09:52.634 PST [23187][3/0] HINT: In a moment you should be able to reconnect to the database and repeat your command.\n> 2019-11-05 15:09:52.634 PST [23187][3/0] LOCATION: quickdie, postgres.c:2734\n> 2019-11-05 15:09:52.634 PST [9345][1/0] WARNING: 57P02: terminating connection because of crash of another server process\n> 2019-11-05 15:09:52.634 PST [9345][1/0] DETAIL: The postmaster has commanded this server process to roll back the current 
transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n> 2019-11-05 15:09:52.634 PST [9345][1/0] HINT: In a moment you should be able to reconnect to the database and repeat your command.\n> 2019-11-05 15:09:52.634 PST [9345][1/0] LOCATION: quickdie, postgres.c:2734\n> 2019-11-05 15:09:52.644 PST [9340][] LOG: 00000: database system is shut down\n> 2019-11-05 15:09:52.644 PST [9340][] LOCATION: UnlinkLockFiles, miscinit.c:859\n> \n> (23199, 23187 are backends, 9345 is the autovacuum launcher)\n> \n> I think there's multiple things wrong with this. For one, reading that\n> the server has (no might in there) crashed, is scary, when all that's\n> happening is an intentional shutdown. But also the sheer log volume is\n> bad - on a busy server there may be a *lot* of these lines.\n> \n> I think the log volume is bad even for an actual PANIC. I've spent *way*\n> too much time scrolling through pages and pages of the above lines, just\n> to find the one or two lines indicating an actual error.\n>\n> It seems like we ought to be able to somehow\n> \n> a) Signal that the server has been shut down in immediate mode, rather\n> than actually crashed, and issue a different log message to the user.\n> \n> b) Stop issuing, to the server log, the same message over and over. We\n> instead just ought to send the message to the client. We however need to\n> be careful that we don't make it harder to debug a SIGQUIT sent directly\n> to backend processes.\n\nI doubt that different messages for the server log and the client are worth\nthe effort. Isn't it enough to move the cause description from the backend\nmessage to the postmaster one?\n\nIn addition to that, I don't see such a message when connecting with psql. It\njust reports \"server closed the connection unexpectedly\" without a\nserver message. 
If I'm not missing something the HINT message is\nuseless..\n\n\n 2019-11-05 15:09:52.634 PST [9340][] LOG: 00000: received immediate shutdown request\n 2019-11-05 15:09:52.634 PST [9340][] LOCATION: pmdie, postmaster.c:2883\n+ 2019-11-05 15:09:52.634 PST [9340][] LOG: terminating all active server processes\n- 2019-11-05 15:09:52.634 PST [23199][4/0] WARNING: 57P02: terminating connection because of crash of another server process\n+ 2019-11-05 15:09:52.634 PST [23199][4/0] WARNING: 57P02: terminating connection due to command by postmaster\n- 2019-11-05 15:09:52.634 PST [23199][1/0] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n- 2019-11-05 15:09:52.634 PST [23199][1/0] HINT: In a moment you should be able to reconnect to the database and repeat your command.\n\nIf some process has crashed, it looks like:\n\n2019-11-06 10:47:11.387 JST [19737] LOG: 00000: server process (PID 19774) was terminated by signal 11: Segmentation fault\n2019-11-06 10:47:11.387 JST [19737] DETAIL: Failed process was running: select pg_sleep(1200);\n2019-11-06 10:47:11.387 JST [19737] LOCATION: LogChildExit, postmaster.c:3749\n2019-11-06 10:47:11.387 JST [19737] LOG: 00000: terminating any other active server processes, because a server process exited abnormally and possibly corrupted shared memory.\n2019-11-06 10:47:11.387 JST [19737] LOCATION: HandleChildCrash, postmaster.c:3469\n2019-11-06 10:47:11.387 JST [19800] WARNING: 57P02: terminating connection due to command by postmaster\n2019-11-06 10:47:11.387 JST [19800] LOCATION: quickdie, postgres.c:2736\n\n> 2) At the end of crash recovery, and often when the startup processes\n> switches between WAL sources, we get scary messages like:\n> \n> 2019-11-05 15:20:21.907 PST [23407][] <> <> LOG: 00000: invalid record length at F/48C0A500: wanted 24, got 0\n> 2019-11-05 15:20:21.907 PST [23407][] 
<> <> LOCATION: ReadRecord, xlog.c:4282\n> or\n> 2019-11-05 15:35:03.321 PST [28518][1/0] LOG: invalid resource manager ID 52 at 3/7CD0B1B8\n> \n> or any of the other types of xlogreader errors.\n> \n> I've seen countless tickets being raised because PG users looked at\n> their logs and got scared. We want them to look at the logs, so this\n> seems counter-productive.\n> \n> It seems we, at the very least, could add an error context or something\n> explaining that a LOG message about the log end is to be expected. Or\n> perhaps we could should reformulate the message to something like\n> 'ELEVEL: reached end of valid WAL at XX/XX'\n> 'DETAIL: end determined due to invalid record length: wanted 24, got 0'\n> \n> perhaps with a HINT in the elevel < ERROR case indicating that this is\n> not a failure.\n\nThe proposed message seems far less scary. +1.\n\n> 3) When a standby node is shutdown in immediate mode, we issue:\n> \n> 2019-11-05 15:45:58.722 PST [30321][] LOG: database system was interrupted while in recovery at log time 2019-11-05 15:37:43 PST\n> 2019-11-05 15:45:58.722 PST [30321][] HINT: If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.\n> \n> Which imo doesn't really make sense for at least standbys, which are\n> expected to be in recovery forever. There's more than enough scenarios\n> where one would shut down a standby in immediate mode for good reasons,\n> we shouldn't issue such warnings in that case.\n> \n> \n> The tricky part in improving this would be how to detect when the\n> message should still be issued for a standby. One idea, which is not\n> bullet proof but might be good enough, would be to record in the control\n> file which position recovery was started from last time, and only issue\n> the error when recovery would start from the same point.\n\nRecovery always starts from the latest REDO point. 
Maybe\nMinRecoveryPoint is better suited for this use.\n\n> I'm sure there are more types of messages in this category, these are\n> just the ones I could immediately recall from memory as having scared\n> actual users unnecessarily.\n\nI often see inquiries about \"FATAL: the database system is starting\nup\". It is actually FATAL for backends internally but it is also\noverly scary for users.\n\n> I don't plan on fixing these immediately myself, even if we were to\n> agree on something, so if anybody is interested in helping...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 06 Nov 2019 17:36:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we make scary sounding, but actually routine, errors\n less scary?"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> There's a few errors that we issue that are, often, much less bad than\n> they sound. The most common cases I immediately can recall are:\n\nAgreed.\n\n> 1) Mentioning crash, once for each backend, when shutting down\n> immediately. Currently the log output for that, with just two sessions\n> connected, is the following:\n\nEh, +1 or +2.\n\n> It seems like we ought to be able to somehow\n> \n> a) Signal that the server has been shut down in immediate mode, rather\n> than actually crashed, and issue a different log message to the user.\n\nThat'd be nice, but I anticipate the question coming up of \"how much is\ntoo much to do before an immediate shutdown?\"\n\n> b) Stop issuing, to the server log, the same message over and over. We\n> instead just ought to send the message to the client. We however need to\n> be careful that we don't make it harder to debug a SIGQUIT sent directly\n> to backend processes.\n\nWould be nice if we could improve that, agreed.\n\n> 2) At the end of crash recovery, and often when the startup processes\n> switches between WAL sources, we get scary messages like:\n\n+1000\n\n> I've seen countless tickets being raised because PG users looked at\n> their logs and got scared. We want them to look at the logs, so this\n> seems counter-productive.\n\nYes, agreed, this happens all the time and would be good to improve.\n\n> It seems we, at the very least, could add an error context or something\n> explaining that a LOG message about the log end is to be expected. 
Or\n> perhaps we could should reformulate the message to something like\n> 'ELEVEL: reached end of valid WAL at XX/XX'\n> 'DETAIL: end determined due to invalid record length: wanted 24, got 0'\n> \n> perhaps with a HINT in the elevel < ERROR case indicating that this is\n> not a failure.\n\nSomething like that does look like an improvement.\n\n> 3) When a standby node is shutdown in immediate mode, we issue:\n> \n> 2019-11-05 15:45:58.722 PST [30321][] LOG: database system was interrupted while in recovery at log time 2019-11-05 15:37:43 PST\n> 2019-11-05 15:45:58.722 PST [30321][] HINT: If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.\n> \n> Which imo doesn't really make sense for at least standbys, which are\n> expected to be in recovery forever. There's more than enough scenarios\n> where one would shut down a standby in immediate mode for good reasons,\n> we shouldn't issue such warnings in that case.\n> \n> \n> The tricky part in improving this would be how to detect when the\n> message should still be issued for a standby. One idea, which is not\n> bullet proof but might be good enough, would be to record in the control\n> file which position recovery was started from last time, and only issue\n> the error when recovery would start from the same point.\n\nYeah... This sounds like it would be more difficult to tackle, though I\nagree it'd be nice to improve on this too.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 6 Nov 2019 08:21:59 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Should we make scary sounding, but actually routine, errors less\n scary?"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-05 22:00:58 -0500, Chapman Flack wrote:\n> On 11/05/19 18:54, Andres Freund wrote:\n> > Hi,\n> > \n> > There's a few errors that we issue that are, often, much less bad than\n> > they sound. The most common cases I immediately can recall are:\n> > \n> > \n> > 1) Mentioning crash, once for each backend, when shutting down\n> > immediately. Currently the log output for that, with just two sessions\n> > connected, is the following:\n> \n> While on the topic ... this may be more a property of particular\n> packagings of the server, to run under systemd etc., but often there\n> is a process during startup trying periodically to open a connection\n> to the server to confirm that it has successfully started, and the\n> result is a dozen or so log messages that say \"FATAL: the server is\n> starting\" ... which is amusing once you get what it's doing, but a bit\n> disconcerting until then.\n\nI think that is best solved by using pg_ctl's logic to look at the\npostmaster state file, rather than connecting before the server is\nready. For one connecting requires to actually be able to connect, which\nisn't always a given. If using pg_ctl is problematic for some reason,\nit'd imo be better to extract the relevant logic into its own tool.\n\n\n> Not sure how that could be changed ... maybe a connection-time option\n> trial_connection that would suppress the fatal ereport on rejecting\n> the connection?\n\nI think that'd be a recipe for hard to debug issues. Imagine somebody\nDOSing the server and setting that option - you'd have no way to\nactually see what's happening.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 7 Nov 2019 15:46:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Should we make scary sounding, but actually routine, errors less\n scary?"
},
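To illustrate the suggestion in the message above: rather than poll-connecting during startup (which produces one "FATAL: the database system is starting up" log line per attempt), pg_ctl can wait on the postmaster state directly. A sketch, with a hypothetical data directory path:

```shell
# Start and block until the postmaster reports it is ready (or time out):
pg_ctl -D /var/lib/postgresql/data -w -t 60 start

# For a postmaster launched by something else (e.g. a service manager),
# checking status avoids connection attempts entirely:
pg_ctl -D /var/lib/postgresql/data status
```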
{
"msg_contents": "Hi,\n\nOn 2019-11-06 17:36:09 +0900, Kyotaro Horiguchi wrote:\n> At Tue, 5 Nov 2019 15:54:22 -0800, Andres Freund <andres@anarazel.de> wrote in \n> > Hi,\n> > \n> > There's a few errors that we issue that are, often, much less bad than\n> > they sound. The most common cases I immediately can recall are:\n> > \n> > \n> > 1) Mentioning crash, once for each backend, when shutting down\n> > immediately. Currently the log output for that, with just two sessions\n> > connected, is the following:\n> > \n> > 2019-11-05 15:09:52.634 PST [9340][] LOG: 00000: received immediate shutdown request\n> > 2019-11-05 15:09:52.634 PST [9340][] LOCATION: pmdie, postmaster.c:2883\n> > 2019-11-05 15:09:52.634 PST [23199][4/0] WARNING: 57P02: terminating connection because of crash of another server process\n> > 2019-11-05 15:09:52.634 PST [23199][4/0] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n> > 2019-11-05 15:09:52.634 PST [23199][4/0] HINT: In a moment you should be able to reconnect to the database and repeat your command.\n> > 2019-11-05 15:09:52.634 PST [23199][4/0] LOCATION: quickdie, postgres.c:2734\n> > 2019-11-05 15:09:52.634 PST [23187][3/0] WARNING: 57P02: terminating connection because of crash of another server process\n> > 2019-11-05 15:09:52.634 PST [23187][3/0] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n> > 2019-11-05 15:09:52.634 PST [23187][3/0] HINT: In a moment you should be able to reconnect to the database and repeat your command.\n> > 2019-11-05 15:09:52.634 PST [23187][3/0] LOCATION: quickdie, postgres.c:2734\n> > 2019-11-05 15:09:52.634 PST [9345][1/0] WARNING: 57P02: terminating connection because of crash of another server process\n> > 2019-11-05 
15:09:52.634 PST [9345][1/0] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n> > 2019-11-05 15:09:52.634 PST [9345][1/0] HINT: In a moment you should be able to reconnect to the database and repeat your command.\n> > 2019-11-05 15:09:52.634 PST [9345][1/0] LOCATION: quickdie, postgres.c:2734\n> > 2019-11-05 15:09:52.644 PST [9340][] LOG: 00000: database system is shut down\n> > 2019-11-05 15:09:52.644 PST [9340][] LOCATION: UnlinkLockFiles, miscinit.c:859\n> > \n> > (23199, 23187 are backends, 9345 is the autovacuum launcher)\n> > \n> > I think there's multiple things wrong with this. For one, reading that\n> > the server has (no might in there) crashed, is scary, when all that's\n> > happening is an intentional shutdown. But also the sheer log volume is\n> > bad - on a busy server there may be a *lot* of these lines.\n> > \n> > I think the log volume is bad even for an actual PANIC. I've spent *way*\n> > too much time scrolling through pages and pages of the above lines, just\n> > to find the one or two lines indicating an actual error.\n> >\n> > It seems like we ought to be able to somehow\n> > \n> > a) Signal that the server has been shut down in immediate mode, rather\n> > than actually crashed, and issue a different log message to the user.\n> > \n> > b) Stop issuing, to the server log, the same message over and over. We\n> > instead just ought to send the message to the client. We however need to\n> > be careful that we don't make it harder to debug a SIGQUIT sent directly\n> > to backend processes.\n> \n> I doubt that different messages for server log and client worth\n> doing. Isn't it enough moving the cause description from backend\n> message to postmaster one?\n\nI'm not quite following what you're suggesting?\n\n\n> Addition to that, I don't see such a message on connecting psql. 
It\n> just reports as \"server closed the connection unexpectedly\" without a\n> server message. If I'm not missing something the HINT message is\n> useless..\n\nIt depends on the state of the connection IIRC, whether you'll get the\nerror message or not. There's some cases where we might get notified\nabout the connection having been closed, without actually reading the\nerror message. There were some libpq fixes around this not too long\nago, is it possible that you're running a version of psql linked against\nan older libpq version (e.g. from the OS)?\n\n\nI'm not a fan of the above error message libpq generates - it logs this\nin plenty cases where there was a network error. Talking about server\ncrashes when simple network issues may be the reason imo is not a great\nidea.\n\n\n\n\n> > 3) When a standby node is shutdown in immediate mode, we issue:\n> > \n> > 2019-11-05 15:45:58.722 PST [30321][] LOG: database system was interrupted while in recovery at log time 2019-11-05 15:37:43 PST\n> > 2019-11-05 15:45:58.722 PST [30321][] HINT: If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.\n> > \n> > Which imo doesn't really make sense for at least standbys, which are\n> > expected to be in recovery forever. There's more than enough scenarios\n> > where one would shut down a standby in immediate mode for good reasons,\n> > we shouldn't issue such warnings in that case.\n> > \n> > \n> > The tricky part in improving this would be how to detect when the\n> > message should still be issued for a standby. One idea, which is not\n> > bullet proof but might be good enough, would be to record in the control\n> > file which position recovery was started from last time, and only issue\n> > the error when recovery would start from the same point.\n> \n> Recovery always starts from the latest REDO point.\n\nWell, not quite always, e.g. when restoring from a base backup. But\notherwise, yea. 
But I'm not quite sure what your point is? We store the\nlast checkpoint/redo position in the control file, and update it every\ncheckpoint/restartpoint - so the last REDO pointer stored there, would\nnot necessarily be the same as the redo pointer we started up from last?\n\nAm I misunderstanding?\n\n\n> Maybe MinRecoveryPoint is better for the use.\n\nWhat MinRecoveryPoint gets set is not deterministic - it depends\ne.g. which buffers get written out at what time. I don't quite see how\nwe could make reliable use of it.\n\n\n> > I'm sure there are more types of messages in this category, these are\n> > just the ones I could immediately recall from memory as having scared\n> > actual users unnecessarily.\n> \n> I often see inquiries on \"FATAL: the database system is starting\n> up\". It is actually FATAL for backends internally but it is also\n> overly scary for users.\n\nWe could probably just reformulate that error message and help users,\nwithout any larger changes...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 7 Nov 2019 16:00:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Should we make scary sounding, but actually routine, errors less\n scary?"
}
] |
[
{
"msg_contents": "Hackers,\n\nplease find attached a patch fixing a problem previously discussed [1] \nabout the code inappropriately ignoring the return value from SPI_execute.\n\nI will be adding this to https://commitfest.postgresql.org/26/ shortly.\n\nMark Dilger\n\n[1] https://www.postgresql.org/message-id/24753.1558141935%40sss.pgh.pa.us",
"msg_date": "Tue, 5 Nov 2019 17:21:25 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Checking return value of SPI_execute"
},
{
"msg_contents": "On Tue, Nov 05, 2019 at 05:21:25PM -0800, Mark Dilger wrote:\n> please find attached a patch fixing a problem previously discussed [1] about\n> the code inappropriately ignoring the return value from SPI_execute.\n> \n> I will be adding this to https://commitfest.postgresql.org/26/\n> shortly.\n\nYes, this should be fixed.\n\n> -\tSPI_execute(query, true, 0);\n> +\tspi_result = SPI_execute(query, true, 0);\n> +\tif (spi_result < 0)\n> +\t\telog(ERROR, \"SPI_execute returned %s\", SPI_result_code_string(spi_result));\n\nAny queries processed in xml.c are plain SELECT queries, so it seems\nto me that you need to check after SPI_OK_SELECT as only valid\nresult.\n--\nMichael",
"msg_date": "Wed, 6 Nov 2019 13:27:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Checking return value of SPI_execute"
},
{
"msg_contents": "st 6. 11. 2019 v 5:28 odesílatel Michael Paquier <michael@paquier.xyz>\nnapsal:\n\n> On Tue, Nov 05, 2019 at 05:21:25PM -0800, Mark Dilger wrote:\n> > please find attached a patch fixing a problem previously discussed [1]\n> about\n> > the code inappropriately ignoring the return value from SPI_execute.\n> >\n> > I will be adding this to https://commitfest.postgresql.org/26/\n> > shortly.\n>\n> Yes, this should be fixed.\n>\n> > - SPI_execute(query, true, 0);\n> > + spi_result = SPI_execute(query, true, 0);\n> > + if (spi_result < 0)\n> > + elog(ERROR, \"SPI_execute returned %s\",\n> SPI_result_code_string(spi_result));\n>\n> Any queries processed in xml.c are plain SELECT queries, so it seems\n> to me that you need to check after SPI_OK_SELECT as only valid\n> result.\n>\n\nIs generic question if this exception should not be raised somewhere in\nspi.c - maybe at SPI_execute\n\nWhen you look to SPI_execute_plan, then checked errors has a character +/-\nassertions. All SQL errors are ended by a exception. This API is not too\nconsistent after years what is used.\n\nI agree so this result code should be tested for better code quality. But\nthis API is not consistent now, and should be refactored to use a\nexceptions instead result codes. Or instead error checking, a assertions\nshould be used.\n\nWhat do you think about it?\n\nPavel\n\n\n\n--\n> Michael\n>\n\nst 6. 11. 
2019 v 5:28 odesílatel Michael Paquier <michael@paquier.xyz> napsal:On Tue, Nov 05, 2019 at 05:21:25PM -0800, Mark Dilger wrote:\n> please find attached a patch fixing a problem previously discussed [1] about\n> the code inappropriately ignoring the return value from SPI_execute.\n> \n> I will be adding this to https://commitfest.postgresql.org/26/\n> shortly.\n\nYes, this should be fixed.\n\n> - SPI_execute(query, true, 0);\n> + spi_result = SPI_execute(query, true, 0);\n> + if (spi_result < 0)\n> + elog(ERROR, \"SPI_execute returned %s\", SPI_result_code_string(spi_result));\n\nAny queries processed in xml.c are plain SELECT queries, so it seems\nto me that you need to check after SPI_OK_SELECT as only valid\nresult.Is generic question if this exception should not be raised somewhere in spi.c - maybe at SPI_executeWhen you look to SPI_execute_plan, then checked errors has a character +/- assertions. All SQL errors are ended by a exception. This API is not too consistent after years what is used.I agree so this result code should be tested for better code quality. But this API is not consistent now, and should be refactored to use a exceptions instead result codes. Or instead error checking, a assertions should be used.What do you think about it?Pavel\n--\nMichael",
"msg_date": "Wed, 6 Nov 2019 06:54:16 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checking return value of SPI_execute"
},
{
"msg_contents": "On Wed, Nov 06, 2019 at 06:54:16AM +0100, Pavel Stehule wrote:\n> Is generic question if this exception should not be raised somewhere in\n> spi.c - maybe at SPI_execute.\n> \n> When you look to SPI_execute_plan, then checked errors has a character +/-\n> assertions. All SQL errors are ended by a exception. This API is not too\n> consistent after years what is used.\n> \n> I agree so this result code should be tested for better code quality. But\n> this API is not consistent now, and should be refactored to use a\n> exceptions instead result codes. Or instead error checking, a assertions\n> should be used.\n>\n> What do you think about it?\n\nI am not sure what you are proposing here, nor am I sure to what kind\nof assertions you are referring to in spi.c. If we were to change the\nerror reporting, what of the external and existing consumers of this\nroutine? They would not expect to bump on an exception and perhaps\nneed to handle error code paths by themselves, no?\n\nAnyway, any callers of SPI_execute() (tablefunc.c, matview.c) we have\nnow in-core react based on a status or a set of statuses they expect,\nso based on that fixing this caller in xml.c sounds fine to me.\n--\nMichael",
"msg_date": "Wed, 6 Nov 2019 16:56:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Checking return value of SPI_execute"
},
{
"msg_contents": "st 6. 11. 2019 v 8:56 odesílatel Michael Paquier <michael@paquier.xyz>\nnapsal:\n\n> On Wed, Nov 06, 2019 at 06:54:16AM +0100, Pavel Stehule wrote:\n> > Is generic question if this exception should not be raised somewhere in\n> > spi.c - maybe at SPI_execute.\n> >\n> > When you look to SPI_execute_plan, then checked errors has a character\n> +/-\n> > assertions. All SQL errors are ended by a exception. This API is not too\n> > consistent after years what is used.\n> >\n> > I agree so this result code should be tested for better code quality. But\n> > this API is not consistent now, and should be refactored to use a\n> > exceptions instead result codes. Or instead error checking, a assertions\n> > should be used.\n> >\n> > What do you think about it?\n>\n> I am not sure what you are proposing here, nor am I sure to what kind\n> of assertions you are referring to in spi.c. If we were to change the\n> error reporting, what of the external and existing consumers of this\n> routine? They would not expect to bump on an exception and perhaps\n> need to handle error code paths by themselves, no?\n>\n\n> Anyway, any callers of SPI_execute() (tablefunc.c, matview.c) we have\n> now in-core react based on a status or a set of statuses they expect,\n> so based on that fixing this caller in xml.c sounds fine to me.\n>\n\nThis fix is correct.\n\nMy comment was about maybe obsolescence of this API. Probably it was\ndesigned before exception introduction.\n\nFor example - syntax error is ended by exception. Wrong numbers of argument\nis signalized by error status. I didn't study this code, but maybe was much\nmore effective to raise exceptions inside SPI instead return status code.\nThese errors are finished by exceptions, but these exceptions coming from\ndifferent places. For me it looks strange, if some functions returns error\nstatus, but can be ended by exception too.\n\nPavel\n\n> --\n> Michael\n>\n\nst 6. 11. 
2019 v 8:56 odesílatel Michael Paquier <michael@paquier.xyz> napsal:On Wed, Nov 06, 2019 at 06:54:16AM +0100, Pavel Stehule wrote:\n> Is generic question if this exception should not be raised somewhere in\n> spi.c - maybe at SPI_execute.\n> \n> When you look to SPI_execute_plan, then checked errors has a character +/-\n> assertions. All SQL errors are ended by a exception. This API is not too\n> consistent after years what is used.\n> \n> I agree so this result code should be tested for better code quality. But\n> this API is not consistent now, and should be refactored to use a\n> exceptions instead result codes. Or instead error checking, a assertions\n> should be used.\n>\n> What do you think about it?\n\nI am not sure what you are proposing here, nor am I sure to what kind\nof assertions you are referring to in spi.c. If we were to change the\nerror reporting, what of the external and existing consumers of this\nroutine? They would not expect to bump on an exception and perhaps\nneed to handle error code paths by themselves, no? \n\nAnyway, any callers of SPI_execute() (tablefunc.c, matview.c) we have\nnow in-core react based on a status or a set of statuses they expect,\nso based on that fixing this caller in xml.c sounds fine to me.This fix is correct. My comment was about maybe obsolescence of this API. Probably it was designed before exception introduction. For example - syntax error is ended by exception. Wrong numbers of argument is signalized by error status. I didn't study this code, but maybe was much more effective to raise exceptions inside SPI instead return status code. These errors are finished by exceptions, but these exceptions coming from different places. For me it looks strange, if some functions returns error status, but can be ended by exception too.Pavel\n--\nMichael",
"msg_date": "Wed, 6 Nov 2019 10:40:14 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checking return value of SPI_execute"
},
{
"msg_contents": "On 2019-Nov-06, Pavel Stehule wrote:\n\n> My comment was about maybe obsolescence of this API. Probably it was\n> designed before exception introduction.\n> \n> For example - syntax error is ended by exception. Wrong numbers of argument\n> is signalized by error status. I didn't study this code, but maybe was much\n> more effective to raise exceptions inside SPI instead return status code.\n> These errors are finished by exceptions, but these exceptions coming from\n> different places. For me it looks strange, if some functions returns error\n> status, but can be ended by exception too.\n\nYeah, I think I'd rather have more status codes and less exceptions,\nthan the other way around. The problem with throwing exceptions for\nevery kind of error is that we don't allow exceptions to be caught (per\nproject policy) except to be rethrown. It seems like for errors where\nthe SPI code can clean up its own resources (free memory, close portals\netc), it should do such cleanup then return SPI_SYNTAX_ERROR or whatever\nand the caller can decide whether to turn this into an exception or\nhandle in a different way; whereas for exceptions thrown by callees (say\nOOM) it would just propagate the exception. This mean callers are\nforced into adding code to check for return codes, but it allows more\nflexibility.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 6 Nov 2019 12:11:12 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Checking return value of SPI_execute"
},
{
"msg_contents": "\n\nOn 11/5/19 8:27 PM, Michael Paquier wrote:\n> On Tue, Nov 05, 2019 at 05:21:25PM -0800, Mark Dilger wrote:\n>> please find attached a patch fixing a problem previously discussed [1] about\n>> the code inappropriately ignoring the return value from SPI_execute.\n>>\n>> I will be adding this to https://commitfest.postgresql.org/26/\n>> shortly.\n> \n> Yes, this should be fixed.\n> \n>> -\tSPI_execute(query, true, 0);\n>> +\tspi_result = SPI_execute(query, true, 0);\n>> +\tif (spi_result < 0)\n>> +\t\telog(ERROR, \"SPI_execute returned %s\", SPI_result_code_string(spi_result));\n> \n> Any queries processed in xml.c are plain SELECT queries, so it seems\n> to me that you need to check after SPI_OK_SELECT as only valid\n> result.\n\nOther code that checks the return value from an SPI function is \ninconsistent about whether it checks for SPI_OK_SELECT or simply checks \nfor a negative result. I was on the fence about which precedent to \nfollow, and was just slightly in favor of testing for negative rather \nthan SPI_OK_SELECT due to this function, query_to_oid_list, taking the \nquery string as an argument and not controlling whether that argument is \nindeed a plain SELECT.\n\nI don't feel strongly about it.\n\nMark Dilger\n\n\n",
"msg_date": "Wed, 6 Nov 2019 07:35:18 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Checking return value of SPI_execute"
},
{
"msg_contents": "\n\nOn 11/5/19 9:54 PM, Pavel Stehule wrote:\n> \n> \n> st 6. 11. 2019 v 5:28 odesílatel Michael Paquier <michael@paquier.xyz \n> <mailto:michael@paquier.xyz>> napsal:\n> \n> On Tue, Nov 05, 2019 at 05:21:25PM -0800, Mark Dilger wrote:\n> > please find attached a patch fixing a problem previously\n> discussed [1] about\n> > the code inappropriately ignoring the return value from SPI_execute.\n> >\n> > I will be adding this to https://commitfest.postgresql.org/26/\n> > shortly.\n> \n> Yes, this should be fixed.\n> \n> > - SPI_execute(query, true, 0);\n> > + spi_result = SPI_execute(query, true, 0);\n> > + if (spi_result < 0)\n> > + elog(ERROR, \"SPI_execute returned %s\",\n> SPI_result_code_string(spi_result));\n> \n> Any queries processed in xml.c are plain SELECT queries, so it seems\n> to me that you need to check after SPI_OK_SELECT as only valid\n> result.\n> \n> \n> Is generic question if this exception should not be raised somewhere in \n> spi.c - maybe at SPI_execute\n> \n> When you look to SPI_execute_plan, then checked errors has a character \n> +/- assertions. All SQL errors are ended by a exception. This API is not \n> too consistent after years what is used.\n> \n> I agree so this result code should be tested for better code quality. \n> But this API is not consistent now, and should be refactored to use a \n> exceptions instead result codes. Or instead error checking, a assertions \n> should be used.\n> \n> What do you think about it?\n\nI am creating another patch which removes most of the error codes from \nthe interface and uses elog(ERROR) or ereport(ERROR) instead, but I \nanticipate a lot of debate about that design and wanted to get this \nsimpler patch into the queue. I don't think we need to reject this \npatch in favor of redesigning the entire SPI API. 
Instead, we can apply \nthis patch as a simple bug fix, and then if it gets removed later when \nthe other, larger patch is committed, so be it.\n\nDoes that plan seem acceptable?\n\nMark Dilger\n\n\n",
"msg_date": "Wed, 6 Nov 2019 07:38:27 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Checking return value of SPI_execute"
},
{
"msg_contents": "st 6. 11. 2019 v 16:38 odesílatel Mark Dilger <hornschnorter@gmail.com>\nnapsal:\n\n>\n>\n> On 11/5/19 9:54 PM, Pavel Stehule wrote:\n> >\n> >\n> > st 6. 11. 2019 v 5:28 odesílatel Michael Paquier <michael@paquier.xyz\n> > <mailto:michael@paquier.xyz>> napsal:\n> >\n> > On Tue, Nov 05, 2019 at 05:21:25PM -0800, Mark Dilger wrote:\n> > > please find attached a patch fixing a problem previously\n> > discussed [1] about\n> > > the code inappropriately ignoring the return value from\n> SPI_execute.\n> > >\n> > > I will be adding this to https://commitfest.postgresql.org/26/\n> > > shortly.\n> >\n> > Yes, this should be fixed.\n> >\n> > > - SPI_execute(query, true, 0);\n> > > + spi_result = SPI_execute(query, true, 0);\n> > > + if (spi_result < 0)\n> > > + elog(ERROR, \"SPI_execute returned %s\",\n> > SPI_result_code_string(spi_result));\n> >\n> > Any queries processed in xml.c are plain SELECT queries, so it seems\n> > to me that you need to check after SPI_OK_SELECT as only valid\n> > result.\n> >\n> >\n> > Is generic question if this exception should not be raised somewhere in\n> > spi.c - maybe at SPI_execute\n> >\n> > When you look to SPI_execute_plan, then checked errors has a character\n> > +/- assertions. All SQL errors are ended by a exception. This API is not\n> > too consistent after years what is used.\n> >\n> > I agree so this result code should be tested for better code quality.\n> > But this API is not consistent now, and should be refactored to use a\n> > exceptions instead result codes. Or instead error checking, a assertions\n> > should be used.\n> >\n> > What do you think about it?\n>\n> I am creating another patch which removes most of the error codes from\n> the interface and uses elog(ERROR) or ereport(ERROR) instead, but I\n> anticipate a lot of debate about that design and wanted to get this\n> simpler patch into the queue. I don't think we need to reject this\n> patch in favor of redesigning the entire SPI API. 
Instead, we can apply\n> this patch as a simple bug fix, and then if it gets removed later when\n> the other, larger patch is committed, so be it.\n>\n> Does that plan seem acceptable?\n>\n\nI am not against these fix.\n\nRegards\n\nPavel\n\n>\n> Mark Dilger\n>\n\nst 6. 11. 2019 v 16:38 odesílatel Mark Dilger <hornschnorter@gmail.com> napsal:\n\nOn 11/5/19 9:54 PM, Pavel Stehule wrote:\n> \n> \n> st 6. 11. 2019 v 5:28 odesílatel Michael Paquier <michael@paquier.xyz \n> <mailto:michael@paquier.xyz>> napsal:\n> \n> On Tue, Nov 05, 2019 at 05:21:25PM -0800, Mark Dilger wrote:\n> > please find attached a patch fixing a problem previously\n> discussed [1] about\n> > the code inappropriately ignoring the return value from SPI_execute.\n> >\n> > I will be adding this to https://commitfest.postgresql.org/26/\n> > shortly.\n> \n> Yes, this should be fixed.\n> \n> > - SPI_execute(query, true, 0);\n> > + spi_result = SPI_execute(query, true, 0);\n> > + if (spi_result < 0)\n> > + elog(ERROR, \"SPI_execute returned %s\",\n> SPI_result_code_string(spi_result));\n> \n> Any queries processed in xml.c are plain SELECT queries, so it seems\n> to me that you need to check after SPI_OK_SELECT as only valid\n> result.\n> \n> \n> Is generic question if this exception should not be raised somewhere in \n> spi.c - maybe at SPI_execute\n> \n> When you look to SPI_execute_plan, then checked errors has a character \n> +/- assertions. All SQL errors are ended by a exception. This API is not \n> too consistent after years what is used.\n> \n> I agree so this result code should be tested for better code quality. \n> But this API is not consistent now, and should be refactored to use a \n> exceptions instead result codes. 
Or instead error checking, a assertions \n> should be used.\n> \n> What do you think about it?\n\nI am creating another patch which removes most of the error codes from \nthe interface and uses elog(ERROR) or ereport(ERROR) instead, but I \nanticipate a lot of debate about that design and wanted to get this \nsimpler patch into the queue. I don't think we need to reject this \npatch in favor of redesigning the entire SPI API. Instead, we can apply \nthis patch as a simple bug fix, and then if it gets removed later when \nthe other, larger patch is committed, so be it.\n\nDoes that plan seem acceptable?I am not against these fix. RegardsPavel\n\nMark Dilger",
"msg_date": "Wed, 6 Nov 2019 16:57:47 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checking return value of SPI_execute"
},
{
"msg_contents": "On Wed, Nov 06, 2019 at 07:35:18AM -0800, Mark Dilger wrote:\n> Other code that checks the return value from an SPI function is inconsistent\n> about whether it checks for SPI_OK_SELECT or simply checks for a negative\n> result. I was on the fence about which precedent to follow, and was just\n> slightly in favor of testing for negative rather than SPI_OK_SELECT due to\n> this function, query_to_oid_list, taking the query string as an argument and\n> not controlling whether that argument is indeed a plain SELECT.\n> \n> I don't feel strongly about it.\n\nThe code relies on SELECT queries now to fetch a list of relation\nOIDs and it is read-only. If it happens that another query type makes\nsense for this code path, then the person using the routine will need\nto think about what to do when seeing the new error. The current code\nexists for ages, so I have applied your change only on HEAD.\n--\nMichael",
"msg_date": "Thu, 7 Nov 2019 11:13:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Checking return value of SPI_execute"
},
{
"msg_contents": "\n\nOn 11/6/19 7:11 AM, Alvaro Herrera wrote:\n> On 2019-Nov-06, Pavel Stehule wrote:\n> \n>> My comment was about maybe obsolescence of this API. Probably it was\n>> designed before exception introduction.\n>>\n>> For example - syntax error is ended by exception. Wrong numbers of argument\n>> is signalized by error status. I didn't study this code, but maybe was much\n>> more effective to raise exceptions inside SPI instead return status code.\n>> These errors are finished by exceptions, but these exceptions coming from\n>> different places. For me it looks strange, if some functions returns error\n>> status, but can be ended by exception too.\n> \n> Yeah, I think I'd rather have more status codes and less exceptions,\n> than the other way around. The problem with throwing exceptions for\n> every kind of error is that we don't allow exceptions to be caught (per\n> project policy) except to be rethrown. It seems like for errors where\n> the SPI code can clean up its own resources (free memory, close portals\n> etc), it should do such cleanup then return SPI_SYNTAX_ERROR or whatever\n> and the caller can decide whether to turn this into an exception or\n> handle in a different way; whereas for exceptions thrown by callees (say\n> OOM) it would just propagate the exception. 
This mean callers are\n> forced into adding code to check for return codes, but it allows more\n> flexibility.\n> \n\nI like to distinguish between (a) errors that can happen when a well \nwritten bit of C code passes possibly bad SQL through SPI, and (b) \nerrors that can only happen when SPI is called from a poorly written C \nprogram.\n\nExamples of (a) are SPI_ERROR_COPY and SPI_ERROR_TRANSACTION, which can \nboth happen from disallowed actions within a plpgsql function.\n\nAn example of (b) is SPI_ERROR_PARAM, which only gets returned if the \ncaller passed into SPI a plan which has nargs > 0 but then negligently \npassed in NULL for the args and/or argtypes.\n\nI'd like to keep the status codes for (a) but deprecate error codes for \n(b) in favor of elog(ERROR). I don't see that these elogs should ever \nbe a problem, since getting one in testing would indicate the need to \nfix bad C code, not the need to catch an exception and take remedial \naction at run time. Does this adequately address your concern?\n\nMy research so far indicates that most return codes are either totally \nunused or of type (b), with only a few of type (a).\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Thu, 7 Nov 2019 09:05:53 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Checking return value of SPI_execute"
},
{
"msg_contents": "On 2019-Nov-07, Mark Dilger wrote:\n\n> I'd like to keep the status codes for (a) but deprecate error codes for (b)\n> in favor of elog(ERROR). I don't see that these elogs should ever be a\n> problem, since getting one in testing would indicate the need to fix bad C\n> code, not the need to catch an exception and take remedial action at run\n> time. Does this adequately address your concern?\n\nYes, I think it does.\n\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 7 Nov 2019 14:38:29 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Checking return value of SPI_execute"
}
] |
[
{
"msg_contents": "Hi,\n\nI found couple of crashes in reorderbuffer while review/testing of\nlogical_work_mem and logical streaming of large in-progress\ntransactions. Stack trace of the same are given below:\nIssue 1:\n#0 0x00007f985c7d8337 in raise () from /lib64/libc.so.6\n#1 0x00007f985c7d9a28 in abort () from /lib64/libc.so.6\n#2 0x0000000000ec514d in ExceptionalCondition\n(conditionName=0x10eab34 \"!dlist_is_empty(head)\", errorType=0x10eab24\n\"FailedAssertion\",\n fileName=0x10eab00 \"../../../../src/include/lib/ilist.h\",\nlineNumber=458) at assert.c:54\n#3 0x0000000000b4fd13 in dlist_tail_element_off (head=0x338fe60,\noff=48) at ../../../../src/include/lib/ilist.h:458\n#4 0x0000000000b547b7 in ReorderBufferAbortOld (rb=0x32ae7a0,\noldestRunningXid=895) at reorderbuffer.c:1910\n#5 0x0000000000b3cb5e in DecodeStandbyOp (ctx=0x33424b0,\nbuf=0x7fff7e7b1e40) at decode.c:332\n#6 0x0000000000b3c363 in LogicalDecodingProcessRecord (ctx=0x33424b0,\nrecord=0x3342770) at decode.c:121\n#7 0x0000000000b704b2 in XLogSendLogical () at walsender.c:2845\n#8 0x0000000000b6e9f8 in WalSndLoop (send_data=0xb7038b\n<XLogSendLogical>) at walsender.c:2199\n#9 0x0000000000b6bbf5 in StartLogicalReplication (cmd=0x33167a8) at\nwalsender.c:1128\n#10 0x0000000000b6ce83 in exec_replication_command\n(cmd_string=0x328a0a0 \"START_REPLICATION SLOT \\\"sub1\\\" LOGICAL 0/0\n(proto_version '1', publication_names '\\\"pub1\\\"')\")\n at walsender.c:1545\n#11 0x0000000000c39f85 in PostgresMain (argc=1, argv=0x32b51c0,\ndbname=0x32b50e0 \"testdb\", username=0x32b50c0 \"user1\") at\npostgres.c:4256\n#12 0x0000000000b10dc7 in BackendRun (port=0x32ad890) at postmaster.c:4498\n#13 0x0000000000b0ff3e in BackendStartup (port=0x32ad890) at postmaster.c:4189\n#14 0x0000000000b08505 in ServerLoop () at postmaster.c:1727\n#15 0x0000000000b0781a in PostmasterMain (argc=3, argv=0x3284cb0) at\npostmaster.c:1400\n#16 0x000000000097492d in main (argc=3, argv=0x3284cb0) at main.c:210\n\nIssue 2:\n#0 
0x00007f1d7ddc4337 in raise () from /lib64/libc.so.6\n#1 0x00007f1d7ddc5a28 in abort () from /lib64/libc.so.6\n#2 0x0000000000ec4e1d in ExceptionalCondition\n(conditionName=0x10ead30 \"txn->final_lsn != InvalidXLogRecPtr\",\nerrorType=0x10ea284 \"FailedAssertion\",\n fileName=0x10ea2d0 \"reorderbuffer.c\", lineNumber=3052) at assert.c:54\n#3 0x0000000000b577e0 in ReorderBufferRestoreCleanup (rb=0x2ae36b0,\ntxn=0x2bafb08) at reorderbuffer.c:3052\n#4 0x0000000000b52b1c in ReorderBufferCleanupTXN (rb=0x2ae36b0,\ntxn=0x2bafb08) at reorderbuffer.c:1318\n#5 0x0000000000b5279d in ReorderBufferCleanupTXN (rb=0x2ae36b0,\ntxn=0x2b9d778) at reorderbuffer.c:1257\n#6 0x0000000000b5475c in ReorderBufferAbortOld (rb=0x2ae36b0,\noldestRunningXid=3835) at reorderbuffer.c:1973\n#7 0x0000000000b3ca03 in DecodeStandbyOp (ctx=0x2b676d0,\nbuf=0x7ffcbc74cc00) at decode.c:332\n#8 0x0000000000b3c208 in LogicalDecodingProcessRecord (ctx=0x2b676d0,\nrecord=0x2b67990) at decode.c:121\n#9 0x0000000000b70b2b in XLogSendLogical () at walsender.c:2845\n\n From initial analysis it looks like:\nIssue1 it seems like if all the reorderbuffer has been flushed and\nthen the server restarts. This problem occurs.\nIssue 2 it seems like if there are many subtransactions present and\nthen the server restarts. This problem occurs. The subtransaction's\nfinal_lsn is not being set and when ReorderBufferRestoreCleanup is\ncalled the assert fails. May be for this we might have to set the\nsubtransaction's final_lsn before cleanup(not sure).\n\nI could not reproduce this issue consistently with a test case, But I\nfelt this looks like a problem from review.\n\nFor issue1, I could reproduce by the following steps:\n1) Change ReorderBufferCheckSerializeTXN so that it gets flushed always.\n2) Have many open transactions with subtransactions open.\n3) Attach one of the transaction from gdb and call abort().\n\nI'm not sure of the fix for this. 
If I get time I will try to spend\nmore time to find out the fix.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Nov 2019 17:20:02 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Wed, Nov 6, 2019 at 5:20 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> I found couple of crashes in reorderbuffer while review/testing of\n> logical_work_mem and logical streaming of large in-progress\n> transactions. Stack trace of the same are given below:\n> Issue 1:\n> #0 0x00007f985c7d8337 in raise () from /lib64/libc.so.6\n> #1 0x00007f985c7d9a28 in abort () from /lib64/libc.so.6\n> #2 0x0000000000ec514d in ExceptionalCondition\n> (conditionName=0x10eab34 \"!dlist_is_empty(head)\", errorType=0x10eab24\n> \"FailedAssertion\",\n> fileName=0x10eab00 \"../../../../src/include/lib/ilist.h\",\n> lineNumber=458) at assert.c:54\n> #3 0x0000000000b4fd13 in dlist_tail_element_off (head=0x338fe60,\n> off=48) at ../../../../src/include/lib/ilist.h:458\n> #4 0x0000000000b547b7 in ReorderBufferAbortOld (rb=0x32ae7a0,\n> oldestRunningXid=895) at reorderbuffer.c:1910\n> #5 0x0000000000b3cb5e in DecodeStandbyOp (ctx=0x33424b0,\n> buf=0x7fff7e7b1e40) at decode.c:332\n> #6 0x0000000000b3c363 in LogicalDecodingProcessRecord (ctx=0x33424b0,\n> record=0x3342770) at decode.c:121\n> #7 0x0000000000b704b2 in XLogSendLogical () at walsender.c:2845\n> #8 0x0000000000b6e9f8 in WalSndLoop (send_data=0xb7038b\n> <XLogSendLogical>) at walsender.c:2199\n> #9 0x0000000000b6bbf5 in StartLogicalReplication (cmd=0x33167a8) at\n> walsender.c:1128\n> #10 0x0000000000b6ce83 in exec_replication_command\n> (cmd_string=0x328a0a0 \"START_REPLICATION SLOT \\\"sub1\\\" LOGICAL 0/0\n> (proto_version '1', publication_names '\\\"pub1\\\"')\")\n> at walsender.c:1545\n> #11 0x0000000000c39f85 in PostgresMain (argc=1, argv=0x32b51c0,\n> dbname=0x32b50e0 \"testdb\", username=0x32b50c0 \"user1\") at\n> postgres.c:4256\n> #12 0x0000000000b10dc7 in BackendRun (port=0x32ad890) at postmaster.c:4498\n> #13 0x0000000000b0ff3e in BackendStartup (port=0x32ad890) at postmaster.c:4189\n> #14 0x0000000000b08505 in ServerLoop () at postmaster.c:1727\n> #15 0x0000000000b0781a in 
PostmasterMain (argc=3, argv=0x3284cb0) at\n> postmaster.c:1400\n> #16 0x000000000097492d in main (argc=3, argv=0x3284cb0) at main.c:210\n>\n> Issue 2:\n> #0 0x00007f1d7ddc4337 in raise () from /lib64/libc.so.6\n> #1 0x00007f1d7ddc5a28 in abort () from /lib64/libc.so.6\n> #2 0x0000000000ec4e1d in ExceptionalCondition\n> (conditionName=0x10ead30 \"txn->final_lsn != InvalidXLogRecPtr\",\n> errorType=0x10ea284 \"FailedAssertion\",\n> fileName=0x10ea2d0 \"reorderbuffer.c\", lineNumber=3052) at assert.c:54\n> #3 0x0000000000b577e0 in ReorderBufferRestoreCleanup (rb=0x2ae36b0,\n> txn=0x2bafb08) at reorderbuffer.c:3052\n> #4 0x0000000000b52b1c in ReorderBufferCleanupTXN (rb=0x2ae36b0,\n> txn=0x2bafb08) at reorderbuffer.c:1318\n> #5 0x0000000000b5279d in ReorderBufferCleanupTXN (rb=0x2ae36b0,\n> txn=0x2b9d778) at reorderbuffer.c:1257\n> #6 0x0000000000b5475c in ReorderBufferAbortOld (rb=0x2ae36b0,\n> oldestRunningXid=3835) at reorderbuffer.c:1973\n> #7 0x0000000000b3ca03 in DecodeStandbyOp (ctx=0x2b676d0,\n> buf=0x7ffcbc74cc00) at decode.c:332\n> #8 0x0000000000b3c208 in LogicalDecodingProcessRecord (ctx=0x2b676d0,\n> record=0x2b67990) at decode.c:121\n> #9 0x0000000000b70b2b in XLogSendLogical () at walsender.c:2845\n>\n> From initial analysis it looks like:\n> Issue1 it seems like if all the reorderbuffer has been flushed and\n> then the server restarts. This problem occurs.\n> Issue 2 it seems like if there are many subtransactions present and\n> then the server restarts. This problem occurs. The subtransaction's\n> final_lsn is not being set and when ReorderBufferRestoreCleanup is\n> called the assert fails. 
May be for this we might have to set the\n> subtransaction's final_lsn before cleanup(not sure).\n>\n> I could not reproduce this issue consistently with a test case, But I\n> felt this looks like a problem from review.\n>\n> For issue1, I could reproduce by the following steps:\n> 1) Change ReorderBufferCheckSerializeTXN so that it gets flushed always.\n> 2) Have many open transactions with subtransactions open.\n> 3) Attach one of the transaction from gdb and call abort().\n\nDo you need subtransactions for the issue1? It appears that after the\nrestart if the changes list is empty it will hit the assert. Am I\nmissing something?\n\n>\n> I'm not sure of the fix for this. If I get time I will try to spend\n> more time to find out the fix.\n> Thoughts?\n>\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Nov 2019 17:40:52 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Wed, Nov 6, 2019 at 5:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Nov 6, 2019 at 5:20 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I found couple of crashes in reorderbuffer while review/testing of\n> > logical_work_mem and logical streaming of large in-progress\n> > transactions. Stack trace of the same are given below:\n> > Issue 1:\n> > #0 0x00007f985c7d8337 in raise () from /lib64/libc.so.6\n> > #1 0x00007f985c7d9a28 in abort () from /lib64/libc.so.6\n> > #2 0x0000000000ec514d in ExceptionalCondition\n> > (conditionName=0x10eab34 \"!dlist_is_empty(head)\", errorType=0x10eab24\n> > \"FailedAssertion\",\n> > fileName=0x10eab00 \"../../../../src/include/lib/ilist.h\",\n> > lineNumber=458) at assert.c:54\n> > #3 0x0000000000b4fd13 in dlist_tail_element_off (head=0x338fe60,\n> > off=48) at ../../../../src/include/lib/ilist.h:458\n> > #4 0x0000000000b547b7 in ReorderBufferAbortOld (rb=0x32ae7a0,\n> > oldestRunningXid=895) at reorderbuffer.c:1910\n> > #5 0x0000000000b3cb5e in DecodeStandbyOp (ctx=0x33424b0,\n> > buf=0x7fff7e7b1e40) at decode.c:332\n> > #6 0x0000000000b3c363 in LogicalDecodingProcessRecord (ctx=0x33424b0,\n> > record=0x3342770) at decode.c:121\n> > #7 0x0000000000b704b2 in XLogSendLogical () at walsender.c:2845\n> > #8 0x0000000000b6e9f8 in WalSndLoop (send_data=0xb7038b\n> > <XLogSendLogical>) at walsender.c:2199\n> > #9 0x0000000000b6bbf5 in StartLogicalReplication (cmd=0x33167a8) at\n> > walsender.c:1128\n> > #10 0x0000000000b6ce83 in exec_replication_command\n> > (cmd_string=0x328a0a0 \"START_REPLICATION SLOT \\\"sub1\\\" LOGICAL 0/0\n> > (proto_version '1', publication_names '\\\"pub1\\\"')\")\n> > at walsender.c:1545\n> > #11 0x0000000000c39f85 in PostgresMain (argc=1, argv=0x32b51c0,\n> > dbname=0x32b50e0 \"testdb\", username=0x32b50c0 \"user1\") at\n> > postgres.c:4256\n> > #12 0x0000000000b10dc7 in BackendRun (port=0x32ad890) at postmaster.c:4498\n> > #13 0x0000000000b0ff3e in 
BackendStartup (port=0x32ad890) at postmaster.c:4189\n> > #14 0x0000000000b08505 in ServerLoop () at postmaster.c:1727\n> > #15 0x0000000000b0781a in PostmasterMain (argc=3, argv=0x3284cb0) at\n> > postmaster.c:1400\n> > #16 0x000000000097492d in main (argc=3, argv=0x3284cb0) at main.c:210\n> >\n> > Issue 2:\n> > #0 0x00007f1d7ddc4337 in raise () from /lib64/libc.so.6\n> > #1 0x00007f1d7ddc5a28 in abort () from /lib64/libc.so.6\n> > #2 0x0000000000ec4e1d in ExceptionalCondition\n> > (conditionName=0x10ead30 \"txn->final_lsn != InvalidXLogRecPtr\",\n> > errorType=0x10ea284 \"FailedAssertion\",\n> > fileName=0x10ea2d0 \"reorderbuffer.c\", lineNumber=3052) at assert.c:54\n> > #3 0x0000000000b577e0 in ReorderBufferRestoreCleanup (rb=0x2ae36b0,\n> > txn=0x2bafb08) at reorderbuffer.c:3052\n> > #4 0x0000000000b52b1c in ReorderBufferCleanupTXN (rb=0x2ae36b0,\n> > txn=0x2bafb08) at reorderbuffer.c:1318\n> > #5 0x0000000000b5279d in ReorderBufferCleanupTXN (rb=0x2ae36b0,\n> > txn=0x2b9d778) at reorderbuffer.c:1257\n> > #6 0x0000000000b5475c in ReorderBufferAbortOld (rb=0x2ae36b0,\n> > oldestRunningXid=3835) at reorderbuffer.c:1973\n> > #7 0x0000000000b3ca03 in DecodeStandbyOp (ctx=0x2b676d0,\n> > buf=0x7ffcbc74cc00) at decode.c:332\n> > #8 0x0000000000b3c208 in LogicalDecodingProcessRecord (ctx=0x2b676d0,\n> > record=0x2b67990) at decode.c:121\n> > #9 0x0000000000b70b2b in XLogSendLogical () at walsender.c:2845\n> >\n> > From initial analysis it looks like:\n> > Issue1 it seems like if all the reorderbuffer has been flushed and\n> > then the server restarts. This problem occurs.\n> > Issue 2 it seems like if there are many subtransactions present and\n> > then the server restarts. This problem occurs. The subtransaction's\n> > final_lsn is not being set and when ReorderBufferRestoreCleanup is\n> > called the assert fails. 
May be for this we might have to set the\n> > subtransaction's final_lsn before cleanup(not sure).\n> >\n> > I could not reproduce this issue consistently with a test case, But I\n> > felt this looks like a problem from review.\n> >\n> > For issue1, I could reproduce by the following steps:\n> > 1) Change ReorderBufferCheckSerializeTXN so that it gets flushed always.\n> > 2) Have many open transactions with subtransactions open.\n> > 3) Attach one of the transaction from gdb and call abort().\n>\n> Do you need subtransactions for the issue1? It appears that after the\n> restart if the changes list is empty it will hit the assert. Am I\n> missing something?\n>\n\nWhen I first reported this issue I reproduced it with\nsub-transactions. I have now tried without sub-transactions and can\nstill reproduce it. You are right: issue 1 will appear in both cases,\nwith and without subtransactions.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Nov 2019 09:55:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Thu, Nov 7, 2019 at 9:55 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Nov 6, 2019 at 5:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Nov 6, 2019 at 5:20 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > I found couple of crashes in reorderbuffer while review/testing of\n> > > logical_work_mem and logical streaming of large in-progress\n> > > transactions. Stack trace of the same are given below:\n> > > Issue 1:\n> > > #0 0x00007f985c7d8337 in raise () from /lib64/libc.so.6\n> > > #1 0x00007f985c7d9a28 in abort () from /lib64/libc.so.6\n> > > #2 0x0000000000ec514d in ExceptionalCondition\n> > > (conditionName=0x10eab34 \"!dlist_is_empty(head)\", errorType=0x10eab24\n> > > \"FailedAssertion\",\n> > > fileName=0x10eab00 \"../../../../src/include/lib/ilist.h\",\n> > > lineNumber=458) at assert.c:54\n> > > #3 0x0000000000b4fd13 in dlist_tail_element_off (head=0x338fe60,\n> > > off=48) at ../../../../src/include/lib/ilist.h:458\n> > > #4 0x0000000000b547b7 in ReorderBufferAbortOld (rb=0x32ae7a0,\n> > > oldestRunningXid=895) at reorderbuffer.c:1910\n> > > #5 0x0000000000b3cb5e in DecodeStandbyOp (ctx=0x33424b0,\n> > > buf=0x7fff7e7b1e40) at decode.c:332\n> > > #6 0x0000000000b3c363 in LogicalDecodingProcessRecord (ctx=0x33424b0,\n> > > record=0x3342770) at decode.c:121\n> > > #7 0x0000000000b704b2 in XLogSendLogical () at walsender.c:2845\n> > > #8 0x0000000000b6e9f8 in WalSndLoop (send_data=0xb7038b\n> > > <XLogSendLogical>) at walsender.c:2199\n> > > #9 0x0000000000b6bbf5 in StartLogicalReplication (cmd=0x33167a8) at\n> > > walsender.c:1128\n> > > #10 0x0000000000b6ce83 in exec_replication_command\n> > > (cmd_string=0x328a0a0 \"START_REPLICATION SLOT \\\"sub1\\\" LOGICAL 0/0\n> > > (proto_version '1', publication_names '\\\"pub1\\\"')\")\n> > > at walsender.c:1545\n> > > #11 0x0000000000c39f85 in PostgresMain (argc=1, argv=0x32b51c0,\n> > > dbname=0x32b50e0 \"testdb\", username=0x32b50c0 
\"user1\") at\n> > > postgres.c:4256\n> > > #12 0x0000000000b10dc7 in BackendRun (port=0x32ad890) at postmaster.c:4498\n> > > #13 0x0000000000b0ff3e in BackendStartup (port=0x32ad890) at postmaster.c:4189\n> > > #14 0x0000000000b08505 in ServerLoop () at postmaster.c:1727\n> > > #15 0x0000000000b0781a in PostmasterMain (argc=3, argv=0x3284cb0) at\n> > > postmaster.c:1400\n> > > #16 0x000000000097492d in main (argc=3, argv=0x3284cb0) at main.c:210\n> > >\n> > > Issue 2:\n> > > #0 0x00007f1d7ddc4337 in raise () from /lib64/libc.so.6\n> > > #1 0x00007f1d7ddc5a28 in abort () from /lib64/libc.so.6\n> > > #2 0x0000000000ec4e1d in ExceptionalCondition\n> > > (conditionName=0x10ead30 \"txn->final_lsn != InvalidXLogRecPtr\",\n> > > errorType=0x10ea284 \"FailedAssertion\",\n> > > fileName=0x10ea2d0 \"reorderbuffer.c\", lineNumber=3052) at assert.c:54\n> > > #3 0x0000000000b577e0 in ReorderBufferRestoreCleanup (rb=0x2ae36b0,\n> > > txn=0x2bafb08) at reorderbuffer.c:3052\n> > > #4 0x0000000000b52b1c in ReorderBufferCleanupTXN (rb=0x2ae36b0,\n> > > txn=0x2bafb08) at reorderbuffer.c:1318\n> > > #5 0x0000000000b5279d in ReorderBufferCleanupTXN (rb=0x2ae36b0,\n> > > txn=0x2b9d778) at reorderbuffer.c:1257\n> > > #6 0x0000000000b5475c in ReorderBufferAbortOld (rb=0x2ae36b0,\n> > > oldestRunningXid=3835) at reorderbuffer.c:1973\n> > > #7 0x0000000000b3ca03 in DecodeStandbyOp (ctx=0x2b676d0,\n> > > buf=0x7ffcbc74cc00) at decode.c:332\n> > > #8 0x0000000000b3c208 in LogicalDecodingProcessRecord (ctx=0x2b676d0,\n> > > record=0x2b67990) at decode.c:121\n> > > #9 0x0000000000b70b2b in XLogSendLogical () at walsender.c:2845\n> > >\n> > > From initial analysis it looks like:\n> > > Issue1 it seems like if all the reorderbuffer has been flushed and\n> > > then the server restarts. This problem occurs.\n> > > Issue 2 it seems like if there are many subtransactions present and\n> > > then the server restarts. This problem occurs. 
The subtransaction's\n> > > final_lsn is not being set and when ReorderBufferRestoreCleanup is\n> > > called the assert fails. May be for this we might have to set the\n> > > subtransaction's final_lsn before cleanup(not sure).\n> > >\n> > > I could not reproduce this issue consistently with a test case, But I\n> > > felt this looks like a problem from review.\n> > >\n> > > For issue1, I could reproduce by the following steps:\n> > > 1) Change ReorderBufferCheckSerializeTXN so that it gets flushed always.\n> > > 2) Have many open transactions with subtransactions open.\n> > > 3) Attach one of the transaction from gdb and call abort().\n> >\n> > Do you need subtransactions for the issue1? It appears that after the\n> > restart if the changes list is empty it will hit the assert. Am I\n> > missing something?\n> >\n>\n> When I had reported this issue I could reproduce this issue with\n> sub-transactions. Now I have tried without using sub-transactions and\n> could still reproduce this issue. You are right Issue 1 will appear in\n> both the cases with and without subtransactions.\n\nOkay, thanks for the confirmation.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Nov 2019 11:01:17 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Thu, Nov 07, 2019 at 11:01:17AM +0530, Dilip Kumar wrote:\n>On Thu, Nov 7, 2019 at 9:55 AM vignesh C <vignesh21@gmail.com> wrote:\n>>\n>> On Wed, Nov 6, 2019 at 5:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> >\n>> > On Wed, Nov 6, 2019 at 5:20 PM vignesh C <vignesh21@gmail.com> wrote:\n>> > >\n>> > > Hi,\n>> > >\n>> > > ...\n>> > >\n>> > > Issue1 it seems like if all the reorderbuffer has been flushed and\n>> > > then the server restarts. This problem occurs.\n>> > > Issue 2 it seems like if there are many subtransactions present and\n>> > > then the server restarts. This problem occurs. The subtransaction's\n>> > > final_lsn is not being set and when ReorderBufferRestoreCleanup is\n>> > > called the assert fails. May be for this we might have to set the\n>> > > subtransaction's final_lsn before cleanup(not sure).\n>> > >\n>> > > I could not reproduce this issue consistently with a test case, But I\n>> > > felt this looks like a problem from review.\n>> > >\n>> > > For issue1, I could reproduce by the following steps:\n>> > > 1) Change ReorderBufferCheckSerializeTXN so that it gets flushed always.\n>> > > 2) Have many open transactions with subtransactions open.\n>> > > 3) Attach one of the transaction from gdb and call abort().\n>> >\n>> > Do you need subtransactions for the issue1? It appears that after the\n>> > restart if the changes list is empty it will hit the assert. Am I\n>> > missing something?\n>> >\n>>\n>> When I had reported this issue I could reproduce this issue with\n>> sub-transactions. Now I have tried without using sub-transactions and\n>> could still reproduce this issue. You are right Issue 1 will appear in\n>> both the cases with and without subtransactions.\n>\n>Okay, thanks for the confirmation.\n>\n\nI'm a bit confused - does this happen only with the logical_work_mem\npatches, or with clean master too? 
If only with the patches, which\nversion exactly?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 7 Nov 2019 12:18:23 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Thu, Nov 7, 2019 at 4:48 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> I'm a bit confused - does this happen only with the logical_work_mem\n> patches, or with clean master too?\n>\n\nThis occurs with the clean master. This is a base code problem\nrevealed while doing stress testing of logical_work_mem patches.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Nov 2019 17:03:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-07 17:03:44 +0530, Amit Kapila wrote:\n> On Thu, Nov 7, 2019 at 4:48 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> > I'm a bit confused - does this happen only with the logical_work_mem\n> > patches, or with clean master too?\n> >\n> \n> This occurs with the clean master. This is a base code problem\n> revealed while doing stress testing of logical_work_mem patches.\n\nAs far as I can tell there are no repro steps included? Any chance to\nget those?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 7 Nov 2019 08:31:23 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Thu, Nov 7, 2019 at 10:01 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-11-07 17:03:44 +0530, Amit Kapila wrote:\n> > On Thu, Nov 7, 2019 at 4:48 PM Tomas Vondra\n> > <tomas.vondra@2ndquadrant.com> wrote:\n> > >\n> > > I'm a bit confused - does this happen only with the logical_work_mem\n> > > patches, or with clean master too?\n> > >\n> >\n> > This occurs with the clean master. This is a base code problem\n> > revealed while doing stress testing of logical_work_mem patches.\n>\n> As far as I can tell there are no repro steps included? Any chance to\n> get those?\n>\n\nThis problem does not occur consistently. When I was reviewing and testing\n\"logical streaming for large in-progress transactions\" link [1] I found the\ncrashes.\n\nThis issue does not occur directly, meaning this issue will occur only when\nsome crash occurs in postgres process(not from reorderbuffer but due to\nsome other issue), after the original non-reorderbuffer crash this\nreorderbuffer crash appears.\n\nTo simplify the reorderbuffer crash, I used the following steps:\n1) Make replication setup with publisher/subscriber for some table\n2) Prepare a sql file with the below:\nbegin;\n4096 insert statements;\nselect pg_sleep(120)\n3) Execute the above script.\n4) Attach the postgres process when pg_sleep is in progress.\n5) call abort() from attached gdb.\n6) After sometime there will be many core files in publisher installation\ndata directory.\n\n[1] https://commitfest.postgresql.org/25/1927/\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Thu, Nov 7, 2019 at 10:01 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-11-07 17:03:44 +0530, Amit Kapila wrote:\n> > On Thu, Nov 7, 2019 at 4:48 PM Tomas Vondra\n> > <tomas.vondra@2ndquadrant.com> wrote:\n> > >\n> > > I'm a bit confused - does this happen only with the logical_work_mem\n> > > patches, or with clean master too?\n> > >\n> >\n> > This occurs with the clean 
master. This is a base code problem\n> > revealed while doing stress testing of logical_work_mem patches.\n>\n> As far as I can tell there are no repro steps included? Any chance to\n> get those?\n>\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Fri, 8 Nov 2019 10:04:56 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 10:05 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Nov 7, 2019 at 10:01 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2019-11-07 17:03:44 +0530, Amit Kapila wrote:\n> > > On Thu, Nov 7, 2019 at 4:48 PM Tomas Vondra\n> > > <tomas.vondra@2ndquadrant.com> wrote:\n> > > >\n> > > > I'm a bit confused - does this happen only with the logical_work_mem\n> > > > patches, or with clean master too?\n> > > >\n> > >\n> > > This occurs with the clean master. This is a base code problem\n> > > revealed while doing stress testing of logical_work_mem patches.\n> >\n> > As far as I can tell there are no repro steps included? Any chance to\n> > get those?\n> >\n\nI think it will be bit tricky to write a test case, but the callstack\nindicates the problem. For ex. for issue-1, see the below part of\ncallstack,\n\n#2 0x0000000000ec514d in ExceptionalCondition\n(conditionName=0x10eab34 \"!dlist_is_empty(head)\", errorType=0x10eab24\n\"FailedAssertion\",\n fileName=0x10eab00 \"../../../../src/include/lib/ilist.h\",\nlineNumber=458) at assert.c:54\n#3 0x0000000000b4fd13 in dlist_tail_element_off (head=0x338fe60,\noff=48) at ../../../../src/include/lib/ilist.h:458\n#4 0x0000000000b547b7 in ReorderBufferAbortOld (rb=0x32ae7a0,\noldestRunningXid=895) at reorderbuffer.c:1910\n#5 0x0000000000b3cb5e in DecodeStandbyOp (ctx=0x33424b0,\nbuf=0x7fff7e7b1e40) at decode.c:332\n\nThe Assertion failure indicates that the changes list is empty when we\nare trying to get 'lsn' of the last change. I think this is possible\nwhen immediately after serializing the transaction changes the server\ngot restarted (say it crashed or somebody switched off and restarted\nit). The reason is that after serializing the changes, the changes\nlist will be empty and serialized flag for txn will be set. 
I am not\nif there is any reason to set the final_lsn if the changes list is\nempty.\n\nSimilarly, if we see the call stack of issue-2, the problem is clear.\n\n#2 0x0000000000ec4e1d in ExceptionalCondition\n(conditionName=0x10ead30 \"txn->final_lsn != InvalidXLogRecPtr\",\nerrorType=0x10ea284 \"FailedAssertion\",\n fileName=0x10ea2d0 \"reorderbuffer.c\", lineNumber=3052) at assert.c:54\n#3 0x0000000000b577e0 in ReorderBufferRestoreCleanup (rb=0x2ae36b0,\ntxn=0x2bafb08) at reorderbuffer.c:3052\n#4 0x0000000000b52b1c in ReorderBufferCleanupTXN (rb=0x2ae36b0,\ntxn=0x2bafb08) at reorderbuffer.c:1318\n#5 0x0000000000b5279d in ReorderBufferCleanupTXN (rb=0x2ae36b0,\ntxn=0x2b9d778) at reorderbuffer.c:1257\n#6 0x0000000000b5475c in ReorderBufferAbortOld (rb=0x2ae36b0,\noldestRunningXid=3835) at reorderbuffer.c:1973\n\nI think this also has a symptom similar to the prior issue but for\nsub-transactions. The ReorderBufferAbortOld() tries to set the\nfinal_lsn of toplevel transaction, but not for subtransactions, later\nin that path ReorderBufferRestoreCleanup expects it to be set even for\nsubtransaction. Is that Assert in ReorderBufferRestoreCleanup()\nrequired, because immediately after assert, we are anyway setting the\nvalue of final_lsn.\n\n>\n> This problem does not occur consistently. 
When I was reviewing and testing \"logical streaming for large in-progress transactions\" link [1] I found the crashes.\n>\n> This issue does not occur directly, meaning this issue will occur only when some crash occurs in postgres process(not from reorderbuffer but due to some other issue), after the original non-reorderbuffer crash this reorderbuffer crash appears.\n>\n> To simplify the reorderbuffer crash, I used the following steps:\n> 1) Make replication setup with publisher/subscriber for some table\n> 2) Prepare a sql file with the below:\n> begin;\n> 4096 insert statements;\n> select pg_sleep(120)\n> 3) Execute the above script.\n> 4) Attach the postgres process when pg_sleep is in progress.\n> 5) call abort() from attached gdb.\n>\n\nIsn't it important to call this abort immediately after the changes\nare serialized as explained above?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 9 Nov 2019 17:07:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Sat, Nov 9, 2019 at 5:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Nov 8, 2019 at 10:05 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Thu, Nov 7, 2019 at 10:01 PM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2019-11-07 17:03:44 +0530, Amit Kapila wrote:\n> > > > On Thu, Nov 7, 2019 at 4:48 PM Tomas Vondra\n> > > > <tomas.vondra@2ndquadrant.com> wrote:\n> > > > >\n> > > > > I'm a bit confused - does this happen only with the logical_work_mem\n> > > > > patches, or with clean master too?\n> > > > >\n> > > >\n> > > > This occurs with the clean master. This is a base code problem\n> > > > revealed while doing stress testing of logical_work_mem patches.\n> > >\n> > > As far as I can tell there are no repro steps included? Any chance to\n> > > get those?\n> > >\n>\n> I think it will be bit tricky to write a test case, but the callstack\n> indicates the problem. For ex. for issue-1, see the below part of\n> callstack,\n>\n> #2 0x0000000000ec514d in ExceptionalCondition\n> (conditionName=0x10eab34 \"!dlist_is_empty(head)\", errorType=0x10eab24\n> \"FailedAssertion\",\n> fileName=0x10eab00 \"../../../../src/include/lib/ilist.h\",\n> lineNumber=458) at assert.c:54\n> #3 0x0000000000b4fd13 in dlist_tail_element_off (head=0x338fe60,\n> off=48) at ../../../../src/include/lib/ilist.h:458\n> #4 0x0000000000b547b7 in ReorderBufferAbortOld (rb=0x32ae7a0,\n> oldestRunningXid=895) at reorderbuffer.c:1910\n> #5 0x0000000000b3cb5e in DecodeStandbyOp (ctx=0x33424b0,\n> buf=0x7fff7e7b1e40) at decode.c:332\n>\n> The Assertion failure indicates that the changes list is empty when we\n> are trying to get 'lsn' of the last change. I think this is possible\n> when immediately after serializing the transaction changes the server\n> got restarted (say it crashed or somebody switched off and restarted\n> it). 
The reason is that after serializing the changes, the changes\n> list will be empty and the serialized flag for the txn will be set. I am not sure\n> if there is any reason to set the final_lsn if the changes list is\n> empty.\n>\n> Similarly, if we see the call stack of issue-2, the problem is clear.\n>\n> #2 0x0000000000ec4e1d in ExceptionalCondition\n> (conditionName=0x10ead30 \"txn->final_lsn != InvalidXLogRecPtr\",\n> errorType=0x10ea284 \"FailedAssertion\",\n> fileName=0x10ea2d0 \"reorderbuffer.c\", lineNumber=3052) at assert.c:54\n> #3 0x0000000000b577e0 in ReorderBufferRestoreCleanup (rb=0x2ae36b0,\n> txn=0x2bafb08) at reorderbuffer.c:3052\n> #4 0x0000000000b52b1c in ReorderBufferCleanupTXN (rb=0x2ae36b0,\n> txn=0x2bafb08) at reorderbuffer.c:1318\n> #5 0x0000000000b5279d in ReorderBufferCleanupTXN (rb=0x2ae36b0,\n> txn=0x2b9d778) at reorderbuffer.c:1257\n> #6 0x0000000000b5475c in ReorderBufferAbortOld (rb=0x2ae36b0,\n> oldestRunningXid=3835) at reorderbuffer.c:1973\n>\n> I think this also has a symptom similar to the prior issue but for\n> sub-transactions. ReorderBufferAbortOld() tries to set the\n> final_lsn of the toplevel transaction, but not for subtransactions; later\n> in that path ReorderBufferRestoreCleanup expects it to be set even for\n> subtransactions. Is that Assert in ReorderBufferRestoreCleanup()\n> required, given that immediately after the assert we set the\n> value of final_lsn anyway?\n>\n\nThanks, Amit, for the further analysis of the issues.\n\n> >\n> > This problem does not occur consistently. 
When I was reviewing and testing \"logical streaming for large in-progress transactions\" link [1] I found the crashes.\n> >\n> > This issue does not occur directly, meaning this issue will occur only when some crash occurs in postgres process(not from reorderbuffer but due to some other issue), after the original non-reorderbuffer crash this reorderbuffer crash appears.\n> >\n> > To simplify the reorderbuffer crash, I used the following steps:\n> > 1) Make replication setup with publisher/subscriber for some table\n> > 2) Prepare a sql file with the below:\n> > begin;\n> > 4096 insert statements;\n> > select pg_sleep(120)\n> > 3) Execute the above script.\n> > 4) Attach the postgres process when pg_sleep is in progress.\n> > 5) call abort() from attached gdb.\n> >\n>\n> Isn't it important to call this abort immediately after the changes\n> are serialized as explained above?\n>\n\nAs we are performing exactly 4096 insert statements on a fresh\nreplication setup it will get serialized immediately after 4096\ninserts. The variable max_changes_in_memory is initialized with 4096\n(reorderbuffer.c file), txn->nentries_mem gets incremented for every\ninsert statement in ReorderBufferQueueChange function. As we have\nexecuted 4096 insert statements, txn->nentries_mem will become 4096\nwhich results in the serializing from ReorderBufferCheckSerializeTXN\nfunction. Then attaching gdb and call abort() helped in reproducing\nthe issue consistently.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 10 Nov 2019 16:48:01 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "> I found couple of crashes in reorderbuffer while review/testing of\n> logical_work_mem and logical streaming of large in-progress\n> transactions. Stack trace of the same are given below:\n> Issue 1:\n> #0 0x00007f985c7d8337 in raise () from /lib64/libc.so.6\n> #1 0x00007f985c7d9a28 in abort () from /lib64/libc.so.6\n> #2 0x0000000000ec514d in ExceptionalCondition\n> (conditionName=0x10eab34 \"!dlist_is_empty(head)\", errorType=0x10eab24\n> \"FailedAssertion\",\n> fileName=0x10eab00 \"../../../../src/include/lib/ilist.h\",\n> lineNumber=458) at assert.c:54\n> #3 0x0000000000b4fd13 in dlist_tail_element_off (head=0x338fe60,\n> off=48) at ../../../../src/include/lib/ilist.h:458\n> #4 0x0000000000b547b7 in ReorderBufferAbortOld (rb=0x32ae7a0,\n> oldestRunningXid=895) at reorderbuffer.c:1910\n> #5 0x0000000000b3cb5e in DecodeStandbyOp (ctx=0x33424b0,\n> buf=0x7fff7e7b1e40) at decode.c:332\n> #6 0x0000000000b3c363 in LogicalDecodingProcessRecord (ctx=0x33424b0,\n> record=0x3342770) at decode.c:121\n> #7 0x0000000000b704b2 in XLogSendLogical () at walsender.c:2845\n> #8 0x0000000000b6e9f8 in WalSndLoop (send_data=0xb7038b\n> <XLogSendLogical>) at walsender.c:2199\n> #9 0x0000000000b6bbf5 in StartLogicalReplication (cmd=0x33167a8) at\n> walsender.c:1128\n> #10 0x0000000000b6ce83 in exec_replication_command\n> (cmd_string=0x328a0a0 \"START_REPLICATION SLOT \\\"sub1\\\" LOGICAL 0/0\n> (proto_version '1', publication_names '\\\"pub1\\\"')\")\n> at walsender.c:1545\n> #11 0x0000000000c39f85 in PostgresMain (argc=1, argv=0x32b51c0,\n> dbname=0x32b50e0 \"testdb\", username=0x32b50c0 \"user1\") at\n> postgres.c:4256\n> #12 0x0000000000b10dc7 in BackendRun (port=0x32ad890) at postmaster.c:4498\n> #13 0x0000000000b0ff3e in BackendStartup (port=0x32ad890) at postmaster.c:4189\n> #14 0x0000000000b08505 in ServerLoop () at postmaster.c:1727\n> #15 0x0000000000b0781a in PostmasterMain (argc=3, argv=0x3284cb0) at\n> postmaster.c:1400\n> #16 0x000000000097492d in main 
(argc=3, argv=0x3284cb0) at main.c:210\n>\n> Issue 2:\n> #0 0x00007f1d7ddc4337 in raise () from /lib64/libc.so.6\n> #1 0x00007f1d7ddc5a28 in abort () from /lib64/libc.so.6\n> #2 0x0000000000ec4e1d in ExceptionalCondition\n> (conditionName=0x10ead30 \"txn->final_lsn != InvalidXLogRecPtr\",\n> errorType=0x10ea284 \"FailedAssertion\",\n> fileName=0x10ea2d0 \"reorderbuffer.c\", lineNumber=3052) at assert.c:54\n> #3 0x0000000000b577e0 in ReorderBufferRestoreCleanup (rb=0x2ae36b0,\n> txn=0x2bafb08) at reorderbuffer.c:3052\n> #4 0x0000000000b52b1c in ReorderBufferCleanupTXN (rb=0x2ae36b0,\n> txn=0x2bafb08) at reorderbuffer.c:1318\n> #5 0x0000000000b5279d in ReorderBufferCleanupTXN (rb=0x2ae36b0,\n> txn=0x2b9d778) at reorderbuffer.c:1257\n> #6 0x0000000000b5475c in ReorderBufferAbortOld (rb=0x2ae36b0,\n> oldestRunningXid=3835) at reorderbuffer.c:1973\n> #7 0x0000000000b3ca03 in DecodeStandbyOp (ctx=0x2b676d0,\n> buf=0x7ffcbc74cc00) at decode.c:332\n> #8 0x0000000000b3c208 in LogicalDecodingProcessRecord (ctx=0x2b676d0,\n> record=0x2b67990) at decode.c:121\n> #9 0x0000000000b70b2b in XLogSendLogical () at walsender.c:2845\n>\n> From initial analysis it looks like:\n> Issue1 it seems like if all the reorderbuffer has been flushed and\n> then the server restarts. This problem occurs.\n> Issue 2 it seems like if there are many subtransactions present and\n> then the server restarts. This problem occurs. The subtransaction's\n> final_lsn is not being set and when ReorderBufferRestoreCleanup is\n> called the assert fails. 
May be for this we might have to set the\n> subtransaction's final_lsn before cleanup(not sure).\n>\n> I could not reproduce this issue consistently with a test case, But I\n> felt this looks like a problem from review.\n>\n> For issue1, I could reproduce by the following steps:\n> 1) Change ReorderBufferCheckSerializeTXN so that it gets flushed always.\n> 2) Have many open transactions with subtransactions open.\n> 3) Attach one of the transaction from gdb and call abort().\n>\n> I'm not sure of the fix for this. If I get time I will try to spend\n> more time to find out the fix.\n\nI have further analyzed the issue and found that:\nAfter abort is called, when the process restarts, it will clean the\nreorder information for the aborted transactions in\nReorderBufferAbortOld function. It crashes in the below code as there\nare no changes present currently and all the changes are serialized:\n.......\nif (txn->serialized && txn->final_lsn == 0))\n{\nReorderBufferChange *last =\ndlist_tail_element(ReorderBufferChange, node, &txn->changes);\n\ntxn->final_lsn = last->lsn;\n}\n.......\n\nIt sets the final_lsn here so that it can iterate from the start_lsn\nto final_lsn and cleanup the serialized files in\nReorderBufferRestoreCleanup function. One solution We were thinking\nwas to store the lsn of the last serialized change while serialiizing\nand set the final_lsn in the above case where it crashes like the\nbelow code:\n......\nif (txn->serialized && txn->final_lsn == 0 &&\n!dlist_is_empty(&txn->changes))\n{\nReorderBufferChange *last =\ndlist_tail_element(ReorderBufferChange, node, &txn->changes);\n\ntxn->final_lsn = last->lsn;\n}\nelse\n{\n/*\n* If there are no changes present as all of the changes were\n* serialized, use the last lsn that was serialized.\n*/\ntxn->final_lsn = txn->current_serialized_lsn;\n}\n......\n\nI have tested the same scenario and verified it to be working. The\npatch for the same is attached. 
I could not add a test case for this\nas it involves attaching to gdb and calling abort.\n\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 11 Dec 2019 11:13:04 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 11:13 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n>\n> It sets the final_lsn here so that it can iterate from the start_lsn\n> to final_lsn and cleanup the serialized files in\n> ReorderBufferRestoreCleanup function. One solution We were thinking\n> was to store the lsn of the last serialized change while serialiizing\n> and set the final_lsn in the above case where it crashes like the\n> below code:\n\nSure, we can do something on the lines what you are suggesting, but\nwhy can't we update final_lsn at the time of serializing the changes?\nIf we do that then we don't even need to compute it separately during\nReorderBufferAbortOld.\n\nLet me try to explain the problem and proposed solutions for the same.\nCurrently, after serializing the changes we remove the 'changes' from\nReorderBufferTXN. Now, if the system crashes due to any reason after\nthat, we won't be able to compute final_lsn after the restart. And\nthat leads to access violation in ReorderBufferAbortOld which is\ntrying to access changes list from ReorderBufferTXN to compute\nfinal_lsn.\n\nWe could fix it by either tracking 'last_serialized_change' as\nproposed by Vignesh or we could update the final_lsn while we\nserialize the changes.\n\nIIUC, commit df9f682c7bf81674b6ae3900fd0146f35df0ae2e [1] tried to fix\nsome related issue which leads to this another problem. Alvaro,\nAndres, do you have any suggestions?\n\n\n[1] -\ncommit df9f682c7bf81674b6ae3900fd0146f35df0ae2e\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Fri Jan 5 12:17:10 2018 -0300\n\n Fix failure to delete spill files of aborted transactions\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Dec 2019 14:32:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 2:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 11, 2019 at 11:13 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> >\n> > It sets the final_lsn here so that it can iterate from the start_lsn\n> > to final_lsn and cleanup the serialized files in\n> > ReorderBufferRestoreCleanup function. One solution We were thinking\n> > was to store the lsn of the last serialized change while serialiizing\n> > and set the final_lsn in the above case where it crashes like the\n> > below code:\n>\n> Sure, we can do something on the lines what you are suggesting, but\n> why can't we update final_lsn at the time of serializing the changes?\n> If we do that then we don't even need to compute it separately during\n> ReorderBufferAbortOld.\n>\n> Let me try to explain the problem and proposed solutions for the same.\n> Currently, after serializing the changes we remove the 'changes' from\n> ReorderBufferTXN. Now, if the system crashes due to any reason after\n> that, we won't be able to compute final_lsn after the restart. And\n> that leads to access violation in ReorderBufferAbortOld which is\n> trying to access changes list from ReorderBufferTXN to compute\n> final_lsn.\n>\n> We could fix it by either tracking 'last_serialized_change' as\n> proposed by Vignesh or we could update the final_lsn while we\n> serialize the changes.\n>\n\nI felt amit solution also solves the problem. Attached patch has the\nfix based on the solution proposed.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 27 Dec 2019 13:50:15 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On 2019-Dec-27, vignesh C wrote:\n\n> I felt amit solution also solves the problem. Attached patch has the\n> fix based on the solution proposed.\n> Thoughts?\n\nThis seems a sensible fix to me, though I didn't try to reproduce the\nfailure.\n\n> @@ -2472,6 +2457,7 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)\n> \t\t}\n> \n> \t\tReorderBufferSerializeChange(rb, txn, fd, change);\n> +\t\ttxn->final_lsn = change->lsn;\n> \t\tdlist_delete(&change->node);\n> \t\tReorderBufferReturnChange(rb, change);\n\nShould this be done insider ReorderBufferSerializeChange itself, instead\nof in its caller? Also, would it be sane to verify that the TXN\ndoesn't already have a newer final_lsn? Maybe as an Assert.\n\n> @@ -188,8 +188,7 @@ typedef struct ReorderBufferTXN\n> \t * * plain abort record\n> \t * * prepared transaction abort\n> \t * * error during decoding\n> -\t * * for a crashed transaction, the LSN of the last change, regardless of\n> -\t * what it was.\n> +\t * * last serialized lsn\n> \t * ----\n\nI propose \"for a transaction with serialized changes, the LSN of the\nlatest serialized one, unless one of the above cases.\"\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Dec 2019 12:07:06 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 8:37 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Dec-27, vignesh C wrote:\n>\n> > I felt amit solution also solves the problem. Attached patch has the\n> > fix based on the solution proposed.\n> > Thoughts?\n>\n> This seems a sensible fix to me, though I didn't try to reproduce the\n> failure.\n>\n> > @@ -2472,6 +2457,7 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)\n> > }\n> >\n> > ReorderBufferSerializeChange(rb, txn, fd, change);\n> > + txn->final_lsn = change->lsn;\n> > dlist_delete(&change->node);\n> > ReorderBufferReturnChange(rb, change);\n>\n> Should this be done insider ReorderBufferSerializeChange itself, instead\n> of in its caller?\n>\n\nmakes sense. But, I think we should add a comment specifying the\nreason why it is important to set final_lsn while serializing the\nchange.\n\n> Also, would it be sane to verify that the TXN\n> doesn't already have a newer final_lsn? Maybe as an Assert.\n>\n\nI don't think this is a good idea because we update the final_lsn with\ncommit_lsn in ReorderBufferCommit after which we can try to serialize\nthe remaining changes. Instead, we should update it only if the\nchange_lsn value is greater than final_lsn.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 30 Dec 2019 11:17:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Mon, Dec 30, 2019 at 11:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 27, 2019 at 8:37 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > On 2019-Dec-27, vignesh C wrote:\n> >\n> > > I felt amit solution also solves the problem. Attached patch has the\n> > > fix based on the solution proposed.\n> > > Thoughts?\n> >\n> > This seems a sensible fix to me, though I didn't try to reproduce the\n> > failure.\n> >\n> > > @@ -2472,6 +2457,7 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)\n> > > }\n> > >\n> > > ReorderBufferSerializeChange(rb, txn, fd, change);\n> > > + txn->final_lsn = change->lsn;\n> > > dlist_delete(&change->node);\n> > > ReorderBufferReturnChange(rb, change);\n> >\n> > Should this be done insider ReorderBufferSerializeChange itself, instead\n> > of in its caller?\n> >\n>\n> makes sense. But, I think we should add a comment specifying the\n> reason why it is important to set final_lsn while serializing the\n> change.\n\nFixed\n\n> > Also, would it be sane to verify that the TXN\n> > doesn't already have a newer final_lsn? Maybe as an Assert.\n> >\n>\n> I don't think this is a good idea because we update the final_lsn with\n> commit_lsn in ReorderBufferCommit after which we can try to serialize\n> the remaining changes. Instead, we should update it only if the\n> change_lsn value is greater than final_lsn.\n>\n\nFixed.\nThanks Alvaro & Amit for your suggestions. I have made the changes\nbased on your suggestions. Please find the updated patch for the same.\nI have also verified the patch in back branches. Separate patch was\nrequired for Release-10 branch, patch for the same is attached as\n0001-Reorder-buffer-crash-while-aborting-old-transactions-REL_10.patch.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 31 Dec 2019 11:35:38 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Tue, Dec 31, 2019 at 11:35 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Dec 30, 2019 at 11:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Dec 27, 2019 at 8:37 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > >\n> > > On 2019-Dec-27, vignesh C wrote:\n> > >\n> > > > I felt amit solution also solves the problem. Attached patch has the\n> > > > fix based on the solution proposed.\n> > > > Thoughts?\n> > >\n> > > This seems a sensible fix to me, though I didn't try to reproduce the\n> > > failure.\n> > >\n> > > > @@ -2472,6 +2457,7 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)\n> > > > }\n> > > >\n> > > > ReorderBufferSerializeChange(rb, txn, fd, change);\n> > > > + txn->final_lsn = change->lsn;\n> > > > dlist_delete(&change->node);\n> > > > ReorderBufferReturnChange(rb, change);\n> > >\n> > > Should this be done insider ReorderBufferSerializeChange itself, instead\n> > > of in its caller?\n> > >\n> >\n> > makes sense. But, I think we should add a comment specifying the\n> > reason why it is important to set final_lsn while serializing the\n> > change.\n>\n> Fixed\n>\n> > > Also, would it be sane to verify that the TXN\n> > > doesn't already have a newer final_lsn? Maybe as an Assert.\n> > >\n> >\n> > I don't think this is a good idea because we update the final_lsn with\n> > commit_lsn in ReorderBufferCommit after which we can try to serialize\n> > the remaining changes. Instead, we should update it only if the\n> > change_lsn value is greater than final_lsn.\n> >\n>\n> Fixed.\n> Thanks Alvaro & Amit for your suggestions. I have made the changes\n> based on your suggestions. Please find the updated patch for the same.\n> I have also verified the patch in back branches. Separate patch was\n> required for Release-10 branch, patch for the same is attached as\n> 0001-Reorder-buffer-crash-while-aborting-old-transactions-REL_10.patch.\n> Thoughts?\n\nOne minor comment. 
Otherwise, the patch looks fine to me.\n+ /*\n+ * We set final_lsn on a transaction when we decode its commit or abort\n+ * record, but we never see those records for crashed transactions. To\n+ * ensure cleanup of these transactions, set final_lsn to that of their\n+ * last change; this causes ReorderBufferRestoreCleanup to do the right\n+ * thing. Final_lsn would have been set with commit_lsn earlier when we\n+ * decode it commit, no need to update in that case\n+ */\n+ if (txn->final_lsn < change->lsn)\n+ txn->final_lsn = change->lsn;\n\n/decode it commit,/decode its commit,\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Jan 2020 09:17:18 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 9:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> One minor comment. Otherwise, the patch looks fine to me.\n> + /*\n> + * We set final_lsn on a transaction when we decode its commit or abort\n> + * record, but we never see those records for crashed transactions. To\n> + * ensure cleanup of these transactions, set final_lsn to that of their\n> + * last change; this causes ReorderBufferRestoreCleanup to do the right\n> + * thing. Final_lsn would have been set with commit_lsn earlier when we\n> + * decode it commit, no need to update in that case\n> + */\n> + if (txn->final_lsn < change->lsn)\n> + txn->final_lsn = change->lsn;\n>\n> /decode it commit,/decode its commit,\n>\n\nThanks Dilip for reviewing.\nI have fixed the comments you have suggested.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 17 Jan 2020 07:42:35 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Fri, Jan 17, 2020 at 7:42 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Jan 16, 2020 at 9:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > One minor comment. Otherwise, the patch looks fine to me.\n> > + /*\n> > + * We set final_lsn on a transaction when we decode its commit or abort\n> > + * record, but we never see those records for crashed transactions. To\n> > + * ensure cleanup of these transactions, set final_lsn to that of their\n> > + * last change; this causes ReorderBufferRestoreCleanup to do the right\n> > + * thing. Final_lsn would have been set with commit_lsn earlier when we\n> > + * decode it commit, no need to update in that case\n> > + */\n> > + if (txn->final_lsn < change->lsn)\n> > + txn->final_lsn = change->lsn;\n> >\n> > /decode it commit,/decode its commit,\n> >\n>\n> Thanks Dilip for reviewing.\n> I have fixed the comments you have suggested.\n>\nThanks for the updated patch. It looks fine to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 Jan 2020 08:43:28 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On 2020-Jan-17, vignesh C wrote:\n\n> Thanks Dilip for reviewing.\n> I have fixed the comments you have suggested.\n\nI ended up rewording that comment completely; I thought the original was\nnot explaining things well enough.\n\nI also changed the comment for final_lsn in reorderbuffer.h: not only I\nremove the line that I added in df9f682c7bf8 (the previous bugfix), but\nI also remove the line that says \"error during decoding\", which came in\nwith the very first logical decoding commit (b89e151054a); I couldn't\nfind any evidence that final_lsn is being set on errors of any kind\n(other than transaction abort, which doesn't seem an \"error\" in that\nsense.)\n\nPlease give these comments a read; maybe I have misunderstood something\nand my comment is wrong.\n\nPushed now to all branches. Thanks, Vignesh, Amit and Dilip.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 17 Jan 2020 18:12:16 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Sat, Jan 18, 2020 at 2:42 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Jan-17, vignesh C wrote:\n>\n> > Thanks Dilip for reviewing.\n> > I have fixed the comments you have suggested.\n>\n> I ended up rewording that comment completely; I thought the original was\n> not explaining things well enough.\n>\n> I also changed the comment for final_lsn in reorderbuffer.h: not only I\n> remove the line that I added in df9f682c7bf8 (the previous bugfix), but\n> I also remove the line that says \"error during decoding\", which came in\n> with the very first logical decoding commit (b89e151054a); I couldn't\n> find any evidence that final_lsn is being set on errors of any kind\n> (other than transaction abort, which doesn't seem an \"error\" in that\n> sense.)\n>\n> Please give these comments a read; maybe I have misunderstood something\n> and my comment is wrong.\n>\n\nThe comments added by you look correct and good to me. Thanks.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 18 Jan 2020 10:59:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reorderbuffer crash during recovery"
},
{
"msg_contents": "On Sat, Jan 18, 2020 at 2:42 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Jan-17, vignesh C wrote:\n>\n> > Thanks Dilip for reviewing.\n> > I have fixed the comments you have suggested.\n>\n> I ended up rewording that comment completely; I thought the original was\n> not explaining things well enough.\n>\n> I also changed the comment for final_lsn in reorderbuffer.h: not only I\n> remove the line that I added in df9f682c7bf8 (the previous bugfix), but\n> I also remove the line that says \"error during decoding\", which came in\n> with the very first logical decoding commit (b89e151054a); I couldn't\n> find any evidence that final_lsn is being set on errors of any kind\n> (other than transaction abort, which doesn't seem an \"error\" in that\n> sense.)\n>\n> Please give these comments a read; maybe I have misunderstood something\n> and my comment is wrong.\n>\n> Pushed now to all branches. Thanks, Vignesh, Amit and Dilip.\n>\n\nThanks Alvaro for pushing this patch.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 Jan 2020 06:06:39 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reorderbuffer crash during recovery"
}
] |
[
{
"msg_contents": "Hello\n\nIn several queries relying on views, I noticed that the optimizer miss a quite \nsimple to implement optimization. My views contain several branches, with \ndifferent paths that are simplified by the caller of the view. This \nsimplification is based on columns to be null or not.\n\nToday, even with a single table, the following (silly) query is not optimized \naway:\n\tSELECT * FROM test WHERE a IS NULL AND a IS NOT NULL;\n\nIn more complex cases, it of course isn't any better:\n\tSELECT * FROM (\n SELECT a, NULL::integer AS b FROM foo\n UNION ALL\n SELECT a, b FROM bar WHERE b IS NOT NULL\n\t) WHERE a = 1 AND b IS NULL;\n\nThe attached patch handles both situations. When flattening and simplifying \nthe AND clauses, a list of the NullChecks is built, and subsequent NullChecks \nare compared to the list. If opposite NullChecks on the same variable are \nfound, the whole AND is optimized away.\nThis lead to nice boosts, since instead of having 'never executed' branches, \nthe optimizer can go even further. Right now, the algorithmic complexity of \nthis optimization is not great: it is in O(n²), with n being the number of \nNullCheck in a given AND clause. But compared to the possible benefits, and \nthe very low risk of n being high enough to have a real planification-time \nimpact, I feel this optimization would be worth it.\n\n\nRegards\n\n Pierre",
"msg_date": "Wed, 06 Nov 2019 18:41:23 +0100",
"msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>",
"msg_from_op": true,
"msg_subject": "[Patch] optimizer - simplify $VAR1 IS NULL AND $VAR1 IS NOT NULL"
},
{
"msg_contents": ">>>>> \"Pierre\" == Pierre Ducroquet <p.psql@pinaraf.info> writes:\n\n Pierre> Hello\n\n Pierre> In several queries relying on views, I noticed that the\n Pierre> optimizer miss a quite simple to implement optimization. My\n Pierre> views contain several branches, with different paths that are\n Pierre> simplified by the caller of the view. This simplification is\n Pierre> based on columns to be null or not.\n\n Pierre> Today, even with a single table, the following (silly) query is\n Pierre> not optimized away:\n\n Pierre> \tSELECT * FROM test WHERE a IS NULL AND a IS NOT NULL;\n\nActually it can be, but only if you set constraint_exclusion=on (rather\nthan the default, 'partition').\n\npostgres=# explain select * from foo where id is null and id is not null;\n QUERY PLAN \n-----------------------------------------------------\n Seq Scan on foo (cost=0.00..35.50 rows=13 width=4)\n Filter: ((id IS NULL) AND (id IS NOT NULL))\n(2 rows)\n\npostgres=# set constraint_exclusion=on;\nSET\n\npostgres=# explain select * from foo where id is null and id is not null;\n QUERY PLAN \n------------------------------------------\n Result (cost=0.00..0.00 rows=0 width=0)\n One-Time Filter: false\n(2 rows)\n\nIn fact when constraint_exclusion=on, the planner should detect any case\nwhere some condition in the query refutes another condition. There is\nsome downside, though, which is why it's not enabled by default:\nplanning may take longer.\n\n Pierre> The attached patch handles both situations. When flattening and\n Pierre> simplifying the AND clauses, a list of the NullChecks is built,\n Pierre> and subsequent NullChecks are compared to the list. If opposite\n Pierre> NullChecks on the same variable are found, the whole AND is\n Pierre> optimized away.\n\nThat's all very well but it's very specific to a single use-case. The\nexisting code, when you enable it, can detect a whole range of possible\nrefutations (e.g. 
foo > 1 AND foo < 1).\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Wed, 06 Nov 2019 18:15:41 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] optimizer - simplify $VAR1 IS NULL AND $VAR1 IS NOT NULL"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Pierre\" == Pierre Ducroquet <p.psql@pinaraf.info> writes:\n> Pierre> The attached patch handles both situations. When flattening and\n> Pierre> simplifying the AND clauses, a list of the NullChecks is built,\n> Pierre> and subsequent NullChecks are compared to the list. If opposite\n> Pierre> NullChecks on the same variable are found, the whole AND is\n> Pierre> optimized away.\n\n> That's all very well but it's very specific to a single use-case. The\n> existing code, when you enable it, can detect a whole range of possible\n> refutations (e.g. foo > 1 AND foo < 1).\n\nYeah. Just for the record, if we were interested in taking a patch\nfor this purpose, simplify_and_arguments is a poor choice of where\nto do it anyway. That would only find contradictions between clauses\nthat were in the same expression at eval_const_expressions time, which\nis pretty early and will miss a lot of logically-equivalent situations\n(e.g. if one clause is in a JOIN...ON and the other is in WHERE).\nThe constraint exclusion code looks for contradictions between clauses\nthat have been pushed down to the same relation during jointree\ndeconstruction, ie they have the same set of referenced relations.\nThat would be a much better place for this type of logic.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Nov 2019 13:34:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] optimizer - simplify $VAR1 IS NULL AND $VAR1 IS NOT NULL"
}
] |
[
{
"msg_contents": "Having ExprContext is it possible to know which ExprState is linked with it ?",
"msg_date": "Wed, 6 Nov 2019 22:12:28 +0100",
"msg_from": "Andrzej Barszcz <abusinf@gmail.com>",
"msg_from_op": true,
"msg_subject": "question"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-06 22:12:28 +0100, Andrzej Barszcz wrote:\n> Having ExprContext is it possible to know which ExprState is linked with it\n\nNo. And there commonly are multiple ExprState evaluated using the same\nExprContext.\n\nWhat's the reason for you asking?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 6 Nov 2019 13:49:51 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: question"
},
{
"msg_contents": "Hi,\n\nPlease keep the discussion on the list... Also, on PG lists we try to\nquote inline, and trim messages nicely.\n\n\nOn 2019-11-06 23:03:06 +0100, Andrzej Barszcz wrote:\n> Your answer make me a bit confused. I thought that ExprState is in one to\n> one relation with ExprContext.\n\nThey're definitely not 1:1. We commonly evaluate both a node's qual and\nprojection using the same ExprContext, but there's nodes doing more than\nthose. E.g. looking at part of the nestloop code:\n\n...\nstatic TupleTableSlot *\nExecNestLoop(PlanState *pstate)\n{\n...\n\tExprContext *econtext;\n...\n\tecontext = node->js.ps.ps_ExprContext;\n\n...\n\t\tif (ExecQual(joinqual, econtext))\n\t\t{\n\t\t\tnode->nl_MatchedOuter = true;\n\n\t\t\t/* In an antijoin, we never return a matched tuple */\n\t\t\tif (node->js.jointype == JOIN_ANTI)\n\t\t\t{\n\t\t\t\tnode->nl_NeedNewOuter = true;\n\t\t\t\tcontinue;\t\t/* return to top of loop */\n\t\t\t}\n\n\t\t\t/*\n\t\t\t * If we only need to join to the first matching inner tuple, then\n\t\t\t * consider returning this one, but after that continue with next\n\t\t\t * outer tuple.\n\t\t\t */\n\t\t\tif (node->js.single_match)\n\t\t\t\tnode->nl_NeedNewOuter = true;\n\n\t\t\tif (otherqual == NULL || ExecQual(otherqual, econtext))\n\t\t\t{\n\t\t\t\t/*\n\t\t\t\t * qualification was satisfied so we project and return the\n\t\t\t\t * slot containing the result tuple using ExecProject().\n\t\t\t\t */\n\t\t\t\tENL1_printf(\"qualification succeeded, projecting tuple\");\n\n\t\t\t\treturn ExecProject(node->js.ps.ps_ProjInfo);\n\t\t\t}\n\t\t\telse\n\t\t\t\tInstrCountFiltered2(node, 1);\n\t\t}\n\t\telse\n\t\t\tInstrCountFiltered1(node, 1);\n...\n\n\nNestLoopState *\nExecInitNestLoop(NestLoop *node, EState *estate, int eflags)\n{\n...\n\tExecAssignExprContext(estate, &nlstate->js.ps);\n...\n\tExecAssignProjectionInfo(&nlstate->js.ps, NULL);\n...\n\nvoid\nExecAssignProjectionInfo(PlanState *planstate,\n\t\t\t\t\t\t TupleDesc 
inputDesc)\n{\n\tplanstate->ps_ProjInfo =\n\t\tExecBuildProjectionInfo(planstate->plan->targetlist,\n\t\t\t\t\t\t\t\tplanstate->ps_ExprContext,\n\t\t\t\t\t\t\t\tplanstate->ps_ResultTupleSlot,\n\t\t\t\t\t\t\t\tplanstate,\n\t\t\t\t\t\t\t\tinputDesc);\n}\n\nwe're executing two quals (joinqual and otherqual), and the projection\nusing the same qual. There's cases that do more than that.\n\n\nAn ExprContext basically just contains the references to the external\ndata that may be referenced by an expression - e.g. for a join the tuple\nfrom the outer and from the inner side of the join - and a memory\ncontext which is used to evaluate expressions without leaking memory.\nThere's no restriction about the number of different ExprState's may be\nexecuted within one such context, and other operations than just\nexpression evaluation may be executed using it.\n\n\n> I made a patch \"function calls optimization\" at last. Regression test\n> shows no failes.\n\nHard to comment on without seeing the current version of that patch. I\nassume this is the one from\nhttps://www.postgresql.org/message-id/CAOUVqAzyoEzvKbjipiS4J3JnTR8sY%2B-x%2BNPQhbq-B4Bmo1k%3DZA%40mail.gmail.com\n?\n\nPlease don't start separate threads for the same topic. This now is the\nthird thread.\n\n\n> The main difficulty was reset ExprContext in qual evaluation.\n\nYou cannot reset an expression context within qual evaluation, there\nvery well may be life references to that memory.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 6 Nov 2019 14:51:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: question"
}
] |
[
{
"msg_contents": "Every once in awhile we get failures like this one:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gull&dt=2019-11-05%2008%3A27%3A27\n\ndiff -U3 /home/pgsql/build-farm/buildroot-clang/HEAD/pgsql.build/../pgsql/src/test/regress/expected/vacuum.out /home/pgsql/build-farm/buildroot-clang/HEAD/pgsql.build/src/test/regress/results/vacuum.out\n--- /home/pgsql/build-farm/buildroot-clang/HEAD/pgsql.build/../pgsql/src/test/regress/expected/vacuum.out\t2019-08-11 03:02:18.921535948 -0700\n+++ /home/pgsql/build-farm/buildroot-clang/HEAD/pgsql.build/src/test/regress/results/vacuum.out\t2019-11-05 00:50:42.381244885 -0800\n@@ -204,6 +204,7 @@\n -- SKIP_LOCKED option\n VACUUM (SKIP_LOCKED) vactst;\n VACUUM (SKIP_LOCKED, FULL) vactst;\n+WARNING: skipping vacuum of \"vactst\" --- lock not available\n ANALYZE (SKIP_LOCKED) vactst;\n -- ensure VACUUM and ANALYZE don't have a problem with serializable\n SET default_transaction_isolation = serializable;\n\n\nNo doubt this is a conflict with autovacuum. There are two reasonable\nways to remove the test instability:\n\n* Crank up client_min_messages to more than WARNING for this test\nstanza.\n\n* Downgrade the \"skipping\" messages to DEBUG1 or less.\n\nI kind of wonder why we are issuing a \"WARNING\" when the statement\ndoes exactly what you asked it to, anyway. At most I'd expect\nthat to be a NOTICE condition.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Nov 2019 16:54:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "SKIP_LOCKED test causes random buildfarm failures"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-06 16:54:38 -0500, Tom Lane wrote:\n> Every once in awhile we get failures like this one:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gull&dt=2019-11-05%2008%3A27%3A27\n> \n> diff -U3 /home/pgsql/build-farm/buildroot-clang/HEAD/pgsql.build/../pgsql/src/test/regress/expected/vacuum.out /home/pgsql/build-farm/buildroot-clang/HEAD/pgsql.build/src/test/regress/results/vacuum.out\n> --- /home/pgsql/build-farm/buildroot-clang/HEAD/pgsql.build/../pgsql/src/test/regress/expected/vacuum.out\t2019-08-11 03:02:18.921535948 -0700\n> +++ /home/pgsql/build-farm/buildroot-clang/HEAD/pgsql.build/src/test/regress/results/vacuum.out\t2019-11-05 00:50:42.381244885 -0800\n> @@ -204,6 +204,7 @@\n> -- SKIP_LOCKED option\n> VACUUM (SKIP_LOCKED) vactst;\n> VACUUM (SKIP_LOCKED, FULL) vactst;\n> +WARNING: skipping vacuum of \"vactst\" --- lock not available\n> ANALYZE (SKIP_LOCKED) vactst;\n> -- ensure VACUUM and ANALYZE don't have a problem with serializable\n> SET default_transaction_isolation = serializable;\n> \n> \n> No doubt this is a conflict with autovacuum. There are two reasonable\n> ways to remove the test instability:\n\nI assume you consider disabling autovacuum for that table not a\nreasonable approach? Due to the danger that it could end up still\nrunning, e.g. due to anti-wraparound or such?\n\n\n> * Crank up client_min_messages to more than WARNING for this test\n> stanza.\n\nHm.\n\n\n> * Downgrade the \"skipping\" messages to DEBUG1 or less.\n> \n> I kind of wonder why we are issuing a \"WARNING\" when the statement\n> does exactly what you asked it to, anyway. At most I'd expect\n> that to be a NOTICE condition.\n\nI don't know what lead us to doing so, but it doesn't seem reasonable to\nallow the user to see whether the table has actually been vacuumed. I\nwould assume that one uses SKIP_LOCKED partially to avoid unnecessary\nimpacts in production due to other tasks starting to block on e.g. 
a\nVACUUM FULL, even though without the \"ordered queueing\" everything could\njust go on working fine. I'm not sure that indicates whether WARNING or\nNOTICE is the best choice.\n\nSo I'd be inclined to go with the client_min_messages approach?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 6 Nov 2019 15:01:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SKIP_LOCKED test causes random buildfarm failures"
},
{
"msg_contents": "On Wed, Nov 06, 2019 at 03:01:11PM -0800, Andres Freund wrote:\n> I don't know what lead us to doing so, but it doesn't seem reasonable to\n> allow the user to see whether the table has actually been vacuumed. I\n> would assume that one uses SKIP_LOCKED partially to avoid unnecessary\n> impacts in production due to other tasks starting to block on e.g. a\n> VACUUM FULL, even though without the \"ordered queueing\" everything could\n> just go on working fine. I'm not sure that indicates whether WARNING or\n> NOTICE is the best choice.\n\nGood question. That's a historical choice, still I have seen cases\nwhere those warnings are helpful while not making the logs too\nverbose to see some congestion in the jobs.\n\n> So I'd be inclined to go with the client_min_messages approach?\n\nThe main purpose of the tests in regress/ is to check after the\ngrammar, so using client_min_messages sounds like a plan. We have\na second set of tests in isolation/ where I would actually like to\ndisable autovacuum by default on a subset of tables. Thoughts about\nthe attached?\n--\nMichael",
"msg_date": "Thu, 7 Nov 2019 10:39:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: SKIP_LOCKED test causes random buildfarm failures"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Nov 06, 2019 at 03:01:11PM -0800, Andres Freund wrote:\n>> I don't know what lead us to doing so, but it doesn't seem reasonable to\n>> allow the user to see whether the table has actually been vacuumed. I\n>> would assume that one uses SKIP_LOCKED partially to avoid unnecessary\n>> impacts in production due to other tasks starting to block on e.g. a\n>> VACUUM FULL, even though without the \"ordered queueing\" everything could\n>> just go on working fine. I'm not sure that indicates whether WARNING or\n>> NOTICE is the best choice.\n\n> Good question. That's a historical choice, still I have seen cases\n> where those warnings are helpful while not making the logs too\n> verbose to see some congestion in the jobs.\n\nI kind of feel that NOTICE is more semantically appropriate, but\nperhaps there's an argument for keeping it at WARNING.\n\n>> So I'd be inclined to go with the client_min_messages approach?\n\n> The main purpose of the tests in regress/ is to check after the\n> grammar, so using client_min_messages sounds like a plan. We have\n> a second set of tests in isolation/ where I would actually like to\n> disable autovacuum by default on a subset of tables. Thoughts about\n> the attached?\n\nI do not want to fix this in the main tests by disabling autovacuum,\nbecause that'd actually reduce the tests' cross-section. The fact\nthat this happens occasionally is a Good Thing for verifying that the\ncode paths actually work. So it seems that there's a consensus on\nadjusting client_min_messages to fix the test output instability ---\nbut we need to agree on whether to do s/WARNING/NOTICE/ first, so we\ncan know what to set client_min_messages to.\n\nAs for the case in the isolation test, shouldn't we also use\nclient_min_messages there, rather than prevent the conflict\nfrom arising? Or would that test fail in some larger way if\nautovacuum gets in the way?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 07 Nov 2019 11:50:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SKIP_LOCKED test causes random buildfarm failures"
},
{
"msg_contents": "On Thu, Nov 07, 2019 at 11:50:25AM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> Good question. That's a historical choice, still I have seen cases\n>> where those warnings are helpful while not making the logs too\n>> verbose to see some congestion in the jobs.\n> \n> I kind of feel that NOTICE is more semantically appropriate, but\n> perhaps there's an argument for keeping it at WARNING.\n\nPerhaps. Well, that's the same level as the one used after the\npermission checks on the relation vacuumed.\n\n> I do not want to fix this in the main tests by disabling autovacuum,\n> because that'd actually reduce the tests' cross-section. The fact\n> that this happens occasionally is a Good Thing for verifying that the\n> code paths actually work. So it seems that there's a consensus on\n> adjusting client_min_messages to fix the test output instability ---\n> but we need to agree on whether to do s/WARNING/NOTICE/ first, so we\n> can know what to set client_min_messages to.\n\nMakes sense. \n\n> As for the case in the isolation test, shouldn't we also use\n> client_min_messages there, rather than prevent the conflict\n> from arising? Or would that test fail in some larger way if\n> autovacuum gets in the way?\n\nI think that there is no risk regarding the stability of the output\nbecause we use LOCK before from a first session on the relation to\nvacuum in a second session. So if autovacuum runs in parallel, the\nconsequence would be a small slow down while waiting on the lock to be\ntaken. And per the way the test is ordered, it seems to me that it\nmakes the most sense to disable autovacuum as it would just get in the\nway. In this case I think that it is actually better to show the\nmessages as that makes the tests more verbose and we make sure to test\ntheir format, even if we could just rely on the fact that VACUUM\nshould just be blocking or non-blocking.\n--\nMichael",
"msg_date": "Fri, 8 Nov 2019 09:01:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: SKIP_LOCKED test causes random buildfarm failures"
},
{
"msg_contents": "I wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> Good question. That's a historical choice, still I have seen cases\n>> where those warnings are helpful while not making the logs too\n>> verbose to see some congestion in the jobs.\n\n> I kind of feel that NOTICE is more semantically appropriate, but\n> perhaps there's an argument for keeping it at WARNING.\n\nWell, I'm not hearing any groundswell of support for changing the\nmessage level, so let's leave that as-is and just see about\nstabilizing the tests.\n\n>> The main purpose of the tests in regress/ is to check after the\n>> grammar, so using client_min_messages sounds like a plan. We have\n>> a second set of tests in isolation/ where I would actually like to\n>> disable autovacuum by default on a subset of tables. Thoughts about\n>> the attached?\n\nAfter looking more closely at the isolation test, I agree that adding\nthe \"ALTER TABLE ... SET (autovacuum_enabled = false)\" bits to it is\na good idea. The LOCK operations should make that irrelevant for\npart1, but there's at least a theoretical hazard for part2.\n(Actually, is \"autovacuum_enabled = false\" really sufficient to\nkeep autovacuum away? It'd probably lock the table for long enough\nto examine its reloptions, so it seems like all we're doing here is\nreducing the window for trouble a little bit. Still, maybe that's\nworthwhile.)\n\nAs for the SKIP_LOCKED tests in vacuum.sql, what I now propose is that\nwe just remove them. I do not see that they're offering any coverage\nthat's not completely redundant with this isolation test. We don't\nneed to spend cycles every day on that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Nov 2019 13:37:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SKIP_LOCKED test causes random buildfarm failures"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-14 13:37:44 -0500, Tom Lane wrote:\n> I wrote:\n> > Michael Paquier <michael@paquier.xyz> writes:\n> >> Good question. That's a historical choice, still I have seen cases\n> >> where those warnings are helpful while not making the logs too\n> >> verbose to see some congestion in the jobs.\n> \n> > I kind of feel that NOTICE is more semantically appropriate, but\n> > perhaps there's an argument for keeping it at WARNING.\n> \n> Well, I'm not hearing any groundswell of support for changing the\n> message level, so let's leave that as-is and just see about\n> stabilizing the tests.\n\nOk.\n\n\n> >> The main purpose of the tests in regress/ is to check after the\n> >> grammar, so using client_min_messages sounds like a plan. We have\n> >> a second set of tests in isolation/ where I would actually like to\n> >> disable autovacuum by default on a subset of tables. Thoughts about\n> >> the attached?\n> \n> After looking more closely at the isolation test, I agree that adding\n> the \"ALTER TABLE ... SET (autovacuum_enabled = false)\" bits to it is\n> a good idea. The LOCK operations should make that irrelevant for\n> part1, but there's at least a theoretical hazard for part2.\n> (Actually, is \"autovacuum_enabled = false\" really sufficient to\n> keep autovacuum away? It'd probably lock the table for long enough\n> to examine its reloptions, so it seems like all we're doing here is\n> reducing the window for trouble a little bit. Still, maybe that's\n> worthwhile.)\n\n+1\n\n\n> As for the SKIP_LOCKED tests in vacuum.sql, what I now propose is that\n> we just remove them. I do not see that they're offering any coverage\n> that's not completely redundant with this isolation test. We don't\n> need to spend cycles every day on that.\n\n-0 on that, I'd rather just put a autovacuum_enabled = false for\nthem. 
They're quick enough, and it's nice to have decent coverage of\nvarious options within the plain regression tests when possible.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 14 Nov 2019 12:16:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SKIP_LOCKED test causes random buildfarm failures"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-11-14 13:37:44 -0500, Tom Lane wrote:\n>> As for the SKIP_LOCKED tests in vacuum.sql, what I now propose is that\n>> we just remove them. I do not see that they're offering any coverage\n>> that's not completely redundant with this isolation test. We don't\n>> need to spend cycles every day on that.\n\n> -0 on that, I'd rather just put a autovacuum_enabled = false for\n> them. They're quick enough, and it's nice to have decent coverage of\n> various options within the plain regression tests when possible.\n\nIf we're going to keep them in vacuum.sql, we should use the\nclient_min_messages fix there, as that's a full solution not just\nreducing the window. But I don't agree that these tests are worth\nthe cycles, given the coverage elsewhere. The probability of breaking\nthis option is just not high enough to justify core-regression-test\ncoverage.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Nov 2019 15:20:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SKIP_LOCKED test causes random buildfarm failures"
},
{
"msg_contents": "On Thu, Nov 14, 2019 at 03:20:09PM -0500, Tom Lane wrote:\n> If we're going to keep them in vacuum.sql, we should use the\n> client_min_messages fix there, as that's a full solution not just\n> reducing the window. But I don't agree that these tests are worth\n> the cycles, given the coverage elsewhere. The probability of breaking\n> this option is just not high enough to justify core-regression-test\n> coverage.\n\nI would rather keep the solution with client_min_messages, and the\ntests in vacuum.sql to keep those checks for the grammar parsing. So\nthis basically brings us back to use the patch I proposed here:\nhttps://www.postgresql.org/message-id/20191107013942.GA1768@paquier.xyz\n\nAny objections?\n--\nMichael",
"msg_date": "Fri, 15 Nov 2019 10:19:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: SKIP_LOCKED test causes random buildfarm failures"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I would rather keep the solution with client_min_messages, and the\n> tests in vacuum.sql to keep those checks for the grammar parsing. So\n> this basically brings us back to use the patch I proposed here:\n> https://www.postgresql.org/message-id/20191107013942.GA1768@paquier.xyz\n\nOK.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Nov 2019 11:22:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SKIP_LOCKED test causes random buildfarm failures"
},
{
"msg_contents": "On Fri, Nov 15, 2019 at 11:22:20AM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> I would rather keep the solution with client_min_messages, and the\n>> tests in vacuum.sql to keep those checks for the grammar parsing. So\n>> this basically brings us back to use the patch I proposed here:\n>> https://www.postgresql.org/message-id/20191107013942.GA1768@paquier.xyz\n> \n> OK.\n\nThanks, applied and back-patched.\n--\nMichael",
"msg_date": "Sat, 16 Nov 2019 15:24:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: SKIP_LOCKED test causes random buildfarm failures"
}
] |
[
{
"msg_contents": "Hackers,\n\nWhile working on cleaning up the SPI interface, I found that one of the \nSPI error codes, SPI_ERROR_COPY, is never encountered in any test case \nwhen running `make check-world`. This case is certainly reachable by a \nuser, as is shown in the attached patch. Is this tested from some other \ninfrastructure?\n\nTo verify that SPI_ERROR_COPY is not tested, before and after applying \nthe patch, try this modification, and notice before the patch that the \nfatal error is never encountered:\n\ndiff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c\nindex 2c0ae395ba..ced38abbf6 100644\n--- a/src/backend/executor/spi.c\n+++ b/src/backend/executor/spi.c\n@@ -2245,6 +2245,7 @@ _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo \nparamLI,\n\n\tif (cstmt->filename == NULL)\n\t{\n+\t\telog(FATAL, \"SPI_ERROR_COPY tested\");\n\t\tmy_res = SPI_ERROR_COPY;\n\t\tgoto fail;\n\t}\n\nI am submitting this patch separately from other patches related to SPI, \nsince (a) it does not touch any of the SPI code, (b) it fixes missing \ntest coverage to do with COPY and PL/pgSQL, only indirectly to do with \nSPI, and (c) it should be possible to commit this patch even if other \nSPI patches are rejected.\n\n-- \nMark Dilger",
"msg_date": "Wed, 6 Nov 2019 16:16:14 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Missing test of SPI copy functionality"
},
{
"msg_contents": "On Wed, Nov 06, 2019 at 04:16:14PM -0800, Mark Dilger wrote:\n> While working on cleaning up the SPI interface, I found that one of the SPI\n> error codes, SPI_ERROR_COPY, is never encountered in any test case when\n> running `make check-world`. This case is certainly reachable by a user, as\n> is shown in the attached patch. Is this tested from some other\n> infrastructure?\n\nHard to say, but I think that it would be good to test that part\nindependently anyway. The transaction part close by is actually\ngetting stressed with plpgsql_transaction, so the split done in your\npatch looks fine. I'll look at it again in a couple of days to\ndouble-check for missing spots, and commit it if there are no\nobjections.\n\n> To verify that SPI_ERROR_COPY is not tested, before and after applying the\n> patch, try this modification, and notice before the patch that the fatal\n> error is never encountered:\n\nIf you use \"Assert(false)\", you would get in bonus the call stack. I\nuse this trick from time to time.\n\n> I am submitting this patch separately from other patches related to SPI,\n> since (a) it does not touch any of the SPI code, (b) it fixes missing test\n> coverage to do with COPY and PL/pgSQL, only indirectly to do with SPI, and\n> (c) it should be possible to commit this patch even if other SPI patches are\n> rejected.\n\nThanks for doing so. I can see that it has been added to the CF app:\nhttps://commitfest.postgresql.org/26/2350/\n--\nMichael",
"msg_date": "Thu, 7 Nov 2019 11:27:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Missing test of SPI copy functionality"
},
{
"msg_contents": "\n\nOn 11/6/19 6:27 PM, Michael Paquier wrote:\n> On Wed, Nov 06, 2019 at 04:16:14PM -0800, Mark Dilger wrote:\n>> While working on cleaning up the SPI interface, I found that one of the SPI\n>> error codes, SPI_ERROR_COPY, is never encountered in any test case when\n>> running `make check-world`. This case is certainly reachable by a user, as\n>> is shown in the attached patch. Is this tested from some other\n>> infrastructure?\n> \n> Hard to say, but I think that it would be good to test that part\n> independently anyway. The transaction part close by is actually\n> getting stressed with plpgsql_transaction, so the split done in your\n> patch looks fine. I'll look at it again in a couple of days to\n> double-check for missing spots, and commit it if there are no\n> objections.\n\nThanks for reviewing!\n\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Thu, 7 Nov 2019 06:25:54 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing test of SPI copy functionality"
},
{
"msg_contents": "On Thu, Nov 07, 2019 at 06:25:54AM -0800, Mark Dilger wrote:\n> Thanks for reviewing!\n\nAfter a closer lookup, I have noticed that you missed a second code\npath which is able to trigger the COPY failures as you use EXECUTE\nwith COPY in PL/pgSQL. So I have added some tests for that, and\ncommitted the patch. Thanks.\n--\nMichael",
"msg_date": "Sat, 9 Nov 2019 14:59:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Missing test of SPI copy functionality"
}
] |
[
{
"msg_contents": "Hi Amit,\n\nI am reading about this feature and reviewing it.\nTo start with, I reviewed the patch:\n0005-Doc-changes-describing-details-about-logical-decodin.patch.\n\n>prevent VACUUM from removing required rows from the system catalogs,\n>hot_standby_feedback should be set on the standby. In spite of that,\n>if any required rows get removed on standby, the slot gets dropped.\nIIUC, you mean `if any required rows get removed on *the master* the slot\ngets\ndropped`, right?\n\nThank you,\n-- \nRahila Syed\nPerformance Engineer\n2ndQuadrant\nhttp://www.2ndQuadrant.com <http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 7 Nov 2019 14:02:03 +0530",
"msg_from": "Rahila Syed <rahila.syed@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Minimal logical decoding on standbys"
}
] |
[
{
"msg_contents": "The docs currently say\n\n The language named <literal>plpythonu</literal> implements\n PL/Python based on the default Python language variant, which is\n currently Python 2. (This default is independent of what any\n local Python installations might consider to be\n their <quote>default</quote>, for example,\n what <filename>/usr/bin/python</filename> might be.) The\n default will probably be changed to Python 3 in a distant future\n release of PostgreSQL, depending on the progress of the\n migration to Python 3 in the Python community.\n\nAs python2 is EOL very soon, I'd say that point is now, i.e. we should\nmake plpythonu.control point at plpython3u in PG13+. And probably drop\npython2 support altogether.\n\nFor PG12, I have the problem that I don't want to keep supporting\npython2 (Debian is already working hard on removing all python2\nreferences), and have therefore already disabled building the\nplpython2 packages for Debian, shipping only plpython3.\n\nPostGIS developer Raúl Marín has rightfully noticed that this leaves\nus without the \"plpythonu\" extension, forcing everyone to move to\n\"plpython3u\" even when their code works with both.\n\nHow do other packagers handle that? Are you still supporting python2?\nWould it be ok to make plpythonu.control point at python3 in PG12 in\nDebian, even the upstream default is still python2?\n\nChristoph\n\n\n",
"msg_date": "Thu, 7 Nov 2019 18:04:32 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "plpythonu -> python3"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> The docs currently say\n> The language named <literal>plpythonu</literal> implements\n> PL/Python based on the default Python language variant, which is\n> currently Python 2. (This default is independent of what any\n> local Python installations might consider to be\n> their <quote>default</quote>, for example,\n> what <filename>/usr/bin/python</filename> might be.) The\n> default will probably be changed to Python 3 in a distant future\n> release of PostgreSQL, depending on the progress of the\n> migration to Python 3 in the Python community.\n\n> As python2 is EOL very soon, I'd say that point is now, i.e. we should\n> make plpythonu.control point at plpython3u in PG13+.\n\nWe're starting to work on that; it's not a trivial change. Among other\nthings, pg_pltemplate has got pointers at plpython2 as well. See [1]\nfor one preliminary step, and there are other discussions in the archives\nabout things we could do to make this smoother.\n\n> And probably drop python2 support altogether.\n\nI think it'll be quite some time before that happens. People who\nare still using ancient versions of Postgres are not likely to be\nimpressed by arguments about how python2 is out of support.\n\n> For PG12, I have the problem that I don't want to keep supporting\n> python2 (Debian is already working hard on removing all python2\n> references), and have therefore already disabled building the\n> plpython2 packages for Debian, shipping only plpython3.\n\nYou're fully within your rights to stop building plpython2 in what you\nship. That's not an argument for removing the upstream support.\n\n> Would it be ok to make plpythonu.control point at python3 in PG12 in\n> Debian, even the upstream default is still python2?\n\nI do not think you should do that. 
This transition is going to be\npainful enough without distributions making their own ad-hoc changes\nthat are different from what other people are doing.\n\nRight at the moment, given that Debian and others have already stopped\nshipping \"/usr/bin/python\", I'd say that the equivalent thing is just to\nstop building plpython2, and force users to deal with the change manually.\nIf you didn't decide to symlink /usr/bin/python to python3 instead of\npython2, what's the justification for doing the moral equivalent of that\nwith plpython?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/5889.1566415762@sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 07 Nov 2019 12:32:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plpythonu -> python3"
},
{
"msg_contents": "Re: Tom Lane 2019-11-07 <14186.1573147925@sss.pgh.pa.us>\n> > And probably drop python2 support altogether.\n> \n> I think it'll be quite some time before that happens. People who\n> are still using ancient versions of Postgres are not likely to be\n> impressed by arguments about how python2 is out of support.\n\nFwiw, I meant to suggest dropping python2 support in PG13+. (At the\nmoment there are some \"interesting\" scripts in src/pl/plpython that\nconvert the plpython2 things on the fly to be python3 compatible,\nthese could go away, simplifying some parts of the build system.)\n\nChristoph\n\n\n",
"msg_date": "Thu, 7 Nov 2019 18:38:53 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: plpythonu -> python3"
},
{
"msg_contents": "On Thu, Nov 7, 2019 at 6:04 PM Christoph Berg <myon@debian.org> wrote:\n> How do other packagers handle that? Are you still supporting python2?\n> Would it be ok to make plpythonu.control point at python3 in PG12 in\n> Debian, even the upstream default is still python2?\nSpeaking for Fedora and RHEL, I'd say the best way to approach this from the\npackager standpoint would be to simply stop building plpython2 for releases\nwithout python2 support.\n\n> Would it be ok to make plpythonu.control point at python3 in PG12 in\n> Debian, even the upstream default is still python2?\nIMHO, this should be done upstream. The reason for this would be to have\na uniform approach to this across distributions, and it is explained\nin more detail\nin some of the older threads of this list.\n\n\n-- \nPatrik Novotný\nAssociate Software Engineer\nRed Hat\npanovotn@redhat.com\n\n\n\n",
"msg_date": "Thu, 7 Nov 2019 18:56:34 +0100",
"msg_from": "Patrik Novotny <panovotn@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: plpythonu -> python3"
},
{
"msg_contents": "I wrote:\n> Christoph Berg <myon@debian.org> writes:\n>> As python2 is EOL very soon, I'd say that point is now, i.e. we should\n>> make plpythonu.control point at plpython3u in PG13+.\n\n> We're starting to work on that; it's not a trivial change. Among other\n> things, pg_pltemplate has got pointers at plpython2 as well. See [1]\n> for one preliminary step, and there are other discussions in the archives\n> about things we could do to make this smoother.\n\nSome of that prior discussion is here:\n\nhttps://www.postgresql.org/message-id/flat/5351890.TdMePpdHBD%40nb.usersys.redhat.com\n\nOne thing that'd be useful to do, perhaps, is polish up the conversion\nscript I posted in that thread, and make it available to users before\nplpython2 disappears. (As written, it needs both plpython versions\nto be available ...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 07 Nov 2019 12:57:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plpythonu -> python3"
}
] |
[
{
"msg_contents": "Hackers,\n\nAs discussed with Tom in [1] and again with Pavel and Alvaro in [2], \nhere is a partial WIP refactoring of the SPI interface. The goal is to \nremove as many of the SPI_ERROR_xxx codes as possible from the \ninterface, replacing them with elog(ERROR), without removing the ability \nof callers to check meaningful return codes and make their own decisions \nabout what to do next. The crucial point here is that many of the error \ncodes in SPI are \"can't happen\" or \"you made a programmatic mistake\" \ntype errors that don't require run time remediation, but rather require \nfixing the C code and recompiling. Those seem better handled as an \nelog(ERROR) to avoid the need for tests against such error codes \nsprinkled throughout the system.\n\nOne upshot of the refactoring is that some of the SPI functions that \npreviously returned an error code can be changed to return void. Tom \nsuggested a nice way to use macros to provide backward compatibility \nwith older third-party software, and I used his suggestion.\n\nI need guidance with SPI_ERROR_ARGUMENT because it is used by functions \nthat don't return an integer error code, but rather return a SPIPlanPtr, \nsuch as SPI_prepare. Those functions return NULL and set a global \nvariable named SPI_result to the error code. I'd be happy to just \nconvert this to throwing an error, too, but it is a greater API break \nthan anything implemented in the attached patches so far. How do others \nfeel about it?\n\nIf more places like this can be converted to use elog(ERROR), it may be \npossible to convert more functions to return void.\n\n\n[1] https://www.postgresql.org/message-id/13404.1558558354%40sss.pgh.pa.us\n\n[2] \nhttps://www.postgresql.org/message-id/20191106151112.GA12251%40alvherre.pgsql\n-- \nMark Dilger",
"msg_date": "Thu, 7 Nov 2019 15:38:51 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "SPI refactoring"
},
{
"msg_contents": "pá 8. 11. 2019 v 0:39 odesílatel Mark Dilger <hornschnorter@gmail.com>\nnapsal:\n\n> Hackers,\n>\n> As discussed with Tom in [1] and again with Pavel and Alvaro in [2],\n> here is a partial WIP refactoring of the SPI interface. The goal is to\n> remove as many of the SPI_ERROR_xxx codes as possible from the\n> interface, replacing them with elog(ERROR), without removing the ability\n> of callers to check meaningful return codes and make their own decisions\n> about what to do next. The crucial point here is that many of the error\n> codes in SPI are \"can't happen\" or \"you made a programmatic mistake\"\n> type errors that don't require run time remediation, but rather require\n> fixing the C code and recompiling. Those seem better handled as an\n> elog(ERROR) to avoid the need for tests against such error codes\n> sprinkled throughout the system.\n>\n> One upshot of the refactoring is that some of the SPI functions that\n> previously returned an error code can be changed to return void. Tom\n> suggested a nice way to use macros to provide backward compatibility\n> with older third-party software, and I used his suggestion.\n>\n> I need guidance with SPI_ERROR_ARGUMENT because it is used by functions\n> that don't return an integer error code, but rather return a SPIPlanPtr,\n> such as SPI_prepare. Those functions return NULL and set a global\n> variable named SPI_result to the error code. I'd be happy to just\n> convert this to throwing an error, too, but it is a greater API break\n> than anything implemented in the attached patches so far. 
How do others\n> feel about it?\n>\n> If more places like this can be converted to use elog(ERROR), it may be\n> possible to convert more functions to return void.\n>\n>\n> [1] https://www.postgresql.org/message-id/13404.1558558354%40sss.pgh.pa.us\n>\n> [2]\n>\n> https://www.postgresql.org/message-id/20191106151112.GA12251%40alvherre.pgsql\n\n\nGenerally lot of API used by extensions are changing - SPI is not\ndifferent, and I don't see too much benefit of compatibility API. When you\nneed to define BACKWARDS_COMPATIBLE_SPI_CALLS, then you can clean code.\n\nIt looks for me needlessly. If we change internal API, then should be clean\nsignal so code should be fixed, so I don't like\n\n-#define SPI_ERROR_PARAM (-7)\n+#define SPI_ERROR_PARAM (-7) /* not used anymore */\n\nIt should be removed.\n\nI am maybe too aggressive - but because any extension should be compiled\nfor any postgres release, I don't think so we should to hold some internal\nobsolete API. BACKWARDS_COMPATIBLE.. is not used else where, and I would\nnot to introduce this concept here. It can helps in short perspective, but\nit can be trap in long perspective.\n\nRegards\n\nPavel\n\n\n\n\n\n> --\n> Mark Dilger\n>\n",
"msg_date": "Fri, 8 Nov 2019 07:50:48 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SPI refactoring"
},
{
"msg_contents": "On 2019-Nov-07, Mark Dilger wrote:\n\n> From 113d42772be2c2abd71fd142cde9240522f143d7 Mon Sep 17 00:00:00 2001\n> From: Mark Dilger <hornschnorter@gmail.com>\n> Date: Thu, 7 Nov 2019 07:51:06 -0800\n> Subject: [PATCH v1 1/5] Deprecating unused SPI error codes.\n> \n> The SPI_ERROR_NOOUTFUNC and SPI_ERROR_CONNECT codes, defined in spi.h,\n> were no longer used anywhere in the sources except nominally in spi.c\n> where SPI_result_code_string(int code) was translating them to a cstring\n> for use in error messages. But since the system never returns these\n> error codes, it seems unlikely anybody still needs to be able to convert\n> them to a string.\n> \n> Removing these from spi.c, from the docs, and from a code comment in\n> contrib. Leaving the definition in spi.h for backwards compatibility of\n> third-party applications.\n\nBecause of PG_MODULE_MAGIC forcing a recompile of modules for each major\nserver version, there's no ABI-stability requirement for these values.\nIf we were to leave the definitions in spi.h and remove the code that\nhandles them, then code could compile but at run-time it would produce\nthe \"unrecognized\" string. Therefore I think it is better to remove the\ndefinitions from spi.h, so that we can be sure that the code will never\nbe needed.\n\nI didn't look at the other patches, but I suppose the same argument\napplies to retaining their defines too.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 8 Nov 2019 09:50:13 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SPI refactoring"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nHere's a draft patch that teaches PostgreSQL how to ask for collation\nversions on Windows. It receives a pair of DWORDs, which, it displays\nas hex. The values probably have an internal structure that is\ndisplayed in a user-friendly way by software like Active Directory and\nSQL Server (I'm pretty sure they both track collation versions and\nreindex), but I don't know. It is based on the documentation at:\n\nhttps://docs.microsoft.com/en-us/windows/win32/win7appqual/nls-sorting-changes\n\nMy understanding of our OS and tool chain version strategy on that\nplatform is limited, but it looks like we only allow ourselves to use\nVista (and later) APIs if the compiler is Visual Studio 2015 (aka\n14.0) or later. So I tested that this builds cleanly on AppVeyor\nusing that compiler (see attached CI patch). The regression tests\nfailed with Windows error code 87 before I added in the check to skip\n\"C\" and \"POSIX\", so I know the new code is reached. I don't have an\nenvironment to test it beyond that.\n\nThe reason for returning an empty string for \"C\" and \"POSIX\" is the\nfollowing comment for get_collation_actual_version():\n\n * A particular provider must always either return a non-NULL string or return\n * NULL (if it doesn't support versions). It must not return NULL for some\n * collcollate and not NULL for others.\n\nI'm not sure why, or if that really makes sense.\n\nDo any Windows hackers want to help get it into shape? 
Some things to\ndo: test it, verify that the _WIN32_WINNT >= 0x0600 stuff makes sense\n(why do we target such ancient Windows releases anyway?), see if there\nis way we could use GetNLSVersion() (no \"Ex\") to make this work on\nolder Windows system, check if it makes sense to assume that\ncollcollate is encoded with CP_ACP (\"the system default Windows ANSI\ncode page\", used elsewhere in the PG source tree for a similar\npurpose, but this seems likely to go wrong for locale names that have\nnon-ASCII characters, and indeed we see complaints on the lists\ninvolving the word \"Bokmål\"), and recommend a better way to display\nthe collation version as text. I'll add this to the next commitfest\nto attract some eyeballs (but note that when cfbot compiles it, it\nwill be using an older tool chain and Win32 target, so the new code\nwill be ifdef'd out and regression test success means nothing).\n\nTo test that it works, you'd need to look at the contents of\npg_collation to confirm that you see the new version strings, create\nan index on a column that explicitly uses a collation that has a\nversion, update the pg_collation table by hand to have a to a\ndifferent value, and then open a new session and to access the index\nto check that you get a warning about the version changing. The\nwarning can be cleared by using ALTER COLLATION ... REFRESH VERSION.",
"msg_date": "Fri, 8 Nov 2019 12:44:13 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Collation versions on Windows (help wanted, apply within)"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 12:44 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n>\n> Do any Windows hackers want to help get it into shape? Some things to\n> do: test it, verify that the _WIN32_WINNT >= 0x0600 stuff makes sense\n> (why do we target such ancient Windows releases anyway?)\n\n\nYou have to keep in mind that _WIN32_WINNT also applies to MinGW, so any\nbuild with those tools will use a value of 0x0501 and this code will be\nifdef'd out.\n\nAs from where this value comes, my take is that it has not been revised in\na long time [1]. Windows 7 , Server 2008 and 2008 R2 support will end next\nyear [2] [3], maybe you can make a case for updating this value.\n\n\n> see if there is way we could use GetNLSVersion() (no \"Ex\") to make this\n> work on\n> older Windows system\n\n\nOlder systems is just Windows Server 2003, not sure if it is worth any\neffort.\n\n\n> check if it makes sense to assume that\n> collcollate is encoded with CP_ACP (\"the system default Windows ANSI\n> code page\", used elsewhere in the PG source tree for a similar\n> purpose, but this seems likely to go wrong for locale names that have\n> non-ASCII characters, and indeed we see complaints on the lists\n> involving the word \"Bokmål\"), and recommend a better way to display\n> the collation version as text.\n\n\nThe GetNLSVersionEx() function uses a \"Language tag\" value, check Language\nCode Identifier (LCID) [4], and these tags are plain ascii.\n\n\n> To test that it works, you'd need to look at the contents of\n> pg_collation to confirm that you see the new version strings, create\n> an index on a column that explicitly uses a collation that has a\n> version, update the pg_collation table by hand to have a to a\n> different value, and then open a new session and to access the index\n> to check that you get a warning about the version changing. The\n> warning can be cleared by using ALTER COLLATION ... 
REFRESH VERSION.\n>\n\nThe code works as expected with this collation:\n\npostgres=# CREATE COLLATION en_US (LC_COLLATE = 'en-US', LC_CTYPE =\n'en-US');\nCREATE COLLATION\npostgres=# select * from pg_collation;\n oid | collname | collnamespace | collowner | collprovider |\ncollisdeterministic | collencoding | collcollate | collctype | collversion\n-------+-----------+---------------+-----------+--------------+---------------------+--------------+-------------+-----------+-------------\n 100 | default | 11 | 10 | d | t\n | -1 | | |\n 950 | C | 11 | 10 | c | t\n | -1 | C | C |\n 951 | POSIX | 11 | 10 | c | t\n | -1 | POSIX | POSIX |\n 12326 | ucs_basic | 11 | 10 | c | t\n | 6 | C | C |\n 16387 | en_us | 2200 | 10 | c | t\n | 24 | en-US | en-US | 6020f,6020f\n(5 rows)\n\nThe error code 87 is an ERROR_INVALID_PARAMETER that is raised when the\ncollate input does not match a valid tag, I would suggest not returning it\ndirectly.\n\nRegards,\n\nJuan José Santamaría Flecha\n\n[1]\nhttps://www.postgresql.org/message-id/flat/20090907112633.C851.52131E4D%40oss.ntt.co.jp\n[2]\nhttps://support.microsoft.com/en-us/help/4456235/end-of-support-for-windows-server-2008-and-windows-server-2008-r2\n[3]\nhttps://support.microsoft.com/en-us/help/4057281/windows-7-support-will-end-on-january-14-2020\n[4]\nhttps://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-lcid/70feba9f-294e-491e-b6eb-56532684c37f\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Fri, 8 Nov 2019 22:20:42 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Collation versions on Windows (help wanted, apply within)"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 12:44 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n>\n> recommend a better way to display the collation version as text.\n>\n>\nThere is a major and a minor version. The attached patch applies on top the\nprevious.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Sat, 9 Nov 2019 11:03:59 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Collation versions on Windows (help wanted, apply within)"
},
{
"msg_contents": "On Sat, Nov 9, 2019 at 10:20 AM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>> Do any Windows hackers want to help get it into shape? Some things to\n>> do: test it, verify that the _WIN32_WINNT >= 0x0600 stuff makes sense\n>> (why do we target such ancient Windows releases anyway?)\n>\n> You have to keep in mind that _WIN32_WINNT also applies to MinGW, so any build with those tools will use a value of 0x0501 and this code will be ifdef'd out.\n>\n> As from where this value comes, my take is that it has not been revised in a long time [1]. Windows 7 , Server 2008 and 2008 R2 support will end next year [2] [3], maybe you can make a case for updating this value.\n\nAh, I see, thanks. I think what I have is OK for now then. If\nsomeone else who is closer to the matter wants to argue that we should\nalways target Vista+ (for example on MinGW) in order to access this\nfunctionality, I'll let them do that separately.\n\n>> see if there is way we could use GetNLSVersion() (no \"Ex\") to make this work on\n>> older Windows system\n>\n> Older systems is just Windows Server 2003, not sure if it is worth any effort.\n\nCool. Nothing to do here then.\n\n> 16387 | en_us | 2200 | 10 | c | t | 24 | en-US | en-US | 6020f,6020f\n\nThanks for testing!\n\nOn Sat, Nov 9, 2019 at 11:04 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n> There is a major and a minor version. The attached patch applies on top the previous.\n\nPerfect. 
I've merged this into the patch.\n\nIt's interesting that minor version changes mean no order changed but\nnew code points were added; that must be useful if your system\nprevents you from using code points before you add them, I guess (?).\nI don't understand the difference between the NLS and \"defined\"\nversions, but at this stage I don't think we can try to be too fancy\nhere, think we're just going to have to assume we need both of them\nand treat this the same way across all providers: if it moves, reindex\nit.",
"msg_date": "Mon, 11 Nov 2019 15:32:44 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Collation versions on Windows (help wanted, apply within)"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 12:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> The reason for returning an empty string for \"C\" and \"POSIX\" is the\n> following comment for get_collation_actual_version():\n>\n> * A particular provider must always either return a non-NULL string or return\n> * NULL (if it doesn't support versions). It must not return NULL for some\n> * collcollate and not NULL for others.\n>\n> I'm not sure why, or if that really makes sense.\n\nPeter E, do you have any thoughts on this question?\n\n\n",
"msg_date": "Wed, 27 Nov 2019 09:39:25 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Collation versions on Windows (help wanted, apply within)"
},
{
"msg_contents": "On 2019-11-26 21:39, Thomas Munro wrote:\n> On Fri, Nov 8, 2019 at 12:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> The reason for returning an empty string for \"C\" and \"POSIX\" is the\n>> following comment for get_collation_actual_version():\n>>\n>> * A particular provider must always either return a non-NULL string or return\n>> * NULL (if it doesn't support versions). It must not return NULL for some\n>> * collcollate and not NULL for others.\n>>\n>> I'm not sure why, or if that really makes sense.\n> \n> Peter E, do you have any thoughts on this question?\n\nDoesn't make sense to me either.\n\nWe need to handle the various combinations of null and non-null stored \nand actual versions, which we do, but that only applies within a given \ncollcollate. I don't think we need the property that that comment calls \nfor.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 26 Nov 2019 22:38:06 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Collation versions on Windows (help wanted, apply within)"
},
{
"msg_contents": "On Wed, Nov 27, 2019 at 10:38 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2019-11-26 21:39, Thomas Munro wrote:\n> > On Fri, Nov 8, 2019 at 12:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >> The reason for returning an empty string for \"C\" and \"POSIX\" is the\n> >> following comment for get_collation_actual_version():\n> >>\n> >> * A particular provider must always either return a non-NULL string or return\n> >> * NULL (if it doesn't support versions). It must not return NULL for some\n> >> * collcollate and not NULL for others.\n> >>\n> >> I'm not sure why, or if that really makes sense.\n> >\n> > Peter E, do you have any thoughts on this question?\n>\n> Doesn't make sense to me either.\n>\n> We need to handle the various combinations of null and non-null stored\n> and actual versions, which we do, but that only applies within a given\n> collcollate. I don't think we need the property that that comment calls\n> for.\n\nWhile wondering about that, I noticed the \"C.UTF-8\" encoding (here on\na glibc system):\n\npostgres=# \\pset null <NULL>\nNull display is \"<NULL>\".\npostgres=# select collname, collprovider, collencoding, collcollate,\ncollctype, collversion from pg_collation ;\n collname | collprovider | collencoding | collcollate | collctype |\ncollversion\n------------+--------------+--------------+-------------+------------+-------------\n default | d | -1 | | | <NULL>\n C | c | -1 | C | C | <NULL>\n POSIX | c | -1 | POSIX | POSIX | <NULL>\n ucs_basic | c | 6 | C | C | <NULL>\n C.UTF-8 | c | 6 | C.UTF-8 | C.UTF-8 | 2.28\n en_NZ.utf8 | c | 6 | en_NZ.utf8 | en_NZ.utf8 | 2.28\n en_NZ | c | 6 | en_NZ.utf8 | en_NZ.utf8 | 2.28\n(7 rows)\n\nI wonder if we should do something to give that no collversion, since\nwe know that it's really just another way of spelling \"binary sort\nplease\", or if we'd be better off not trying to interpret the names\nlocale -a spits out.\n\nJuan Jose,\n\nWould you mind posting the 
full output of the above query (with <NULL>\nshowing) on a Windows system after running initdb with the -v2 patch,\nso we can see how the collations look?\n\n\n",
"msg_date": "Mon, 16 Dec 2019 13:26:10 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Collation versions on Windows (help wanted, apply within)"
},
{
"msg_contents": "On 2019-12-16 01:26, Thomas Munro wrote:\n> While wondering about that, I noticed the \"C.UTF-8\" encoding (here on\n> a glibc system):\n> \n> postgres=# \\pset null <NULL>\n> Null display is \"<NULL>\".\n> postgres=# select collname, collprovider, collencoding, collcollate,\n> collctype, collversion from pg_collation ;\n> collname | collprovider | collencoding | collcollate | collctype |\n> collversion\n> ------------+--------------+--------------+-------------+------------+-------------\n> default | d | -1 | | | <NULL>\n> C | c | -1 | C | C | <NULL>\n> POSIX | c | -1 | POSIX | POSIX | <NULL>\n> ucs_basic | c | 6 | C | C | <NULL>\n> C.UTF-8 | c | 6 | C.UTF-8 | C.UTF-8 | 2.28\n> en_NZ.utf8 | c | 6 | en_NZ.utf8 | en_NZ.utf8 | 2.28\n> en_NZ | c | 6 | en_NZ.utf8 | en_NZ.utf8 | 2.28\n> (7 rows)\n> \n> I wonder if we should do something to give that no collversion, since\n> we know that it's really just another way of spelling \"binary sort\n> please\", or if we'd be better off not trying to interpret the names\n> locale -a spits out.\n\nI think it's worth handling that separately. If we want to give it an \nair of generality and not just hard-code that one locale name, we could \nstrip off the encoding and compare the rest with the well-known encoding \nnames.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 16 Dec 2019 13:20:59 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Collation versions on Windows (help wanted, apply within)"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 1:26 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Wed, Nov 27, 2019 at 10:38 AM Peter Eisentraut\n>\n> Would you mind posting the full output of the above query (with <NULL>\n> showing) on a Windows system after running initdb with the -v2 patch,\n> so we can see how the collations look?\n>\n>\nSure, you can find attached the full output with ICU.\n\nThis is a resume to illustrate an issue with locale = 'C':\n\npostgres=# CREATE COLLATION c_test (locale = 'C');\nCREATE COLLATION\npostgres=# select collname, collprovider, collencoding, collcollate,\ncollctype, collversion from pg_collation ;\n collname | collprovider | collencoding | collcollate |\n collctype | collversion\n------------------------+--------------+--------------+------------------+------------------+-------------\n default | d | -1 | |\n | <NULL>\n C | c | -1 | C |\nC | <NULL>\n POSIX | c | -1 | POSIX |\nPOSIX | <NULL>\n ucs_basic | c | 6 | C |\nC | <NULL>\n und-x-icu | i | -1 | und |\nund | 153.97\n[... resumed ...]\n c_test | c | 6 | C |\nC |\n(757 rows)\n\n Shouldn't it be NULL?\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 16 Dec 2019 13:40:40 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Collation versions on Windows (help wanted, apply within)"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 1:40 AM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n> On Mon, Dec 16, 2019 at 1:26 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> On Wed, Nov 27, 2019 at 10:38 AM Peter Eisentraut\n>>\n>> Would you mind posting the full output of the above query (with <NULL>\n>> showing) on a Windows system after running initdb with the -v2 patch,\n>> so we can see how the collations look?\n>\n> Sure, you can find attached the full output with ICU.\n\nOh, I didn't realise until now that on Windows, initdb doesn't do\nanything like \"locale -a\" to populate the system locales (which should\nperhaps be done with EnumSystemLocalesEx(), as shown in Noah's nearby\nproblem report). So you have to define them manually, and otherwise\nmost people probably just use the default.\n\n> This is a resume to illustrate an issue with locale = 'C':\n>\n> postgres=# CREATE COLLATION c_test (locale = 'C');\n> CREATE COLLATION\n> postgres=# select collname, collprovider, collencoding, collcollate, collctype, collversion from pg_collation ;\n> collname | collprovider | collencoding | collcollate | collctype | collversion\n> ------------------------+--------------+--------------+------------------+------------------+-------------\n> default | d | -1 | | | <NULL>\n> C | c | -1 | C | C | <NULL>\n> POSIX | c | -1 | POSIX | POSIX | <NULL>\n> ucs_basic | c | 6 | C | C | <NULL>\n> und-x-icu | i | -1 | und | und | 153.97\n> [... resumed ...]\n> c_test | c | 6 | C | C |\n> (757 rows)\n>\n> Shouldn't it be NULL?\n\nYeah, it should. I'll post a new patch next week that does that and\nalso removes the comment about how you can't have a mixture of NULL\nand non-NULL, and see if I can identify anything that depends on that.\nIf you'd like to do that, please go ahead.\n\n\n",
"msg_date": "Wed, 18 Dec 2019 23:02:02 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Collation versions on Windows (help wanted, apply within)"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 11:02 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Dec 17, 2019 at 1:40 AM Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n> > This is a resume to illustrate an issue with locale = 'C':\n> >\n> > postgres=# CREATE COLLATION c_test (locale = 'C');\n> > CREATE COLLATION\n> > postgres=# select collname, collprovider, collencoding, collcollate, collctype, collversion from pg_collation ;\n> > collname | collprovider | collencoding | collcollate | collctype | collversion\n> > ------------------------+--------------+--------------+------------------+------------------+-------------\n> > default | d | -1 | | | <NULL>\n> > C | c | -1 | C | C | <NULL>\n> > POSIX | c | -1 | POSIX | POSIX | <NULL>\n> > ucs_basic | c | 6 | C | C | <NULL>\n> > und-x-icu | i | -1 | und | und | 153.97\n> > [... resumed ...]\n> > c_test | c | 6 | C | C |\n> > (757 rows)\n> >\n> > Shouldn't it be NULL?\n\nDone in this new 0002 patch (untested). 0001 removes the comment that\nindividual collations can't have a NULL version, reports NULL for\nLinux/glibc collations like C.UTF-8 by stripping the suffix and\ncomparing with C and POSIX as suggested by Peter E.",
"msg_date": "Mon, 23 Mar 2020 17:58:53 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Collation versions on Windows (help wanted, apply within)"
},
{
"msg_contents": "On Mon, Mar 23, 2020 at 5:59 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n>\n> Done in this new 0002 patch (untested). 0001 removes the comment that\n> individual collations can't have a NULL version, reports NULL for\n> Linux/glibc collations like C.UTF-8 by stripping the suffix and\n> comparing with C and POSIX as suggested by Peter E.\n>\n\n It applies and passes tests without a problem in Windows, and works as\nexpected.\n\nRegards",
"msg_date": "Mon, 23 Mar 2020 19:56:03 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Collation versions on Windows (help wanted, apply within)"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 7:56 AM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n> On Mon, Mar 23, 2020 at 5:59 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Done in this new 0002 patch (untested). 0001 removes the comment that\n>> individual collations can't have a NULL version, reports NULL for\n>> Linux/glibc collations like C.UTF-8 by stripping the suffix and\n>> comparing with C and POSIX as suggested by Peter E.\n>\n> It applies and passes tests without a problem in Windows, and works as expected.\n\nThanks! Pushed.\n\n From the things we learned in this thread, I think there is an open\nitem for someone to write a patch to call EnumSystemLocalesEx() and\npopulate the initial set of collations, where we use \"locale -a\" on\nUnix. I'm not sure where the encoding is supposed to come from\nthough, which is why I didn't try to write a patch myself.\n\n\n",
"msg_date": "Wed, 25 Mar 2020 16:18:17 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Collation versions on Windows (help wanted, apply within)"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 4:18 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n>\n> Thanks! Pushed.\n>\n\nGreat!\n\n\n> From the things we learned in this thread, I think there is an open\n> item for someone to write a patch to call EnumSystemLocalesEx() and\n> populate the initial set of collations, where we use \"locale -a\" on\n> Unix. I'm not sure where the encoding is supposed to come from\n> though, which is why I didn't try to write a patch myself.\n>\n\nI will take a look at this when the current commitfest is over.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Thu, 26 Mar 2020 09:08:30 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Collation versions on Windows (help wanted, apply within)"
}
] |
[
{
"msg_contents": "Hi all\n\nI recently found the need to pretty-print the contents of pg_locks. So\nhere's a little helper to do it, for anyone else who happens to have that\nneed. pg_identify_object is far from adequate for the purpose. Reckon I\nshould turn it into C and submit?\n\n CREATE FUNCTION describe_pg_lock(IN l pg_locks,\n OUT lock_objtype text, OUT lock_objschema text,\n OUT lock_objname text, OUT lock_objidentity text,\n OUT lock_objdescription text)\n LANGUAGE sql VOLATILE RETURNS NULL ON NULL INPUT AS\n $$\n SELECT\n *,\n CASE\n WHEN l.locktype IN ('relation', 'extend') THEN\n 'relation ' || lo.lock_objidentity\n WHEN l.locktype = 'page' THEN\n 'relation ' || lo.lock_objidentity || ' page ' || l.page\n WHEN l.locktype = 'tuple' THEN\n 'relation ' || lo.lock_objidentity || ' page ' || l.page || ' tuple\n' || l.tuple\n WHEN l.locktype = 'transactionid' THEN\n 'transactionid ' || l.transactionid\n WHEN l.locktype = 'virtualxid' THEN\n 'virtualxid ' || l.virtualxid\n WHEN l.locktype = 'speculative token' THEN\n 'speculative token'\n WHEN lock_objidentity IS NOT NULL THEN\n l.locktype || ' ' || lo.lock_objidentity\n ELSE\n l.locktype\n END\n FROM (\n SELECT *\n FROM pg_identify_object('pg_class'::regclass, l.relation, 0)\n WHERE l.locktype IN ('relation', 'extend', 'page', 'tuple')\n UNION ALL\n SELECT *\n FROM pg_identify_object(l.classid, l.objid, l.objsubid)\n WHERE l.locktype NOT IN ('relation', 'extend', 'page', 'tuple')\n ) AS lo(lock_objtype, lock_objschema, lock_objname, lock_objidentity);\n $$;\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nHi allI recently found the need to pretty-print the contents of pg_locks. So here's a little helper to do it, for anyone else who happens to have that need. pg_identify_object is far from adequate for the purpose. Reckon I should turn it into C and submit? 
CREATE FUNCTION describe_pg_lock(IN l pg_locks, OUT lock_objtype text, OUT lock_objschema text, OUT lock_objname text, OUT lock_objidentity text, OUT lock_objdescription text) LANGUAGE sql VOLATILE RETURNS NULL ON NULL INPUT AS $$ SELECT *, CASE WHEN l.locktype IN ('relation', 'extend') THEN 'relation ' || lo.lock_objidentity WHEN l.locktype = 'page' THEN 'relation ' || lo.lock_objidentity || ' page ' || l.page WHEN l.locktype = 'tuple' THEN 'relation ' || lo.lock_objidentity || ' page ' || l.page || ' tuple ' || l.tuple WHEN l.locktype = 'transactionid' THEN 'transactionid ' || l.transactionid WHEN l.locktype = 'virtualxid' THEN 'virtualxid ' || l.virtualxid WHEN l.locktype = 'speculative token' THEN 'speculative token' WHEN lock_objidentity IS NOT NULL THEN l.locktype || ' ' || lo.lock_objidentity ELSE l.locktype END FROM ( SELECT * FROM pg_identify_object('pg_class'::regclass, l.relation, 0) WHERE l.locktype IN ('relation', 'extend', 'page', 'tuple') UNION ALL SELECT * FROM pg_identify_object(l.classid, l.objid, l.objsubid) WHERE l.locktype NOT IN ('relation', 'extend', 'page', 'tuple') ) AS lo(lock_objtype, lock_objschema, lock_objname, lock_objidentity); $$;-- Craig Ringer http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 8 Nov 2019 14:49:25 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Handy describe_pg_lock function"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-08 14:49:25 +0800, Craig Ringer wrote:\n> I recently found the need to pretty-print the contents of pg_locks. So\n> here's a little helper to do it, for anyone else who happens to have that\n> need. pg_identify_object is far from adequate for the purpose. Reckon I\n> should turn it into C and submit?\n\nYea, I think we need to make it easier for users to understand\nlocking. I kind of wonder whether part of the answer would be to change\nthe details that pg_locks shows, or add a pg_locks_detailed or such\n(presumably a more detailed version would include walking the dependency\ngraph to at least some degree, and thus more expensive).\n\nI think we probably could include the described lock as an extra column\nfor pg_locks, as part of a function call in the view targetlist. That\nway one would not pay the price when selecting from pg_locks without\nincluding the new columns.\n\nWonder if it'd be worth introducing a regdatabase type. It'd sure make\nviews like pg_stat_activity, pg_stat_statements, pg_locks, pg_shdepend\neasier to interpret (if we change the views to use regdatabase) / query\n(if not, it's just an added cast).\n\n\n> CREATE FUNCTION describe_pg_lock(IN l pg_locks,\n> OUT lock_objtype text, OUT lock_objschema text,\n> OUT lock_objname text, OUT lock_objidentity text,\n> OUT lock_objdescription text)\n> LANGUAGE sql VOLATILE RETURNS NULL ON NULL INPUT AS\n> $$\n> SELECT\n> *,\n> CASE\n> WHEN l.locktype IN ('relation', 'extend') THEN\n> 'relation ' || lo.lock_objidentity\n> WHEN l.locktype = 'page' THEN\n> 'relation ' || lo.lock_objidentity || ' page ' || l.page\n> WHEN l.locktype = 'tuple' THEN\n> 'relation ' || lo.lock_objidentity || ' page ' || l.page || ' tuple\n> ' || l.tuple\n> WHEN l.locktype = 'transactionid' THEN\n> 'transactionid ' || l.transactionid\n> WHEN l.locktype = 'virtualxid' THEN\n> 'virtualxid ' || l.virtualxid\n> WHEN l.locktype = 'speculative token' THEN\n> 'speculative token'\n> WHEN 
lock_objidentity IS NOT NULL THEN\n>           l.locktype || ' ' || lo.lock_objidentity\n>         ELSE\n>           l.locktype\n>       END\n>     FROM (\n>       SELECT *\n>       FROM pg_identify_object('pg_class'::regclass, l.relation, 0)\n>       WHERE l.locktype IN ('relation', 'extend', 'page', 'tuple')\n>       UNION ALL\n>       SELECT *\n>       FROM pg_identify_object(l.classid, l.objid, l.objsubid)\n>       WHERE l.locktype NOT IN ('relation', 'extend', 'page', 'tuple')\n>     ) AS lo(lock_objtype, lock_objschema, lock_objname, lock_objidentity);\n>     $$;\n\nI think you'd need to filter for database oid before doing the lock type\nidentification. Object oids are not guaranteed to be unique across\ndatabases. It's somewhat unlikely to hit in test scenarios, but in\nlonger lived databases it's quite possible (and e.g. more likely if a\nlot of toasted values exist, as each new toast value advances the\nnextoid counter). Presumably\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 9 Nov 2019 14:09:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Handy describe_pg_lock function"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-11-08 14:49:25 +0800, Craig Ringer wrote:\n>> I recently found the need to pretty-print the contents of pg_locks. So\n>> here's a little helper to do it, for anyone else who happens to have that\n>> need. pg_identify_object is far from adequate for the purpose. Reckon I\n>> should turn it into C and submit?\n\n> Yea, I think we need to make it easier for users to understand\n> locking. I kind of wonder whether part of the answer would be to change\n> the details that pg_locks shows, or add a pg_locks_detailed or such\n> (presumably a more detailed version would include walking the dependency\n> graph to at least some degree, and thus more expensive).\n\nI think the actual reason why pg_locks is so bare-bones is that it's\nnot supposed to require taking any locks of its own internally. If,\nfor example, we changed the database column so that it requires a lookup\nin pg_database, then the view would stop working if someone had an\nexclusive lock on pg_database --- pretty much exactly the kind of case\nyou might wish to be investigating with that view.\n\nI don't have any objection to adding a more user-friendly layer\nto use for normal cases, but I'm hesitant to add any gotchas like\nthat into the basic view.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 10 Nov 2019 00:42:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Handy describe_pg_lock function"
},
{
"msg_contents": "On Sun, 10 Nov 2019 at 13:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-11-08 14:49:25 +0800, Craig Ringer wrote:\n> >> I recently found the need to pretty-print the contents of pg_locks. So\n> >> here's a little helper to do it, for anyone else who happens to have\n> that\n> >> need. pg_identify_object is far from adequate for the purpose. Reckon I\n> >> should turn it into C and submit?\n>\n> > Yea, I think we need to make it easier for users to understand\n> > locking. I kind of wonder whether part of the answer would be to change\n> > the details that pg_locks shows, or add a pg_locks_detailed or such\n> > (presumably a more detailed version would include walking the dependency\n> > graph to at least some degree, and thus more expensive).\n>\n> I think the actual reason why pg_locks is so bare-bones is that it's\n> not supposed to require taking any locks of its own internally. If,\n> for example, we changed the database column so that it requires a lookup\n> in pg_database, then the view would stop working if someone had an\n> exclusive lock on pg_database --- pretty much exactly the kind of case\n> you might wish to be investigating with that view.\n>\n> I don't have any objection to adding a more user-friendly layer\n> to use for normal cases, but I'm hesitant to add any gotchas like\n> that into the basic view.\n>\n>\nYeah.\n\nYou can always query pg_catalog.pg_lock_status() directly, but that's not\nreally documented. I'd be fine with adding a secondary view.\n\nThat reminds me, I've been meaning to submit a decent \"find blocking lock\nrelationships\" view for some time too. It's absurd that people still have\nto crib half-broken code from the wiki (\nhttps://wiki.postgresql.org/wiki/Lock_Monitoring) to get a vaguely\ncomprehensible summary of what's waiting for what. 
We now\nhave pg_blocking_pids(), which is fantastic, but it's not AFAIK rolled into\nany user-friendly view to help users out so they have to roll their own.\n\nAnyone inclined to object to the addition of an official \"pg_lock_details\"\nview with info like in my example function, and a \"pg_lock_waiters\" or\n\"pg_locks_blocked\" view with info on blocking/blocked-by relationships? I'd\nbe inclined to add a C level function to help describe the lock subject of\na pg_locks row, then use that in system_views.sql for the \"pg_lock_details\"\nview. Then build a \"pg_lock_waiters\" view on top of it\nusing pg_blocking_pids(). Reasonable?\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Sun, 10 Nov 2019 at 13:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:Andres Freund <andres@anarazel.de> writes:\n> On 2019-11-08 14:49:25 +0800, Craig Ringer wrote:\n>> I recently found the need to pretty-print the contents of pg_locks. So\n>> here's a little helper to do it, for anyone else who happens to have that\n>> need. pg_identify_object is far from adequate for the purpose. Reckon I\n>> should turn it into C and submit?\n\n> Yea, I think we need to make it easier for users to understand\n> locking. I kind of wonder whether part of the answer would be to change\n> the details that pg_locks shows, or add a pg_locks_detailed or such\n> (presumably a more detailed version would include walking the dependency\n> graph to at least some degree, and thus more expensive).\n\nI think the actual reason why pg_locks is so bare-bones is that it's\nnot supposed to require taking any locks of its own internally. 
If,\nfor example, we changed the database column so that it requires a lookup\nin pg_database, then the view would stop working if someone had an\nexclusive lock on pg_database --- pretty much exactly the kind of case\nyou might wish to be investigating with that view.\n\nI don't have any objection to adding a more user-friendly layer\nto use for normal cases, but I'm hesitant to add any gotchas like\nthat into the basic view.Yeah.You can always query pg_catalog.pg_lock_status() directly, but that's not really documented. I'd be fine with adding a secondary view.That reminds me, I've been meaning to submit a decent \"find blocking lock relationships\" view for some time too. It's absurd that people still have to crib half-broken code from the wiki (https://wiki.postgresql.org/wiki/Lock_Monitoring) to get a vaguely comprehensible summary of what's waiting for what. We now have pg_blocking_pids(), which is fantastic, but it's not AFAIK rolled into any user-friendly view to help users out so they have to roll their own.Anyone inclined to object to the addition of an official \"pg_lock_details\" view with info like in my example function, and a \"pg_lock_waiters\" or \"pg_locks_blocked\" view with info on blocking/blocked-by relationships? I'd be inclined to add a C level function to help describe the lock subject of a pg_locks row, then use that in system_views.sql for the \"pg_lock_details\" view. Then build a \"pg_lock_waiters\" view on top of it using pg_blocking_pids(). Reasonable?-- Craig Ringer http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Sun, 10 Nov 2019 17:45:08 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Handy describe_pg_lock function"
},
{
"msg_contents": "On Sun, Nov 10, 2019 at 05:45:08PM +0800, Craig Ringer wrote:\n> On Sun, 10 Nov 2019 at 13:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2019-11-08 14:49:25 +0800, Craig Ringer wrote:\n> > >> I recently found the need to pretty-print the contents of pg_locks. So\n> > >> here's a little helper to do it, for anyone else who happens to have\n> > that\n> > >> need. pg_identify_object is far from adequate for the purpose. Reckon I\n> > >> should turn it into C and submit?\n> >\n> > > Yea, I think we need to make it easier for users to understand\n> > > locking. I kind of wonder whether part of the answer would be to change\n> > > the details that pg_locks shows, or add a pg_locks_detailed or such\n> > > (presumably a more detailed version would include walking the dependency\n> > > graph to at least some degree, and thus more expensive).\n> >\n> > I think the actual reason why pg_locks is so bare-bones is that it's\n> > not supposed to require taking any locks of its own internally. If,\n> > for example, we changed the database column so that it requires a lookup\n> > in pg_database, then the view would stop working if someone had an\n> > exclusive lock on pg_database --- pretty much exactly the kind of case\n> > you might wish to be investigating with that view.\n> >\n> > I don't have any objection to adding a more user-friendly layer\n> > to use for normal cases, but I'm hesitant to add any gotchas like\n> > that into the basic view.\n> >\n> >\n> Yeah.\n> \n> You can always query pg_catalog.pg_lock_status() directly, but that's not\n> really documented. I'd be fine with adding a secondary view.\n> \n> That reminds me, I've been meaning to submit a decent \"find blocking lock\n> relationships\" view for some time too. 
It's absurd that people still have\n> to crib half-broken code from the wiki (\n> https://wiki.postgresql.org/wiki/Lock_Monitoring) to get a vaguely\n> comprehensible summary of what's waiting for what. We now\n> have pg_blocking_pids(), which is fantastic, but it's not AFAIK rolled into\n> any user-friendly view to help users out so they have to roll their own.\n> \n> Anyone inclined to object to the addition of an official \"pg_lock_details\"\n> view with info like in my example function, and a \"pg_lock_waiters\" or\n> \"pg_locks_blocked\" view with info on blocking/blocked-by relationships? I'd\n> be inclined to add a C level function to help describe the lock subject of\n> a pg_locks row, then use that in system_views.sql for the \"pg_lock_details\"\n> view. Then build a \"pg_lock_waiters\" view on top of it\n> using pg_blocking_pids(). Reasonable?\n\nVery.\n\n+1\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Mon, 11 Nov 2019 18:32:39 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Handy describe_pg_lock function"
}
] |
[
{
"msg_contents": "Monitoring the available disk space is the topmost thing on the\npriority for PostgreSQL operation, yet this metric is not available\nfrom the SQL level.\n\nThe attached patch implements a function pg_tablespace_statfs(tblspc)\nto report disk space numbers per tablespace:\n\n# select * from pg_tablespace_statfs('pg_default');\n blocks │ bfree │ bavail │ files │ ffree\n───────────┼──────────┼──────────┼──────────┼──────────\n 103179564 │ 20829222 │ 20815126 │ 26214400 │ 24426295\n\nOpen points:\n* should these numbers be converted to bytes?\n* the column names currently mirror the statfs() names and should\n certainly be improved\n* which of these columns add to \\db+ output?\n* possibly extend this (and \\db) to pg_wal\n\nChristoph",
"msg_date": "Fri, 8 Nov 2019 14:24:19 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Monitoring disk space from within the server"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 2:24 PM Christoph Berg <myon@debian.org> wrote:\n>\n> Monitoring the available disk space is the topmost thing on the\n> priority for PostgreSQL operation, yet this metric is not available\n> from the SQL level.\n>\n> The attached patch implements a function pg_tablespace_statfs(tblspc)\n> to report disk space numbers per tablespace:\n>\n> # select * from pg_tablespace_statfs('pg_default');\n> blocks │ bfree │ bavail │ files │ ffree\n> ───────────┼──────────┼──────────┼──────────┼──────────\n> 103179564 │ 20829222 │ 20815126 │ 26214400 │ 24426295\n>\n> Open points:\n> * should these numbers be converted to bytes?\n> * the column names currently mirror the statfs() names and should\n> certainly be improved\n> * which of these columns add to \\db+ output?\n> * possibly extend this (and \\db) to pg_wal\n\nShouldn't we have something more generic, in hope that this eventually\nget implemented on Windows? I'm also wondering if getting the fs\ninformation is enough, as there might be quota.\n\n\n",
"msg_date": "Fri, 8 Nov 2019 14:32:13 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "Re: Julien Rouhaud 2019-11-08 <CAOBaU_YVGEnsnP1ufp42NiJ+WvPHRWBOsBOcaxWxsbXPN_sdNQ@mail.gmail.com>\n> Shouldn't we have something more generic, in hope that this eventually\n> get implemented on Windows? I'm also wondering if getting the fs\n> information is enough, as there might be quota.\n\nThe name is certainly not a good pick, it's not meant to be a raw\nstatfs() wrapper but something more high-level. I just went with that\nto have something working to start with.\n\nHow about these?\npg_tablespace_stats()\npg_tablespace_space()\npg_tablespace_disk_space()\n\nChristoph\n\n\n",
"msg_date": "Fri, 8 Nov 2019 14:35:31 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "Re: Julien Rouhaud 2019-11-08 <CAOBaU_YVGEnsnP1ufp42NiJ+WvPHRWBOsBOcaxWxsbXPN_sdNQ@mail.gmail.com>\n> I'm also wondering if getting the fs\n> information is enough, as there might be quota.\n\nWe could append the quotactl(Q_GETQUOTA) information as well, but I'm\nnot sure this has a sensible actual-users-to-noise ratio.\n\nChristoph\n\n\n",
"msg_date": "Fri, 8 Nov 2019 14:40:13 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 2:35 PM Christoph Berg <myon@debian.org> wrote:\n>\n> Re: Julien Rouhaud 2019-11-08 <CAOBaU_YVGEnsnP1ufp42NiJ+WvPHRWBOsBOcaxWxsbXPN_sdNQ@mail.gmail.com>\n> > Shouldn't we have something more generic, in hope that this eventually\n> > get implemented on Windows? I'm also wondering if getting the fs\n> > information is enough, as there might be quota.\n>\n> The name is certainly not a good pick, it's not meant to be a raw\n> statfs() wrapper but something more high-level. I just went with that\n> to have something working to start with.\n>\n> How about these?\n> pg_tablespace_stats()\n> pg_tablespace_space()\n> pg_tablespace_disk_space()\n\nThe related function on Windows is apparently GetDiskFreeSpaceA [1].\nIt'll probably be quite hard to get something consistent for most of\ncounters, so probably pg_tablespace_(disk_)space is the best name,\nproviding only total size and free size?\n\n[1] https://docs.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-getdiskfreespacea\n\n\n",
"msg_date": "Fri, 8 Nov 2019 14:48:58 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 2:40 PM Christoph Berg <myon@debian.org> wrote:\n>\n> Re: Julien Rouhaud 2019-11-08 <CAOBaU_YVGEnsnP1ufp42NiJ+WvPHRWBOsBOcaxWxsbXPN_sdNQ@mail.gmail.com>\n> > I'm also wondering if getting the fs\n> > information is enough, as there might be quota.\n>\n> We could append the quotactl(Q_GETQUOTA) information as well, but I'm\n> not sure this has a sensible actual-users-to-noise ratio.\n\nWell, having a quota is one of the few real reason to create a\ntablespace so it's probably worth it, although I have to agree that I\nseldom saw quota in production.\n\n\n",
"msg_date": "Fri, 8 Nov 2019 14:55:25 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "Re: Julien Rouhaud 2019-11-08 <CAOBaU_Zu6RP6-mHyA_J9-xkxJe0tarTVqU9TFza+tCPKUxsjiA@mail.gmail.com>\n> The related function on Windows is apparently GetDiskFreeSpaceA [1].\n\nThere's a link to GetDiskFreeSpaceExA() which seems much easier to use\nbecause it accepts any directory on the drive in question:\n\nhttps://docs.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-getdiskfreespaceexa\n\n> It'll probably be quite hard to get something consistent for most of\n> counters, so probably pg_tablespace_(disk_)space is the best name,\n> providing only total size and free size?\n\nSo, how about:\n\npg_tablespace_disk_space -> total_bytes | free_bytes\n\nThe inode numbers are probably not very interesting in a PG tablespace\nas we don't create that many files. Or do we think including these\ncounters (on UNIX) makes sense?\n\nThere's precedents for leaving fields NULL where not supported by the\nOS, for example pg_stat_file() returns \"change\" on UNIX only, and\n\"creation\" on Windows only.\n\nChristoph\n\n\n",
"msg_date": "Fri, 8 Nov 2019 14:58:30 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "Re: Julien Rouhaud 2019-11-08 <CAOBaU_ay0FT6dFt61Pae77pHEu6sny3xM43L4i-pPi5kKkguxQ@mail.gmail.com>\n> > We could append the quotactl(Q_GETQUOTA) information as well, but I'm\n> > not sure this has a sensible actual-users-to-noise ratio.\n> \n> Well, having a quota is one of the few real reason to create a\n> tablespace so it's probably worth it, although I have to agree that I\n> seldom saw quota in production.\n\nGiven that PG deals badly with one tablespace being full (might work\nin production, but if it's full during recovery, the whole server will\nstop), I've never seen quotas being used.\n\nChristoph\n\n\n",
"msg_date": "Fri, 8 Nov 2019 15:00:03 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 2:58 PM Christoph Berg <myon@debian.org> wrote:\n>\n> Re: Julien Rouhaud 2019-11-08 <CAOBaU_Zu6RP6-mHyA_J9-xkxJe0tarTVqU9TFza+tCPKUxsjiA@mail.gmail.com>\n> > The related function on Windows is apparently GetDiskFreeSpaceA [1].\n>\n> There's a link to GetDiskFreeSpaceExA() which seems much easier to use\n> because it accepts any directory on the drive in question:\n>\n> https://docs.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-getdiskfreespaceexa\n>\n> > It'll probably be quite hard to get something consistent for most of\n> > counters, so probably pg_tablespace_(disk_)space is the best name,\n> > providing only total size and free size?\n>\n> So, how about:\n>\n> pg_tablespace_disk_space -> total_bytes | free_bytes\n>\n> The inode numbers are probably not very interesting in a PG tablespace\n> as we don't create that many files. Or do we think including these\n> counters (on UNIX) makes sense?\n\nAgreed, inodes are probably not very useful there.\n\n> There's precedents for leaving fields NULL where not supported by the\n> OS, for example pg_stat_file() returns \"change\" on UNIX only, and\n> \"creation\" on Windows only.\n> [...]\n> Given that PG deals badly with one tablespace being full (might work\n> in production, but if it's full during recovery, the whole server will\n> stop), I've never seen quotas being used.\n\nI'm +1 on \"pg_tablespace_disk_space -> total_bytes | free_bytes\"\n\n\n",
"msg_date": "Fri, 8 Nov 2019 15:10:57 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "On Fri, Nov 08, 2019 at 02:24:19PM +0100, Christoph Berg wrote:\n>Monitoring the available disk space is the topmost thing on the\n>priority for PostgreSQL operation, yet this metric is not available\n>from the SQL level.\n>\n\nWhile I agree monitoring disk space is important, I think pretty much\nevery deployment already does that using some other monitoring tool\n(which also monitors million other things).\n\nAlso, I wonder how universal / reliable this actually is, considering\nthe range of filesystems and related stuff (thin provisioning, quotas,\n...) people use in production. I do recall a number of cases when \"df\"\nwas showing a plenty of free space, but one of the internal resources\nfor that particular filesystem was exhausted. I doubt it's desirable to\nadd all this knowledge into PostgreSQL.\n\nIt's not clear to me what issue this is actually meant to solve - it\nprovides data, which is nice, but it still needs to be fed to some\nmotinoring and alerting system. And every monitoring system has a good\nplugin to collect this type of data, so why not to use that?\n\nSurely, we can't rely on this for any internal logic - so why not to\nprovide this as an extension?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 8 Nov 2019 15:50:25 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "Re: Tomas Vondra 2019-11-08 <20191108145025.d7pfcip6plufxiah@development>\n> While I agree monitoring disk space is important, I think pretty much\n> every deployment already does that using some other monitoring tool\n> (which also monitors million other things).\n\nThere are plenty of deployments where that isn't true, either because\nthey aren't doing any monitoring, or (probably more commonly) because\nthe OS monitoring is done by the OS department and the DB department\ndoesn't have good access to the figures, and possibly not even any\nshell access.\n\nOffering the numbers on the database level would make monitoring\neasier for these users, and also provide the numbers on the level\nwhere they might be useful. (\"Do I have enough disk space to load this\n5GB dump now?\")\n\n> Also, I wonder how universal / reliable this actually is, considering\n> the range of filesystems and related stuff (thin provisioning, quotas,\n> ...) people use in production. I do recall a number of cases when \"df\"\n> was showing a plenty of free space, but one of the internal resources\n> for that particular filesystem was exhausted. I doubt it's desirable to\n> add all this knowledge into PostgreSQL.\n\nThat might be partly true, e.g. btrfs traditionally didn't support\n\"df\" but only \"btrfs df\". But this got fixed in the meantime, and just\nbecause there are weird filesystems doesn't mean we shouldn't try to\nsupport the normal case where statfs() just works.\n\n> It's not clear to me what issue this is actually meant to solve - it\n> provides data, which is nice, but it still needs to be fed to some\n> motinoring and alerting system. And every monitoring system has a good\n> plugin to collect this type of data, so why not to use that?\n\nWhat's wrong with providing nice data? 
It doesn't hurt to have it.\nAnd the cost of the implementation is low.\n\n> Surely, we can't rely on this for any internal logic - so why not to\n> provide this as an extension?\n\nBy the same argument you could also argue that \\l+ should be an\nextension because database size is optional to know.\n\nI think this should be directly in core because it's useful to a wide\nrange of users.\n\nChristoph\n\n\n",
"msg_date": "Fri, 8 Nov 2019 16:06:21 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "Re: To Tomas Vondra 2019-11-08 <20191108150621.GL8017@msg.df7cb.de>\n> I think this should be directly in core because it's useful to a wide\n> range of users.\n\nAlso, I want to have it in \\db+ in psql where users would actually be\nlooking for it.\n\nChristoph\n\n\n",
"msg_date": "Fri, 8 Nov 2019 16:12:22 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "On Fri, Nov 08, 2019 at 04:06:21PM +0100, Christoph Berg wrote:\n>Re: Tomas Vondra 2019-11-08 <20191108145025.d7pfcip6plufxiah@development>\n>> While I agree monitoring disk space is important, I think pretty much\n>> every deployment already does that using some other monitoring tool\n>> (which also monitors million other things).\n>\n>There are plenty of deployments where that isn't true, either because\n>they aren't doing any monitoring, or (probably more commonly) because\n>the OS monitoring is done by the OS department and the DB department\n>doesn't have good access to the figures, and possibly not even any\n>shell access.\n>\n\nIt might sound a bit annoying, but I suspect deployments where the DBAs\ndoes not have access to such basic system metrics have bigger issues and\ngiving them one particular piece of information is just a bandaid.\n\n>Offering the numbers on the database level would make monitoring\n>easier for these users, and also provide the numbers on the level\n>where they might be useful. (\"Do I have enough disk space to load this\n>5GB dump now?\")\n>\n>> Also, I wonder how universal / reliable this actually is, considering\n>> the range of filesystems and related stuff (thin provisioning, quotas,\n>> ...) people use in production. I do recall a number of cases when \"df\"\n>> was showing a plenty of free space, but one of the internal resources\n>> for that particular filesystem was exhausted. I doubt it's desirable to\n>> add all this knowledge into PostgreSQL.\n>\n>That might be partly true, e.g. btrfs traditionally didn't support\n>\"df\" but only \"btrfs df\". But this got fixed in the meantime, and just\n>because there are weird filesystems doesn't mean we shouldn't try to\n>support the normal case where statfs() just works.\n>\n\nMaybe, but if you suggest to show the information in \\dn+ then we should\nat least know how common / likely those issues are. 
Because if they are\nanything but extremely uncommon, we'll get plenty of bogus bug reports\ncomplaining about inaccurate information.\n\n>> It's not clear to me what issue this is actually meant to solve - it\n>> provides data, which is nice, but it still needs to be fed to some\n>> motinoring and alerting system. And every monitoring system has a good\n>> plugin to collect this type of data, so why not to use that?\n>\n>What's wrong with providing nice data? It doesn't hurt to have it.\n>And the cost of the implementation is low.\n>\n\nMy point was that there are other (better, already existing) ways to get\nthe data, and getting them through database means extra complexity (I do\nagree it's not a lot of code at this point, though).\n\nOf course, if your assumption is that using those other ways to get the\ndata is impossible, then sure - adding this makes sense.\n\n>> Surely, we can't rely on this for any internal logic - so why not to\n>> provide this as an extension?\n>\n>By the same argument you could also argue that \\l+ should be an\n>extension because database size is optional to know.\n>\n\nThat's not really my argument, though. I'm not suggesting everything\n\"optional\" should be in an extension, but that there's nothing forcing\nus to make this directly part of the core.\n\nAnd packaging stuff into extension has advantages too (independent dev\ncycle, and so on).\n\n>I think this should be directly in core because it's useful to a wide\n>range of users.\n>\n\nI'm not convinced that's actually true. It might be, but I don't have\nany data to support it (or vice versa).\n\n\ncheers\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 8 Nov 2019 22:11:16 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "On Fri, Nov 08, 2019 at 03:10:57PM +0100, Julien Rouhaud wrote:\n> Agreed, inodes are probably not very useful there.\n\nTotal bytes and free bytes looks like a good first cut. Have you\nlooked at the portability of statfs() on other BSD flavors and\nSolaris? I recall from a lookup at statvfs() that these may not be\npresent everywhere. I'd think that we can live with a configure\nswitch and complain with an error or a warning if we are out of\noptions on a given platform.\n--\nMichael",
"msg_date": "Sat, 9 Nov 2019 10:56:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "## Michael Paquier (michael@paquier.xyz):\n\n> Total bytes and free bytes looks like a good first cut. Have you\n> looked at the portability of statfs() on other BSD flavors and\n> Solaris?\n\n\"The statfs() system call first appeared in 4.4BSD.\" (from the statfs(2)\nmanpage on FreeBSD). struct statfs differs between Linux and BSD, but\nis \"close enough\" for this, the fields from the original patch are\npresent in both implementations.\nSolaris does not have statfs() anymore. Instead, it has a statvfs()\nwhich is \"more or less equivalent\" to the Linux statvfs(). On FreeBSD,\nusing statvfs() (it's available) is rather not recommended, from the\nman page:\n The statvfs() and fstatvfs() functions fill the structure pointed\n to by buf with garbage. This garbage will occasionally bear resemblance\n to file system statistics, but portable applications must not depend on\n this.\nThat's funny, as statvfs() is in our beloved POSIX.1 since at least\n2001 - current specs:\nhttps://pubs.opengroup.org/onlinepubs/9699919799/functions/fstatvfs.html\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n",
"msg_date": "Sat, 9 Nov 2019 14:33:49 +0100",
"msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "On Fri, 2019-11-08 at 14:24 +0100, Christoph Berg wrote:\n> Monitoring the available disk space is the topmost thing on the\n> priority for PostgreSQL operation, yet this metric is not available\n> from the SQL level.\n> \n> The attached patch implements a function pg_tablespace_statfs(tblspc)\n> to report disk space numbers per tablespace:\n> \n> # select * from pg_tablespace_statfs('pg_default');\n> blocks │ bfree │ bavail │ files │ ffree\n> ───────────┼──────────┼──────────┼──────────┼──────────\n> 103179564 │ 20829222 │ 20815126 │ 26214400 │ 24426295\n> \n> Open points:\n> * should these numbers be converted to bytes?\n> * the column names currently mirror the statfs() names and should\n> certainly be improved\n> * which of these columns add to \\db+ output?\n> * possibly extend this (and \\db) to pg_wal\n\nWill this work on Windows?\nA quick web search seems to indicate that Windows has no statfs(2).\n\nWhat's more is that the Linux man page says that statfs(2) is\nLinux-specific.\n\nI think that if we have such a feature (which I think would be useful)\nshould be available for all operating systems supported by PostgreSQL.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 11 Nov 2019 21:11:35 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "On Sat, Nov 09, 2019 at 02:33:49PM +0100, Christoph Moench-Tegeder wrote:\n> \"The statfs() system call first appeared in 4.4BSD.\" (from the statfs(2)\n> manpage on FreeBSD). struct statfs differs between Linux and BSD, but\n> is \"close enough\" for this, the fields from the original patch are\n> present in both implementations.\n> Solaris does not have statfs() anymore. Instead, it has a statvfs()\n> which is \"more or less equivalent\" to the Linux statvfs(). On FreeBSD,\n> using statvfs() (it's available) is rather not recommended, from the\n> man page:\n> The statvfs() and fstatvfs() functions fill the structure pointed\n> to by buf with garbage. This garbage will occasionally bear resemblance\n> to file system statistics, but portable applications must not depend on\n> this.\n> That's funny, as statvfs() is in our beloved POSIX.1 since at least\n> 2001 - current specs:\n> https://pubs.opengroup.org/onlinepubs/9699919799/functions/fstatvfs.html\n\nThanks for looking at that. The point of FreeBSD is interesting to\nknow. So this basically would leave us with the following hierarchy\nto grab the data:\n1) statfs()\n2) statvfs()\n3) Windows-specific implementation\n4) Complain if nothing is present\n\nFor the free space, then we just need (f_bsize * f_bfree), and the\ntotal is (f_blocks * f_bsize).\n\nAny opinions?\n--\nMichael",
"msg_date": "Tue, 12 Nov 2019 10:46:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "On Mon, Nov 11, 2019 at 09:11:35PM +0100, Laurenz Albe wrote:\n> Will this work on Windows?\n> A quick web search seems to indicate that Windows has no statfs(2).\n\nIt won't. We are actually discussing the compatibility aspects and\nthe minimal data set we could grab in a different part of the thread.\n--\nMichael",
"msg_date": "Tue, 12 Nov 2019 10:47:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "> On 12 Nov 2019, at 02:46, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Any opinions?\n\nI agree with Tomas upthread that it's unclear whether this needs to be in core.\nThere are many system parameters a database admin is likely to be interested\nin, diskspace being just one of them (albeit a very important one for many\nreasons), and there is nothing that makes the SQL interface (or postgres core\nfor that matter) particularly more suited for this job than other existing\ntools.\n\nWhy is SQL level crucial for this?\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 12 Nov 2019 09:47:47 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 09:47:47AM +0100, Daniel Gustafsson wrote:\n> I agree with Tomas upthread that it's unclear whether this needs to be in core.\n> There are many system parameters a database admin is likely to be interested\n> in, diskspace being just one of them (albeit a very important one for many\n> reasons), and there is nothing that makes the SQL interface (or postgres core\n> for that matter) particularly more suited for this job than other existing\n> tools.\n> \n> Why is SQL level crucial for this?\n\nBecause this makes the monitoring experience easier from a remote\nperspective. FWIW, I have cases it would have been useful to monitor\nout the physical amount of space available with the amount of space\ncovered by bloated tables for a given set of tablespaces through a\nsingle source.\n--\nMichael",
"msg_date": "Tue, 12 Nov 2019 18:03:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "Re: Daniel Gustafsson 2019-11-12 <7A3B9BB6-BEA0-466E-98A9-B4DD8F04830E@yesql.se>\n> I agree with Tomas upthread that it's unclear whether this needs to be in core.\n> There are many system parameters a database admin is likely to be interested\n> in, diskspace being just one of them (albeit a very important one for many\n> reasons), and there is nothing that makes the SQL interface (or postgres core\n> for that matter) particularly more suited for this job than other existing\n> tools.\n> \n> Why is SQL level crucial for this?\n\nBecause the figure is interesting to users as well. They will usually\nnot have any access to monitoring, and checking if they can load this\nextra 10 GB dataset is a good use case.\n\nThis is about providing the numbers in the place where they are\nneeded. Of course admins can just go elsewhere to look it up (and\nprobably will), but I think now there's a usability gap for people who\njust have SQL access.\n\nChristoph\n\n\n",
"msg_date": "Tue, 12 Nov 2019 10:04:24 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Monitoring disk space from within the server"
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 2:48 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Nov 11, 2019 at 09:11:35PM +0100, Laurenz Albe wrote:\n> > Will this work on Windows?\n> > A quick web search seems to indicate that Windows has no statfs(2).\n>\n> It won't. We are actually discussing the compatibility aspects and\n> the minimal data set we could grab in a different part of the thread.\n\nFor the record I already mentioned Windows specificity in [1] and\nGetDiskFreeSpaceA [2] looks like the function to use on windows.\n\n[1] https://www.postgresql.org/message-id/CAOBaU_Zu6RP6-mHyA_J9-xkxJe0tarTVqU9TFza%2BtCPKUxsjiA%40mail.gmail.com\n[2] https://docs.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-getdiskfreespacea\n\n\n",
"msg_date": "Tue, 12 Nov 2019 10:07:28 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring disk space from within the server"
}
] |
[
{
"msg_contents": "Hello all,\n\nI would like to direct your attention to the queries of following type,\nselect <some_column(s)>\nfrom <table_name>\nwhere <some_column> IN (<a_list_of_some_values>)\n\nthe plan for such a query uses index scan (or index-only), now in our\nexperiments, if the provided list is sorted then query performance\nimproves by ~10%. Which makes sense also as once we have found the required\nbtree leaf we just keep moving in one direction, which should be\nexpectantly less time consuming than searching the tree again.\n\nNow, my question is shouldn't we always use this list in sorted order, in\nother words can there be scenarios where such a sorting will not help? I am\ntalking about only the cases where the list consists of all constants and\ncould fit in memory. Basically, when we are transforming the in expression\nand found that it consists of all constants, then sort it as well, codewise\nat transfromAExprIn, of course there might be better ways to accomplish\nthis.\n\nSo, your thoughts, opinions, suggestions are more than welcome.\n\n-- \nRegards,\nRafia Sabih",
"msg_date": "Fri, 8 Nov 2019 14:52:12 +0100",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": true,
"msg_subject": "Performance improvement for queries with IN clause"
},
{
"msg_contents": "On 11/8/19 2:52 PM, Rafia Sabih wrote:\n> Now, my question is shouldn't we always use this list in sorted order, \n> in other words can there be scenarios where such a sorting will not \n> help? I am talking about only the cases where the list consists of all \n> constants and could fit in memory. Basically, when we are \n> transforming the in expression and found that it consists of all \n> constants, then sort it as well, codewise at transfromAExprIn, of course \n> there might be better ways to accomplish this.\n> \n> So, your thoughts, opinions, suggestions are more than welcome.\n\nIf it is worth sorting them should depend on the index, e.g. for hash \nindexes sorting would just be a waste of time.\n\nAndreas\n\n\n",
"msg_date": "Sat, 9 Nov 2019 12:52:21 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Performance improvement for queries with IN clause"
},
{
"msg_contents": "On Sat, Nov 9, 2019 at 5:22 PM Andreas Karlsson <andreas@proxel.se> wrote:\n>\n> On 11/8/19 2:52 PM, Rafia Sabih wrote:\n> > Now, my question is shouldn't we always use this list in sorted order,\n> > in other words can there be scenarios where such a sorting will not\n> > help? I am talking about only the cases where the list consists of all\n> > constants and could fit in memory. Basically, when we are\n> > transforming the in expression and found that it consists of all\n> > constants, then sort it as well, codewise at transfromAExprIn, of course\n> > there might be better ways to accomplish this.\n> >\n> > So, your thoughts, opinions, suggestions are more than welcome.\n>\n> If it is worth sorting them should depend on the index, e.g. for hash\n> indexes sorting would just be a waste of time.\n>\n\nI think we also need to be careful that this might lead to regression\nfor cases where the list is already sorted.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 9 Nov 2019 17:41:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance improvement for queries with IN clause"
},
{
"msg_contents": "On Sat, 9 Nov 2019 at 12:52, Andreas Karlsson <andreas@proxel.se> wrote:\n\n> On 11/8/19 2:52 PM, Rafia Sabih wrote:\n> > Now, my question is shouldn't we always use this list in sorted order,\n> > in other words can there be scenarios where such a sorting will not\n> > help? I am talking about only the cases where the list consists of all\n> > constants and could fit in memory. Basically, when we are\n> > transforming the in expression and found that it consists of all\n> > constants, then sort it as well, codewise at transfromAExprIn, of course\n> > there might be better ways to accomplish this.\n> >\n> > So, your thoughts, opinions, suggestions are more than welcome.\n>\n> If it is worth sorting them should depend on the index, e.g. for hash\n> indexes sorting would just be a waste of time.\n\n\nHi Andreas,\nThanks for your response. Here, I meant this list sorting only for Btree,\nas you well pointed out that for other indexes like hash this wouldn't\nreally make sense.\n\n\n-- \nRegards,\nRafia Sabih",
"msg_date": "Mon, 11 Nov 2019 10:14:26 +0100",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance improvement for queries with IN clause"
}
] |
[
{
"msg_contents": "Folks,\nWe've been seeing nearly daily crashes from a PostgreSQL 9.6 application that is heavily\ndependent on the HLL extension (v 2.10.2). All these crashes are from inside the HLL\nbitstream_unpack function. Usually they're from an INSERT VALUES statement, but\noccasionally they are from an hll_cardinality call in a query.\nI think I've identified the root cause, but I'd like someone who is familiar with the code\nin the HLL library to confirm my hypothesis:\n In bitstream_unpack it pulls a full quadword of data out of the bitstream using the\n brc_curp pointer. Usually this is not a problem. However, if the brc_curp pointer is\n less than 8 bytes from the end of the bitstream data, then that quadword read is\n reading past the end of the actual bitstream data. Because of the subsequent bit\n reordering, shifting, and masking this has no effect of the answers. However, when\n the end of the bitstream is very close to the end of an OS page then the quadword\n read will attempt to read the next OS page, and if that next OS page does not exist\n in this process, then it will SEGV.\n\nI posted this as a comment in the HLL GitHub, but have yet to get a response there:\n https://github.com/citusdata/postgresql-hll/issues/84\n\nThanks for any assistance!",
"msg_date": "Fri, 8 Nov 2019 15:00:00 +0000",
"msg_from": "\"Kirk, Steve\" <stkir@amazon.com>",
"msg_from_op": true,
"msg_subject": "Frequent HLL bitstream_unpack crashes"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 8:30 PM Kirk, Steve <stkir@amazon.com> wrote:\n>\n> I posted this as a comment in the HLL GitHub, but have yet to get a response there:\n>\n> https://github.com/citusdata/postgresql-hll/issues/84\n>\n\nI don't think this is the right mailing list to expect an answer to\nthis problem. This seems to be something related to citusdata's\nextension. Even, if this turns out to be a problem of core Postgres,\nit is better to present a test or scenario describing the problem in\nPostgres.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 9 Nov 2019 08:53:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Frequent HLL bitstream_unpack crashes"
}
] |
[
{
"msg_contents": "Hi,\nPlease can anybody review and commit this patch.\n\nThanks.\n\nRanier Vilela\n\n--- \\dll\\postgresql-12.0\\a\\backend\\libpq\\auth.c\tMon Sep 30 17:06:55 2019\n+++ auth.c\tFri Nov 08 14:27:17 2019\n@@ -1815,6 +1815,7 @@\n \tchar\t\tident_user[IDENT_USERNAME_MAX + 1];\n \tpgsocket\tsock_fd = PGINVALID_SOCKET; /* for talking to Ident server */\n \tint\t\t\trc;\t\t\t\t/* Return code from a locally called function */\n+\tint\t\t\tident_query_len;\n \tbool\t\tident_return;\n \tchar\t\tremote_addr_s[NI_MAXHOST];\n \tchar\t\tremote_port[NI_MAXSERV];\n@@ -1913,7 +1914,7 @@\n \t}\n \n \t/* The query we send to the Ident server */\n-\tsnprintf(ident_query, sizeof(ident_query), \"%s,%s\\r\\n\",\n+\tident_query_len = snprintf(ident_query, sizeof(ident_query), \"%s,%s\\r\\n\",\n \t\t\t remote_port, local_port);\n \n \t/* loop in case send is interrupted */\n@@ -1921,7 +1922,7 @@\n \t{\n \t\tCHECK_FOR_INTERRUPTS();\n \n-\t\trc = send(sock_fd, ident_query, strlen(ident_query), 0);\n+\t\trc = send(sock_fd, ident_query, ident_query_len, 0);\n \t} while (rc < 0 && errno == EINTR);\n \n \tif (rc < 0)\n@@ -3053,6 +3054,8 @@\n \tchar\t *receive_buffer = (char *) &radius_recv_pack;\n \tint32\t\tservice = pg_hton32(RADIUS_AUTHENTICATE_ONLY);\n \tuint8\t *cryptvector;\n+\tint\t\t\tsecretlen;\n+\tint\t\t\tpasswdlen;\n \tint\t\t\tencryptedpasswordlen;\n \tuint8\t\tencryptedpassword[RADIUS_MAX_PASSWORD_LENGTH];\n \tuint8\t *md5trailer;\n@@ -3125,10 +3128,12 @@\n \tmemcpy(cryptvector, secret, strlen(secret));\n \n \t/* for the first iteration, we use the Request Authenticator vector */\n+ secretlen = strlen(secret);\n+ passwdlen = strlen(passwd);\n \tmd5trailer = packet->vector;\n \tfor (i = 0; i < encryptedpasswordlen; i += RADIUS_VECTOR_LENGTH)\n \t{\n-\t\tmemcpy(cryptvector + strlen(secret), md5trailer, RADIUS_VECTOR_LENGTH);\n+\t\tmemcpy(cryptvector + secretlen, md5trailer, RADIUS_VECTOR_LENGTH);\n \n \t\t/*\n \t\t * .. 
and for subsequent iterations the result of the previous XOR\n@@ -3136,7 +3141,7 @@\n \t\t */\n \t\tmd5trailer = encryptedpassword + i;\n \n-\t\tif (!pg_md5_binary(cryptvector, strlen(secret) + RADIUS_VECTOR_LENGTH, encryptedpassword + i))\n+\t\tif (!pg_md5_binary(cryptvector, secretlen + RADIUS_VECTOR_LENGTH, encryptedpassword + i))\n \t\t{\n \t\t\tereport(LOG,\n \t\t\t\t\t(errmsg(\"could not perform MD5 encryption of password\")));\n@@ -3147,7 +3152,7 @@\n \n \t\tfor (j = i; j < i + RADIUS_VECTOR_LENGTH; j++)\n \t\t{\n-\t\t\tif (j < strlen(passwd))\n+\t\t\tif (j < passwdlen)\n \t\t\t\tencryptedpassword[j] = passwd[j] ^ encryptedpassword[j];\n \t\t\telse\n \t\t\t\tencryptedpassword[j] = '\\0' ^ encryptedpassword[j];\n@@ -3329,7 +3334,7 @@\n \t\t * Verify the response authenticator, which is calculated as\n \t\t * MD5(Code+ID+Length+RequestAuthenticator+Attributes+Secret)\n \t\t */\n-\t\tcryptvector = palloc(packetlength + strlen(secret));\n+\t\tcryptvector = palloc(packetlength + secretlen);\n \n \t\tmemcpy(cryptvector, receivepacket, 4);\t/* code+id+length */\n \t\tmemcpy(cryptvector + 4, packet->vector, RADIUS_VECTOR_LENGTH);\t/* request\n@@ -3338,10 +3343,10 @@\n \t\tif (packetlength > RADIUS_HEADER_LENGTH)\t/* there may be no\n \t\t\t\t\t\t\t\t\t\t\t\t\t * attributes at all */\n \t\t\tmemcpy(cryptvector + RADIUS_HEADER_LENGTH, receive_buffer + RADIUS_HEADER_LENGTH, packetlength - RADIUS_HEADER_LENGTH);\n-\t\tmemcpy(cryptvector + packetlength, secret, strlen(secret));\n+\t\tmemcpy(cryptvector + packetlength, secret, secretlen);\n \n \t\tif (!pg_md5_binary(cryptvector,\n-\t\t\t\t\t\t packetlength + strlen(secret),\n+\t\t\t\t\t\t packetlength + secretlen,\n \t\t\t\t\t\t encryptedpassword))\n \t\t{\n \t\t\tereport(LOG,",
"msg_date": "Fri, 8 Nov 2019 17:41:40 +0000",
"msg_from": "Ranier VF <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Patch avoid call strlen repeatedly in loop."
},
{
"msg_contents": "\n\nOn 11/8/19 9:41 AM, Ranier VF wrote:\n> --- \\dll\\postgresql-12.0\\a\\backend\\libpq\\auth.c\tMon Sep 30 17:06:55 2019\n> +++ auth.c\tFri Nov 08 14:27:17 2019\n> @@ -1815,6 +1815,7 @@\n> \tchar\t\tident_user[IDENT_USERNAME_MAX + 1];\n> \tpgsocket\tsock_fd = PGINVALID_SOCKET; /* for talking to Ident server */\n> \tint\t\t\trc;\t\t\t\t/* Return code from a locally called function */\n> +\tint\t\t\tident_query_len;\n> \tbool\t\tident_return;\n> \tchar\t\tremote_addr_s[NI_MAXHOST];\n> \tchar\t\tremote_port[NI_MAXSERV];\n> @@ -1913,7 +1914,7 @@\n> \t}\n> \n> \t/* The query we send to the Ident server */\n> -\tsnprintf(ident_query, sizeof(ident_query), \"%s,%s\\r\\n\",\n> +\tident_query_len = snprintf(ident_query, sizeof(ident_query), \"%s,%s\\r\\n\",\n> \t\t\t remote_port, local_port);\n> \n> \t/* loop in case send is interrupted */\n> @@ -1921,7 +1922,7 @@\n> \t{\n> \t\tCHECK_FOR_INTERRUPTS();\n> \n> -\t\trc = send(sock_fd, ident_query, strlen(ident_query), 0);\n> +\t\trc = send(sock_fd, ident_query, ident_query_len, 0);\n\nHello Ranier,\n\nIn general, writing a string with snprintf and then calling strlen on \nthat same string is not guaranteed to give the same lengths. You can \neasily construct a case where they differ:\n\n char foo[3] = {0};\n int foolen;\n foolen = snprintf(foo, sizeof(foo), \"%s\", \"xxxxxxxx\");\n printf(\"strlen(foo) = %u, foolen = %u, foo = '%s'\\n\", strlen(foo), \nfoolen, foo);\n\nUsing standard snprintf (and not pg_snprintf), I get:\n\n strlen(foo) = 2, foolen = 8, foo = 'xx'\n\nPerhaps an analysis of the surrounding code would prove that in all \ncases this particular snprintf will return the same result that \nstrlen(ident_query) would return, but I don't care to do the analysis. 
\nI think the way it is coded is easier to read, and probably more robust \nagainst future changes, even if your proposed change happens to be safe \ntoday.\n\nAs for calling strlen(ident_query) just once, caching that result, and \nthen looping, I don't immediately see a problem, but I also don't expect \nthat loop to run more than one iteration except under unusual instances. \n Do you find that send() gets interrupted a lot? Is \nstrlen(ident_query) taking long enough to be significant compared to how \nlong send() takes?\n\nA bit more information about the performance problem you are \nencountering might make it easier to understand the motivation for this \npatch.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Fri, 8 Nov 2019 16:12:12 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch avoid call strlen repeatedly in loop."
},
{
"msg_contents": "\n\n________________________________________\nDe: Mark Dilger <hornschnorter@gmail.com>\nEnviado: sábado, 9 de novembro de 2019 00:12\nPara: Ranier VF; pgsql-hackers@lists.postgresql.org\nAssunto: Re: Patch avoid call strlen repeatedly in loop.\n\n\n\nOn 11/8/19 9:41 AM, Ranier VF wrote:\n> --- \\dll\\postgresql-12.0\\a\\backend\\libpq\\auth.c Mon Sep 30 17:06:55 2019\n> +++ auth.c Fri Nov 08 14:27:17 2019\n> @@ -1815,6 +1815,7 @@\n> char ident_user[IDENT_USERNAME_MAX + 1];\n> pgsocket sock_fd = PGINVALID_SOCKET; /* for talking to Ident server */\n> int rc; /* Return code from a locally called function */\n> + int ident_query_len;\n> bool ident_return;\n> char remote_addr_s[NI_MAXHOST];\n> char remote_port[NI_MAXSERV];\n> @@ -1913,7 +1914,7 @@\n> }\n>\n> /* The query we send to the Ident server */\n> - snprintf(ident_query, sizeof(ident_query), \"%s,%s\\r\\n\",\n> + ident_query_len = snprintf(ident_query, sizeof(ident_query), \"%s,%s\\r\\n\",\n> remote_port, local_port);\n>\n> /* loop in case send is interrupted */\n> @@ -1921,7 +1922,7 @@\n> {\n> CHECK_FOR_INTERRUPTS();\n>\n> - rc = send(sock_fd, ident_query, strlen(ident_query), 0);\n> + rc = send(sock_fd, ident_query, ident_query_len, 0);\n\nHello Ranier,\n\nIn general, writing a string with snprintf and then calling strlen on\nthat same string is not guaranteed to give the same lengths. 
You can\neasily construct a case where they differ:\n\n char foo[3] = {0};\n int foolen;\n foolen = snprintf(foo, sizeof(foo), \"%s\", \"xxxxxxxx\");\n printf(\"strlen(foo) = %u, foolen = %u, foo = '%s'\\n\", strlen(foo),\nfoolen, foo);\n\nUsing standard snprintf (and not pg_snprintf), I get:\n\n strlen(foo) = 2, foolen = 8, foo = 'xx'\n\nPerhaps an analysis of the surrounding code would prove that in all\ncases this particular snprintf will return the same result that\nstrlen(ident_query) would return, but I don't care to do the analysis.\nI think the way it is coded is easier to read, and probably more robust\nagainst future changes, even if your proposed change happens to be safe\ntoday.\n\nAs for calling strlen(ident_query) just once, caching that result, and\nthen looping, I don't immediately see a problem, but I also don't expect\nthat loop to run more than one iteration except under unusual instances.\n Do you find that send() gets interrupted a lot? Is\nstrlen(ident_query) taking long enough to be significant compared to how\nlong send() takes?\n\nA bit more information about the performance problem you are\nencountering might make it easier to understand the motivation for this\npatch.\n\n--\nMark Dilger\n\n\n",
"msg_date": "Sat, 9 Nov 2019 08:21:15 +0000",
"msg_from": "Ranier VF <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: Patch avoid call strlen repeatedly in loop."
},
{
"msg_contents": "Hi Mark,\n\"In general, writing a string with snprintf and then calling strlen on\nthat same string is not guaranteed to give the same lengths. You can\neasily construct a case where they differ:\n\n char foo[3] = {0};\n int foolen;\n foolen = snprintf(foo, sizeof(foo), \"%s\", \"xxxxxxxx\");\n printf(\"strlen(foo) = %u, foolen = %u, foo = '%s'\\n\", strlen(foo),\nfoolen, foo);\n\nUsing standard snprintf (and not pg_snprintf), I get:\n\n strlen(foo) = 2, foolen = 8, foo = 'xx'\"\n\nWell, I've been using snprintf, no problem for several years now.\nBut what you reported, I would easily solve with an assert.\n\nassert(foolen == strlen(foo));\n\nTo make sure things would stay under control.\n\n\"I think the way it is coded is easier to read, and probably more robust\nagainst future changes, even if your proposed change happens to be safe\ntoday.\"\n\nI find it amazing that software I admire so much, such as PostgreSQL, makes extensive and heavy use of functions like strlen.\nSpeed makes a lot of difference, for some people it is above safety.\nMaybe that's why PostgreSQL loses some battles against MySQL.\nNot using strlen is for educational purposes as well. Allowing is to encourage use!\nSo stupid things such as:\n#define CheckComplicatedStuff (a, b) (strlen (a) > strlen (b))\nfor (;;) {\n if CheckComplicatedStuff (x, y) {\n break;\n }\n}\nThey start to contaminate all the code.\nUsing features like strlen, the programmer begins to create easy shortcuts, but in the end, they are very slow.\n\nMaybe that's why I have things in my code like:\nchar sql [4096];\nPQexec (cn, sql);\n\nWhile MySQL for example, would look like this:\nchar sql [4096];\nint sql_len;\nsql_len = snprintf (sql, sizeof (sql), \"INSERT ...\");\nmysql_real_query (cn, sql, sql_len);\n\n\"A bit more information about the performance problem you are\nencountering might make it easier to understand the motivation for this\npatch.\"\nMy motivation? 
Speed.\nWin from MySQL, always.\n\nAnyway I'm redoing the patch with your suggestion.\nWhat about other functions that make extensive use of strlen?\n\nThank you.\nRanier Vilela\n\n________________________________________\nDe: Mark Dilger <hornschnorter@gmail.com>\nEnviado: sábado, 9 de novembro de 2019 00:12\nPara: Ranier VF; pgsql-hackers@lists.postgresql.org\nAssunto: Re: Patch avoid call strlen repeatedly in loop.\n\n\n\nOn 11/8/19 9:41 AM, Ranier VF wrote:\n> --- \\dll\\postgresql-12.0\\a\\backend\\libpq\\auth.c Mon Sep 30 17:06:55 2019\n> +++ auth.c Fri Nov 08 14:27:17 2019\n> @@ -1815,6 +1815,7 @@\n> char ident_user[IDENT_USERNAME_MAX + 1];\n> pgsocket sock_fd = PGINVALID_SOCKET; /* for talking to Ident server */\n> int rc; /* Return code from a locally called function */\n> + int ident_query_len;\n> bool ident_return;\n> char remote_addr_s[NI_MAXHOST];\n> char remote_port[NI_MAXSERV];\n> @@ -1913,7 +1914,7 @@\n> }\n>\n> /* The query we send to the Ident server */\n> - snprintf(ident_query, sizeof(ident_query), \"%s,%s\\r\\n\",\n> + ident_query_len = snprintf(ident_query, sizeof(ident_query), \"%s,%s\\r\\n\",\n> remote_port, local_port);\n>\n> /* loop in case send is interrupted */\n> @@ -1921,7 +1922,7 @@\n> {\n> CHECK_FOR_INTERRUPTS();\n>\n> - rc = send(sock_fd, ident_query, strlen(ident_query), 0);\n> + rc = send(sock_fd, ident_query, ident_query_len, 0);\n\nHello Ranier,\n\nIn general, writing a string with snprintf and then calling strlen on\nthat same string is not guaranteed to give the same lengths. 
You can\neasily construct a case where they differ:\n\n char foo[3] = {0};\n int foolen;\n foolen = snprintf(foo, sizeof(foo), \"%s\", \"xxxxxxxx\");\n printf(\"strlen(foo) = %u, foolen = %u, foo = '%s'\\n\", strlen(foo),\nfoolen, foo);\n\nUsing standard snprintf (and not pg_snprintf), I get:\n\n strlen(foo) = 2, foolen = 8, foo = 'xx'\n\nPerhaps an analysis of the surrounding code would prove that in all\ncases this particular snprintf will return the same result that\nstrlen(ident_query) would return, but I don't care to do the analysis.\nI think the way it is coded is easier to read, and probably more robust\nagainst future changes, even if your proposed change happens to be safe\ntoday.\n\nAs for calling strlen(ident_query) just once, caching that result, and\nthen looping, I don't immediately see a problem, but I also don't expect\nthat loop to run more than one iteration except under unusual instances.\n Do you find that send() gets interrupted a lot? Is\nstrlen(ident_query) taking long enough to be significant compared to how\nlong send() takes?\n\nA bit more information about the performance problem you are\nencountering might make it easier to understand the motivation for this\npatch.\n\n--\nMark Dilger\n\n\n",
"msg_date": "Sat, 9 Nov 2019 08:24:13 +0000",
"msg_from": "Ranier VF <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: Patch avoid call strlen repeatedly in loop."
}
] |
[
{
"msg_contents": "Page on disk has empty lp 1\n* Insert into page lp 1\n\ncheckpoint START. Redo eventually starts here.\n** Delete all rows on page.\nautovac truncate\nDropRelFileNodeBuffers - dirty page NOT written. lp 1 on disk still empty\ncheckpoint completes\ncrash\nsmgrtruncate - Not reached\n\nheap_xlog_delete reads page with empty lp 1 and the delete fails.\n\nThe checkpoint can not have yet written * or ** before DropRelFileNodeBuffers invalidates either of those dirty page versions for this to repro.\n\nEven if we reach the truncate we don't fsync it till the next checkpoint. So on filesystems which delay metadata updates a crash can lose the truncate.\n\nOnce we do the fsync(), for the truncate, the REDO read will return BLK_NOTFOUND and the DELETE REDO attempt will be skipped.\nWIthout the fsync() or crashing before the truncate, the delete redo depends on the most recent version of the page having been written by the checkpoint.\n\nFound during stress test and verified with pg_usleep's to test hypothesis.\n\nIs DropRelFileNodeBuffers purely for performance or would there be any correctness problems if not done.\n\n\n\n\n\n\n\n\n Page on disk has empty lp 1\n \n\n * Insert into page lp 1\n \n\n\n\n\n checkpoint START. Redo eventually starts here.\n \n\n ** Delete all rows on page.\n \n\n autovac truncate\n \n\n DropRelFileNodeBuffers - dirty page NOT written. lp 1 on disk still empty\n \n\n checkpoint completes\n \n\n crash\n \n\n smgrtruncate - Not reached\n \n\n\n\n\n heap_xlog_delete reads page with empty lp 1 and the delete fails.\n \n\n\n\n\n The checkpoint can not have yet written * or ** before DropRelFileNodeBuffers invalidates either of those dirty page versions for this to repro.\n \n\n\n\n\n Even if we reach the truncate we don't fsync it till the next checkpoint. 
So on filesystems which delay metadata updates a crash can lose the truncate.\n \n\n\n\n\n Once we do the fsync(), for the truncate, the REDO read will return BLK_NOTFOUND and the DELETE REDO attempt will be skipped.\n \n\n WIthout the fsync() or crashing before the truncate, the delete redo depends on the most recent version of the page having been written by the checkpoint.\n \n\n\n\n\n\n Found during stress test and verified with pg_usleep's to test hypothesis.\n \n\n\n\n\n Is DropRelFileNodeBuffers purely for performance or would there be any correctness problems if not done.",
"msg_date": "Fri, 8 Nov 2019 12:46:51 -0800 (PST)",
"msg_from": "Daniel Wood <hexexpert@comcast.net>",
"msg_from_op": true,
"msg_subject": "'Invalid lp' during heap_xlog_delete"
},
{
"msg_contents": "On Fri, Nov 08, 2019 at 12:46:51PM -0800, Daniel Wood wrote:\n> Is DropRelFileNodeBuffers purely for performance or would there be\n> any correctness problems if not done.\n\nOn which version did you find that? Only HEAD or did you use a\nversion on a stable branch? There has been some work done in this\narea lately as of 6d05086.\n--\nMichael",
"msg_date": "Sat, 9 Nov 2019 10:39:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: 'Invalid lp' during heap_xlog_delete"
},
{
"msg_contents": "I repro'ed on PG11 and PG10 STABLE but several months old.\nI looked at 6d05086 but it doesn't address the core issue.\n\nDropRelFileNodeBuffers prevents the checkpoint from writing all needed dirty pages for any REDO's that exist BEFORE the truncate. If we crash after a checkpoint but before the physical truncate then the REDO will need to replay the operation against the dirty page that the Drop invalidated.\n\nTeja Mupparti, an engineer I work with, suggested moving DropRelFileNodeBuffers to the bottom of smgrtruncate() after the physical truncate. Doing that along with a fsync() after the truncate seems to plug the hole.\n\n\n> On November 8, 2019 at 5:39 PM Michael Paquier < michael@paquier.xyz mailto:michael@paquier.xyz > wrote:\n> \n> \n> On Fri, Nov 08, 2019 at 12:46:51PM -0800, Daniel Wood wrote:\n> \n> > > Is DropRelFileNodeBuffers purely for performance or would there be\n> > any correctness problems if not done.\n> > \n> > > On which version did you find that? Only HEAD or did you use a\n> version on a stable branch? There has been some work done in this\n> area lately as of 6d05086.\n> --\n> Michael\n> \n\n\n\n\n\n\n\n I repro'ed on PG11 and PG10 STABLE but several months old.\n \n\n I looked at 6d05086 but it doesn't address the core issue.\n \n\n\n\n\nDropRelFileNodeBuffers prevents the checkpoint from writing all needed dirty pages for any REDO's that exist BEFORE the truncate. If we crash after a checkpoint but before the physical truncate then the REDO will need to replay the operation against the dirty page that the Drop invalidated.\n\n\n\n\n\n\n Teja Mupparti, an engineer I work with, suggested moving DropRelFileNodeBuffers to the bottom of smgrtruncate() after the physical truncate. 
Doing that along with a fsync() after the truncate seems to plug the hole.\n \n\n\n\n\n\n\n On November 8, 2019 at 5:39 PM Michael Paquier <\n michael@paquier.xyz> wrote:\n \n\n\n\n\n\n\n\n On Fri, Nov 08, 2019 at 12:46:51PM -0800, Daniel Wood wrote:\n \n\n\n Is DropRelFileNodeBuffers purely for performance or would there be\n \n\n any correctness problems if not done.\n \n\n\n On which version did you find that? Only HEAD or did you use a\n \n\n version on a stable branch? There has been some work done in this\n \n\n area lately as of 6d05086.\n \n\n --\n \n\n Michael",
"msg_date": "Fri, 8 Nov 2019 18:44:08 -0800 (PST)",
"msg_from": "Daniel Wood <hexexpert@comcast.net>",
"msg_from_op": true,
"msg_subject": "Re: 'Invalid lp' during heap_xlog_delete"
},
{
"msg_contents": "On Fri, Nov 08, 2019 at 06:44:08PM -0800, Daniel Wood wrote:\n> I repro'ed on PG11 and PG10 STABLE but several months old.\n> I looked at 6d05086 but it doesn't address the core issue.\n> \n> DropRelFileNodeBuffers prevents the checkpoint from writing all\n> needed dirty pages for any REDO's that exist BEFORE the truncate.\n> If we crash after a checkpoint but before the physical truncate then\n> the REDO will need to replay the operation against the dirty page\n> that the Drop invalidated. \n\nI am beginning to look at this thread more seriously, and I'd like to\nfirst try to reproduce that by myself. Could you share the steps you\nused to do that? This includes any manual sleep calls you may have\nadded, the timing of the crash, manual checkpoints, debugger\nbreakpoints, etc. It may be possible to extract some more generic\ntest from that.\n--\nMichael",
"msg_date": "Mon, 11 Nov 2019 16:51:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: 'Invalid lp' during heap_xlog_delete"
},
{
"msg_contents": "It's been tedious to get it exactly right but I think I got it. FYI, I was delayed because today we had yet another customer hit this: 'redo max offset' error. The system crashed as a number of autovacuums and a checkpoint happened and then the REDO failure.\n\nTwo tiny code changes:\nbufmgr.c:bufferSync() pg_usleep(10000000); // At begin of function\n\nsmgr.c:smgrtruncate(): Add the following just after CacheInvalidateSmgr()\nif (forknum == MAIN_FORKNUM && nblocks == 0) {\npg_usleep(40000000);\n{ char *cp=NULL; *cp=13; }\n}\n\nNow for the heavily commented SQL repro. It will require that you execute a checkpoint in another session when instructed by the repro.sql script. You have 4 seconds to do that. The repro script explains exactly what must happen.\n\n-----------------------------------------------------------\ncreate table t (c char(1111));\nalter table t alter column c set storage plain;\n-- Make sure there actually is an allocated page 0 and it is empty.\n-- REDO Delete would ignore a non-existant page: XLogReadBufferForRedoExtended: return BLK_NOTFOUND;\n-- Hopefully two row deletes don't trigger autovacuum and truncate the empty page.\ninsert into t values ('1'), ('2');\ndelete from t;\ncheckpoint; -- Checkpoint the empty page to disk.\n-- This insert should be before the next checkpoint 'start'. I don't want to replay it.\n-- And, yes, there needs to be another checkpoint completed to skip its replay and start\n-- with the replay of the delete below.\ninsert into t values ('1'), ('2');\n-- Checkpoint needs to start in another session. 
However, I need to stall the checkpoint\n-- to prevent it from writing the dirty page to disk until I get to the vacuum below.\nselect 'Please start checkpoint in another session';\nselect pg_sleep(4);\n-- Below is the problematic delete.\n-- It succeeds now(online) because the dirty page has two rows on it.\n-- However, with respect to crash recovery there are 3 possible scenarios depending on timing.\n-- 1) The ongoing checkpoint might write the page with the two rows on it before\n-- the deletes. This leads to BLK_NEEDS_REDO for the deletes. It works\n-- because the page read from disk has the rows on it.\n-- 2) The ongoing checkpoint might write the page just after the deletes.\n-- In that case BLK_DONE will happen and there'll be no problem. LSN check.\n-- 3) The checkpoint can fail to write the dirty page because a vacuum can call\n-- smgrtruncate->DropRelFileNodeBuffers() which invalidates the dirty page.\n-- If smgrtruncate safely completes the physical truncation then BLK_NOTFOUND\n-- happens and we skip the redo of the delete. So the skipped dirty write is OK.\n-- The problme happens if we crash after the 2nd checkpoint completes\n-- but before the physical truncate 'mdtruncate()'.\ndelete from t;\n-- The vacuum must complete DropRelFileNodeBuffers.\n-- The vacuum must sleep for a few seconds to allow the checkpoint to complete\n-- such that recovery starts with the Delete REDO.\n-- We must crash before mdtruncate() does the physical truncate. 
If the physical\n-- truncate happens the BLK_NOTFOUND will be returned and the Delete REDO skipped.\nvacuum t;\n--------------------------------------------------------\n\n\n> On November 10, 2019 at 11:51 PM Michael Paquier < michael@paquier.xyz mailto:michael@paquier.xyz > wrote:\n> \n> \n> On Fri, Nov 08, 2019 at 06:44:08PM -0800, Daniel Wood wrote:\n> \n> > > I repro'ed on PG11 and PG10 STABLE but several months old.\n> > I looked at 6d05086 but it doesn't address the core issue.\n> > \n> > DropRelFileNodeBuffers prevents the checkpoint from writing all\n> > needed dirty pages for any REDO's that exist BEFORE the truncate.\n> > If we crash after a checkpoint but before the physical truncate then\n> > the REDO will need to replay the operation against the dirty page\n> > that the Drop invalidated.\n> > \n> > > I am beginning to look at this thread more seriously, and I'd like to\n> first try to reproduce that by myself. Could you share the steps you\n> used to do that? This includes any manual sleep calls you may have\n> added, the timing of the crash, manual checkpoints, debugger\n> breakpoints, etc. It may be possible to extract some more generic\n> test from that.\n> --\n> Michael\n> \n\n\n\n\n\n\n\n It's been tedious to get it exactly right but I think I got it. FYI, I was delayed because today we had yet another customer hit this: 'redo max offset' error. The system crashed as a number of autovacuums and a checkpoint happened and then the REDO failure.\n \n\n\n\n\n Two tiny code changes:\n \n\n bufmgr.c:bufferSync() pg_usleep(10000000); // At begin of function\n \n\n\n\n\n smgr.c:smgrtruncate(): Add the following just after CacheInvalidateSmgr()\n \n\n if (forknum == MAIN_FORKNUM && nblocks == 0) {\n pg_usleep(40000000);\n { char *cp=NULL; *cp=13; }\n }\n \n\n\n\n\n\n Now for the heavily commented SQL repro. It will require that you execute a checkpoint in another session when instructed by the repro.sql script. You have 4 seconds to do that. 
The repro script explains exactly what must happen.\n \n\n\n\n\n -----------------------------------------------------------\n \n\n\n create table t (c char(1111));\n alter table t alter column c set storage plain;\n \n\n -- Make sure there actually is an allocated page 0 and it is empty.\n -- REDO Delete would ignore a non-existant page: XLogReadBufferForRedoExtended: return BLK_NOTFOUND;\n -- Hopefully two row deletes don't trigger autovacuum and truncate the empty page.\n insert into t values ('1'), ('2');\n delete from t;\n \n\n checkpoint; -- Checkpoint the empty page to disk.\n \n\n -- This insert should be before the next checkpoint 'start'. I don't want to replay it.\n -- And, yes, there needs to be another checkpoint completed to skip its replay and start\n -- with the replay of the delete below.\n insert into t values ('1'), ('2');\n \n\n -- Checkpoint needs to start in another session. However, I need to stall the checkpoint\n -- to prevent it from writing the dirty page to disk until I get to the vacuum below.\n select 'Please start checkpoint in another session';\n select pg_sleep(4);\n \n\n -- Below is the problematic delete.\n -- It succeeds now(online) because the dirty page has two rows on it.\n -- However, with respect to crash recovery there are 3 possible scenarios depending on timing.\n -- 1) The ongoing checkpoint might write the page with the two rows on it before\n -- the deletes. This leads to BLK_NEEDS_REDO for the deletes. It works\n -- because the page read from disk has the rows on it.\n -- 2) The ongoing checkpoint might write the page just after the deletes.\n -- In that case BLK_DONE will happen and there'll be no problem. LSN check.\n -- 3) The checkpoint can fail to write the dirty page because a vacuum can call\n -- smgrtruncate->DropRelFileNodeBuffers() which invalidates the dirty page.\n -- If smgrtruncate safely completes the physical truncation then BLK_NOTFOUND\n -- happens and we skip the redo of the delete. 
So the skipped dirty write is OK.\n -- The problme happens if we crash after the 2nd checkpoint completes\n -- but before the physical truncate 'mdtruncate()'.\n delete from t;\n \n\n -- The vacuum must complete DropRelFileNodeBuffers.\n -- The vacuum must sleep for a few seconds to allow the checkpoint to complete\n -- such that recovery starts with the Delete REDO.\n -- We must crash before mdtruncate() does the physical truncate. If the physical\n -- truncate happens the BLK_NOTFOUND will be returned and the Delete REDO skipped.\n \n\n\n vacuum t;\n \n\n --------------------------------------------------------\n \n\n\n\n\n\n\n On November 10, 2019 at 11:51 PM Michael Paquier <\n michael@paquier.xyz> wrote:\n \n\n\n\n\n\n\n\n On Fri, Nov 08, 2019 at 06:44:08PM -0800, Daniel Wood wrote:\n \n\n\n I repro'ed on PG11 and PG10 STABLE but several months old.\n \n\n I looked at 6d05086 but it doesn't address the core issue.\n \n\n\n\n\n DropRelFileNodeBuffers prevents the checkpoint from writing all\n \n\n needed dirty pages for any REDO's that exist BEFORE the truncate.\n \n\n If we crash after a checkpoint but before the physical truncate then\n \n\n the REDO will need to replay the operation against the dirty page\n \n\n that the Drop invalidated.\n \n\n\n I am beginning to look at this thread more seriously, and I'd like to\n \n\n first try to reproduce that by myself. Could you share the steps you\n \n\n used to do that? This includes any manual sleep calls you may have\n \n\n added, the timing of the crash, manual checkpoints, debugger\n \n\n breakpoints, etc. It may be possible to extract some more generic\n \n\n test from that.\n \n\n --\n \n\n Michael",
"msg_date": "Tue, 12 Nov 2019 18:23:33 -0800 (PST)",
"msg_from": "Daniel Wood <hexexpert@comcast.net>",
"msg_from_op": true,
"msg_subject": "Re: 'Invalid lp' during heap_xlog_delete"
},
{
"msg_contents": "Sorry I missed one thing. Turn off full page writes. I'm running in an env. with atomic 8K writes.\n\n> On November 12, 2019 at 6:23 PM Daniel Wood <hexexpert@comcast.net> wrote:\n> \n> It's been tedious to get it exactly right but I think I got it. FYI, I was delayed because today we had yet another customer hit this: 'redo max offset' error. The system crashed as a number of autovacuums and a checkpoint happened and then the REDO failure\n> \n\n\n\n\n\n\n\n Sorry I missed one thing. Turn off full page writes. I'm running in an env. with atomic 8K writes.\n \n\n On November 12, 2019 at 6:23 PM Daniel Wood <hexexpert@comcast.net> wrote: \n \n\n\n It's been tedious to get it exactly right but I think I got it. FYI, I was delayed because today we had yet another customer hit this: 'redo max offset' error. The system crashed as a number of autovacuums and a checkpoint happened and then the REDO failure",
"msg_date": "Thu, 14 Nov 2019 19:38:19 -0800 (PST)",
"msg_from": "Daniel Wood <hexexpert@comcast.net>",
"msg_from_op": true,
"msg_subject": "Re: 'Invalid lp' during heap_xlog_delete"
},
{
"msg_contents": "On Thu, Nov 14, 2019 at 07:38:19PM -0800, Daniel Wood wrote:\n> Sorry I missed one thing. Turn off full page writes.\n\nHmm. Linux FSes use typically 4kB pages. I'll try to look into that\nwith lower page sizes for relation and WAL pages.\n\n> I'm running in an env. with atomic 8K writes.\n\nWhat's that?\n--\nMichael",
"msg_date": "Mon, 18 Nov 2019 21:58:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: 'Invalid lp' during heap_xlog_delete"
},
{
"msg_contents": "> I'll try to look into that with lower page sizes for relation and WAL pages.\n\nThe page size is totally unrelated to this bug. When you repro the redo failure it is because the log record is being applied to an old page version. The correct newer page version never got written because of the truncate page invalidation. The cause is not a torn write.\n\n> What's that?\n\nAzure PostgreSQL on Windows has proprietary mechanisms in its filesystem to guarantee that a write is atomic.\n\n- Dan\n\n> On November 18, 2019 at 4:58 AM Michael Paquier < michael@paquier.xyz mailto:michael@paquier.xyz > wrote:\n> \n> \n> On Thu, Nov 14, 2019 at 07:38:19PM -0800, Daniel Wood wrote:\n> \n> > > Sorry I missed one thing. Turn off full page writes.\n> > \n> > > Hmm. Linux FSes use typically 4kB pages. I'll try to look into that\n> with lower page sizes for relation and WAL pages.\n> \n> \n> > > I'm running in an env. with atomic 8K writes.\n> > \n> > > What's that?\n> --\n> Michael\n> \n\n\n\n\n\n\n\n > \n I'll try to look into that \nwith lower page sizes for relation and WAL pages.\n\n\n\n\n\nThe page size is totally unrelated to this bug. When you repro the redo failure it is because the log record is being applied to an old page version. The correct newer page version never got written because of the truncate page invalidation. The cause is not a torn write.\n\n\n\n\n\n> What's that?\n\n\n\n\n\nAzure PostgreSQL on Windows has proprietary mechanisms in its filesystem to guarantee that a write is atomic.\n\n\n\n\n\n- Dan\n\n\n\n On November 18, 2019 at 4:58 AM Michael Paquier <\n michael@paquier.xyz> wrote:\n \n\n\n\n\n\n\n\n On Thu, Nov 14, 2019 at 07:38:19PM -0800, Daniel Wood wrote:\n \n\n\n Sorry I missed one thing. Turn off full page writes.\n \n\n\n Hmm. Linux FSes use typically 4kB pages. I'll try to look into that\n \n\n with lower page sizes for relation and WAL pages.\n \n\n\n\n\n\n I'm running in an env. 
with atomic 8K writes.\n \n\n\n What's that?\n \n\n --\n \n\n Michael",
"msg_date": "Mon, 18 Nov 2019 10:49:36 -0800 (PST)",
"msg_from": "Daniel Wood <hexexpert@comcast.net>",
"msg_from_op": true,
"msg_subject": "Re: 'Invalid lp' during heap_xlog_delete"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-08 12:46:51 -0800, Daniel Wood wrote:\n> Page on disk has empty lp 1\n> * Insert into page lp 1\n> \n> checkpoint START. Redo eventually starts here.\n> ** Delete all rows on page.\n\nIt's worthwhile to note that this part cannot happen without full page\nwrites disabled. By dint of a checkpoint having stared previously, this\nwill otherwise always include an FPW (or be marked as WILL_INIT by a\nprevious record, which functionally is equivalent).\n\n\n> autovac truncate\n> DropRelFileNodeBuffers - dirty page NOT written. lp 1 on disk still empty\n> checkpoint completes\n\nIf I understand correctly, the DropRelFileNodeBuffers() needs to happen\nbefore the BufferSync() reaches the buffer containing the page with the\ndeletion, but before the relevant file(s) are truncated. And obviously\nthe deletion needs to have finished modifying the page in question. Not\na conflict with what you wrote, just confirming.\n\n\n> crash\n> smgrtruncate - Not reached\n\nThis seems like a somewhat confusing description to me, because\nsmgrtruncate() is what calls DropRelFileNodeBuffers(). I assume what you\nmean by \"smgrtruncate\" is not the function, but the smgr_truncate()\ncallback?\n\n\n\n> Even if we reach the truncate we don't fsync it till the next\n> checkpoint. So on filesystems which delay metadata updates a crash\n> can lose the truncate.\n\nI think that's probably fine though. Leaving the issue of checkpoint\ncompleting inbetween the DropRelFileNodeBuffers() and the actual\ntruncation aside, we'd have the WAL logged truncation truncating the\nfile. I don't think it's unreasonable to except a filesystem that claims\nto support running without full_page_writes (I've seen several such\nclaims turning out not to be true under load), to preserve either the\noriginal page contents or the new file size after a a crash. 
If your\nfilesystem doesn't, you really ought not to use it with FPW = off.\n\n\nI do wonder, halfway related, if there's an argument that\nXLogReadBufferForRedoExtended() ought to return something other than\nBLK_NEEDS_REDO for pages read during recovery that are all-zeroes, at\nleast for some RBM_* modes.\n\n\n> Once we do the fsync(), for the truncate, the REDO read will return\n> BLK_NOTFOUND and the DELETE REDO attempt will be skipped. Without the\n> fsync() or crashing before the truncate, the delete redo depends on\n> the most recent version of the page having been written by the\n> checkpoint.\n\nWe also need to correctly replay this on a standby, so I don't think\njust adding an smgrimmedsync() is the answer. We'd not replay the\ntruncation on standbys / during PITR, unless I miss something. So we'd just\nend up with the same problem in slightly different situations.\n\n\n> Is DropRelFileNodeBuffers purely for performance or would there be any\n> correctness problems if not done.\n\nThere would be correctness problems if we left that out - the on-disk\nstate and the in-memory state would diverge.\n\n\nTo me it sounds the fix here would be to rejigger the RelationTruncate()\nsequence for truncation of the main fork as follows:\n\n1) MyPgXact->delayChkpt = true\n2) XLogInsert(XLOG_SMGR_TRUNCATE)\n3) smgrtruncate() (which, as now, first does a DropRelFileNodeBuffers(),\n   and then calls the smgr_truncate callback)\n4) MyPgXact->delayChkpt = false\n\nI'm not worried about the increased delayChkpt = true time. Compared\nwith the frequency of RecordTransactionCommit() this seems harmless.\n\n\nI'm inclined to think that we should make the XLogFlush() in the\nRelationNeedsWAL() branch of RelationTruncate()\nunconditional. Performing the truncation on the filesystem level without\nactually having persisted the corresponding WAL record is dangerous.\n\n\nI think we need to backpatch a fix for this (even if one were to\nconsider fpw = off unsupported). 
I think there's enough other nasty edge\ncases here. While fpw=on fixes the deletion case at hand, I think you\ncould very well end up in a nasty situation in other cases where either\nredo location or the actual checkpoint record would fall between the WAL\nrecord and the actual truncation. Imagine e.g. a base backup starting in\nsuch a situation - you'd potentially end up with a relation that\ncontains old data, without later replaying the truncation record. The\nwindow isn't huge, but also not negligible.\n\n\nI'll start a separate thread about whether we need to do a good bit of\nthe work in smgrtruncate() / smgrdounlink() / ... in critical sections.\n\n- Andres\n\n\n",
"msg_date": "Fri, 6 Dec 2019 15:06:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: 'Invalid lp' during heap_xlog_delete"
},
{
"msg_contents": "> On December 6, 2019 at 3:06 PM Andres Freund <andres@anarazel.de> wrote:\n...\n> > crash\n> > smgrtruncate - Not reached\n> \n> This seems like a somewhat confusing description to me, because\n> smgrtruncate() is what calls DropRelFileNodeBuffers(). I assume what you\n> mean by \"smgrtruncate\" is not the function, but the smgr_truncate()\n> callback?\n\nMy mistake. Yes, smgr_truncate()\n\n\n> > Even if we reach the truncate we don't fsync it till the next\n> > checkpoint. So on filesystems which delay metadata updates a crash\n> > can lose the truncate.\n> \n> I think that's probably fine though. Leaving the issue of checkpoint\n> completing inbetween the DropRelFileNodeBuffers() and the actual\n> truncation aside, we'd have the WAL logged truncation truncating the\n> file. I don't think it's unreasonable to except a filesystem that claims\n> to support running without full_page_writes (I've seen several such\n> claims turning out not to be true under load), to preserve either the\n> original page contents or the new file size after a a crash. If your\n> filesystem doesn't, you really ought not to use it with FPW = off.\n\nIf the phsyical truncate doesn't occur in the seconds after the cache invalidation\nnor the fsync within the minutes till the next checkpoint you are NOT left\nwith a torn page on disk. You are left with the 'incorrect' page on disk.\nIn other words, an older page because the invalidation prevent the write\nof the most recent dirty page. Redos don't like old incorrect pages.\nBut, yes, fullpage writes covers up this anomaly(To be generous).\n\n> > Once we do the fsync(), for the truncate, the REDO read will return\n> > BLK_NOTFOUND and the DELETE REDO attempt will be skipped. 
Without the\n> > fsync() or crashing before the truncate, the delete redo depends on\n> > the most recent version of the page having been written by the\n> > checkpoint.\n> \n> We also need to correctly replay this on a standby, so I don't think\n> just adding an smgrimmedsync() is the answer. We'd not replay the\n> truncation on standbys / during PITR, unless I miss something. So we'd just\n> end up with the same problem in slightly different situations.\n\nI haven't mentioned to you that we have seen what appears to be the same\nproblem during PITR's depending on which base backup we start with. I didn't\nmention it because I haven't created a repro to prove it. I simply suspect it.\n\n> To me it sounds the fix here would be to rejigger the RelationTruncate()\n> sequence for truncation of the main fork as follows:\n> \n> 1) MyPgXact->delayChkpt = true\n> 2) XLogInsert(XLOG_SMGR_TRUNCATE)\n> 3) smgrtruncate() (which, as now, first does a DropRelFileNodeBuffers(),\n> and then calls the smgr_truncate callback)\n> 4) MyPgXact->delayChkpt = false\n> \n> I'm not worried about the increased delayChkpt = true time. Compared\n> with the frequency of RecordTransactionCommit() this seems harmless.\n\nSeems reasonable.\n\n> I'm inclined to think that we should make the XLogFlush() in the\n> RelationNeedsWAL() branch of RelationTruncate()\n> unconditional. Performing the truncation on the filesystem level without\n> actually having persisted the corresponding WAL record is dangerous.\n\nYes, I also was curious about why it was conditional.\n\n- Dan Wood\n\n\n",
"msg_date": "Tue, 10 Dec 2019 18:24:46 -0800 (PST)",
"msg_from": "Daniel Wood <hexexpert@comcast.net>",
"msg_from_op": true,
"msg_subject": "Re: 'Invalid lp' during heap_xlog_delete"
}
] |
[
{
"msg_contents": "See\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=1add2e09b9a4c2d2c72ce51991fa4efaf577a29f\n\nPlease send any corrections by Sunday.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 08 Nov 2019 20:21:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "First-draft back-branch release notes are up for review"
},
{
"msg_contents": "On Sat, Nov 9, 2019 at 6:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> See\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=1add2e09b9a4c2d2c72ce51991fa4efaf577a29f\n>\n> Please send any corrections by Sunday.\n>\n\nI have read it once and didn't find any obvious errors.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 10 Nov 2019 12:07:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First-draft back-branch release notes are up for review"
}
] |
[
{
"msg_contents": "Hi Mark,\nAnother example, can you take a look?\n\n--- \\dll\\postgresql-12.0\\a\\backend\\tsearch\\spell.c\tMon Sep 30 17:06:55 2019\n+++ spell.c\tSat Nov 09 05:55:23 2019\n@@ -186,7 +186,7 @@\n #define MAX_NORM 1024\n #define MAXNORMLEN 256\n \n-#define STRNCMP(s,p)\tstrncmp( (s), (p), strlen(p) )\n+#define STRNCMP(s,p)\tstrncmp( (s), (p), sizeof(p) - 1 )\n #define GETWCHAR(W,L,N,T) ( ((const uint8*)(W))[ ((T)==FF_PREFIX) ? (N) : ( (L) - 1 - (N) ) ] )\n #define GETCHAR(A,N,T)\t GETWCHAR( (A)->repl, (A)->replen, N, T )\n \n@@ -1220,31 +1220,31 @@\n \t\t}\n \n \t\tif (STRNCMP(recoded, \"COMPOUNDFLAG\") == 0)\n-\t\t\taddCompoundAffixFlagValue(Conf, recoded + strlen(\"COMPOUNDFLAG\"),\n+\t\t\taddCompoundAffixFlagValue(Conf, recoded + sizeof(\"COMPOUNDFLAG\") - 1,\n \t\t\t\t\t\t\t\t\t FF_COMPOUNDFLAG);\n \t\telse if (STRNCMP(recoded, \"COMPOUNDBEGIN\") == 0)\n-\t\t\taddCompoundAffixFlagValue(Conf, recoded + strlen(\"COMPOUNDBEGIN\"),\n+\t\t\taddCompoundAffixFlagValue(Conf, recoded + sizeof(\"COMPOUNDBEGIN\") - 1,\n \t\t\t\t\t\t\t\t\t FF_COMPOUNDBEGIN);\n \t\telse if (STRNCMP(recoded, \"COMPOUNDLAST\") == 0)\n-\t\t\taddCompoundAffixFlagValue(Conf, recoded + strlen(\"COMPOUNDLAST\"),\n+\t\t\taddCompoundAffixFlagValue(Conf, recoded + sizeof(\"COMPOUNDLAST\") - 1,\n \t\t\t\t\t\t\t\t\t FF_COMPOUNDLAST);\n \t\t/* COMPOUNDLAST and COMPOUNDEND are synonyms */\n \t\telse if (STRNCMP(recoded, \"COMPOUNDEND\") == 0)\n-\t\t\taddCompoundAffixFlagValue(Conf, recoded + strlen(\"COMPOUNDEND\"),\n+\t\t\taddCompoundAffixFlagValue(Conf, recoded + sizeof(\"COMPOUNDEND\") - 1,\n \t\t\t\t\t\t\t\t\t FF_COMPOUNDLAST);\n \t\telse if (STRNCMP(recoded, \"COMPOUNDMIDDLE\") == 0)\n-\t\t\taddCompoundAffixFlagValue(Conf, recoded + strlen(\"COMPOUNDMIDDLE\"),\n+\t\t\taddCompoundAffixFlagValue(Conf, recoded + sizeof(\"COMPOUNDMIDDLE\") - 1,\n \t\t\t\t\t\t\t\t\t FF_COMPOUNDMIDDLE);\n \t\telse if (STRNCMP(recoded, \"ONLYINCOMPOUND\") == 0)\n-\t\t\taddCompoundAffixFlagValue(Conf, 
recoded + strlen(\"ONLYINCOMPOUND\"),\n+\t\t\taddCompoundAffixFlagValue(Conf, recoded + sizeof(\"ONLYINCOMPOUND\") - 1,\n \t\t\t\t\t\t\t\t\t FF_COMPOUNDONLY);\n \t\telse if (STRNCMP(recoded, \"COMPOUNDPERMITFLAG\") == 0)\n \t\t\taddCompoundAffixFlagValue(Conf,\n-\t\t\t\t\t\t\t\t\t recoded + strlen(\"COMPOUNDPERMITFLAG\"),\n+\t\t\t\t\t\t\t\t\t recoded + sizeof(\"COMPOUNDPERMITFLAG\") - 1,\n \t\t\t\t\t\t\t\t\t FF_COMPOUNDPERMITFLAG);\n \t\telse if (STRNCMP(recoded, \"COMPOUNDFORBIDFLAG\") == 0)\n \t\t\taddCompoundAffixFlagValue(Conf,\n-\t\t\t\t\t\t\t\t\t recoded + strlen(\"COMPOUNDFORBIDFLAG\"),\n+\t\t\t\t\t\t\t\t\t recoded + sizeof(\"COMPOUNDFORBIDFLAG\") - 1,\n \t\t\t\t\t\t\t\t\t FF_COMPOUNDFORBIDFLAG);\n \t\telse if (STRNCMP(recoded, \"FLAG\") == 0)\n \t\t{",
"msg_date": "Sat, 9 Nov 2019 09:02:29 +0000",
"msg_from": "Ranier VF <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] spell.c: avoid calling strlen repeatedly in loop"
}
] |
[
{
"msg_contents": "Commits a7145f6bc et al. added a test to verify integer overflow\ndetection in interval_mul. The buildfarm has now reminded me that\nyou're not going to get integer overflow if timestamps ain't integers,\ncf\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2019-11-08%2019%3A42%3A32\n\nI think the most expedient answer is just to remove that test case\nin the pre-v10 branches. It's already served its purpose by showing\nthat the rest of the buildfarm is OK. I'd work harder on this if\n--disable-integer-timestamps were still a live option, but it's\nhard to justify any complicated solution.\n\n\t\t\tregards, tom lane\n\n[ wanders away wondering if we should have more than one critter testing\n--disable-integer-timestamps ]\n\n\n",
"msg_date": "Sat, 09 Nov 2019 12:06:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "int64-timestamp-dependent test vs. --disable-integer-timestamps"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-09 12:06:33 -0500, Tom Lane wrote:\n> Commits a7145f6bc et al. added a test to verify integer overflow\n> detection in interval_mul. The buildfarm has now reminded me that\n> you're not going to get integer overflow if timestamps ain't integers,\n> cf\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2019-11-08%2019%3A42%3A32\n> \n> I think the most expedient answer is just to remove that test case\n> in the pre-v10 branches. It's already served its purpose by showing\n> that the rest of the buildfarm is OK. I'd work harder on this if\n> --disable-integer-timestamps were still a live option, but it's\n> hard to justify any complicated solution.\n\nMakes sense to me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 9 Nov 2019 13:57:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: int64-timestamp-dependent test vs. --disable-integer-timestamps"
}
] |
[
{
"msg_contents": "Hi,\n\nfour years ago Marko Tiikkaja send a patch for numeric_trim functions. This\nfunctions removed ending zeroes from numeric value. This is useful feature,\nbut there was not any progress on this patch. I think so this feature can\nbe interesting, so I would to revitalize this patch.\n\nOriginal discussion\nhttps://www.postgresql-archive.org/Add-numeric-trim-numeric-td5874444.html\n\nBased on this discussion I would to implement three functions - prototype\nimplementation is in plpsql and sql - final implementation will be in C.\n\n-- returns minimal scale when the rounding the value to this scale doesn't\n-- lost any informations.\nCREATE OR REPLACE FUNCTION pg_catalog.minscale(numeric)\n RETURNS integer\n LANGUAGE plpgsql\nAS $function$\nbegin\n for i in 0..256\n loop\n if round($1, i) = $1 then\n return i;\n end if;\n end loop;\nend;\n$function$\n\n-- trailing zeroes from end\n-- trimming only zero for numeric type has sense\nCREATE OR REPLACE FUNCTION pg_catalog.rtrim(numeric)\nRETURNS numeric AS $$\n SELECT round($1, pg_catalog.minscale($1))\n$$ LANGUAGE sql;\n\n-- this is due support trim function\nCREATE OR REPLACE FUNCTION pg_catalog.btrim(numeric)\nRETURNS numeric AS $$\n SELECT pg_catalog.rtrim($1)\n$$ LANGUAGE sql;\n\npostgres=# select trim(10.22000);\n┌───────┐\n│ btrim │\n╞═══════╡\n│ 10.22 │\n└───────┘\n(1 row)\n\npostgres=# select rtrim(10.34900);\n┌────────┐\n│ rtrim │\n╞════════╡\n│ 10.349 │\n└────────┘\n(1 row)\n\nWhat do you think about it?\n\nRegards\n\nPavel",
"msg_date": "Sat, 9 Nov 2019 20:48:11 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> four years ago Marko Tiikkaja send a patch for numeric_trim functions. This\n> functions removed ending zeroes from numeric value. This is useful feature,\n> but there was not any progress on this patch. I think so this feature can\n> be interesting, so I would to revitalize this patch.\n\n> Original discussion\n> https://www.postgresql-archive.org/Add-numeric-trim-numeric-td5874444.html\n\nA more useful link is\nhttps://www.postgresql.org/message-id/flat/564D3ADB.7000808%40joh.to\nand the earlier discussion is at\nhttps://www.postgresql.org/message-id/flat/5643125E.1030605%40joh.to\n\nRe-reading that thread, I don't really think there's much support for\nanything beyond the minscale() function. The rest are just inviting\nconfusion with string-related functions. And I really don't like\nestablishing a precedent that btrim() and rtrim() are the same.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 09 Nov 2019 15:34:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "so 9. 11. 2019 v 21:34 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > four years ago Marko Tiikkaja send a patch for numeric_trim functions.\n> This\n> > functions removed ending zeroes from numeric value. This is useful\n> feature,\n> > but there was not any progress on this patch. I think so this feature can\n> > be interesting, so I would to revitalize this patch.\n>\n> > Original discussion\n> >\n> https://www.postgresql-archive.org/Add-numeric-trim-numeric-td5874444.html\n>\n> A more useful link is\n> https://www.postgresql.org/message-id/flat/564D3ADB.7000808%40joh.to\n> and the earlier discussion is at\n> https://www.postgresql.org/message-id/flat/5643125E.1030605%40joh.to\n>\n> Re-reading that thread, I don't really think there's much support for\n> anything beyond the minscale() function. The rest are just inviting\n> confusion with string-related functions. And I really don't like\n> establishing a precedent that btrim() and rtrim() are the same.\n>\n\nI have to agree, so using trim, rtrim names is not best. On second hand,\nprobably to most often usage of minscale function will be inside expression\nround(x, minscale(x)), so this functionality can be in core. A question is\na name.\n\nmaybe to_minscale(numeric) ?\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>",
"msg_date": "Sun, 10 Nov 2019 07:35:40 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "ne 10. 11. 2019 v 7:35 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> so 9. 11. 2019 v 21:34 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> > four years ago Marko Tiikkaja send a patch for numeric_trim functions.\n>> This\n>> > functions removed ending zeroes from numeric value. This is useful\n>> feature,\n>> > but there was not any progress on this patch. I think so this feature\n>> can\n>> > be interesting, so I would to revitalize this patch.\n>>\n>> > Original discussion\n>> >\n>> https://www.postgresql-archive.org/Add-numeric-trim-numeric-td5874444.html\n>>\n>> A more useful link is\n>> https://www.postgresql.org/message-id/flat/564D3ADB.7000808%40joh.to\n>> and the earlier discussion is at\n>> https://www.postgresql.org/message-id/flat/5643125E.1030605%40joh.to\n>>\n>> Re-reading that thread, I don't really think there's much support for\n>> anything beyond the minscale() function. The rest are just inviting\n>> confusion with string-related functions. And I really don't like\n>> establishing a precedent that btrim() and rtrim() are the same.\n>>\n>\n> I have to agree, so using trim, rtrim names is not best. On second hand,\n> probably to most often usage of minscale function will be inside expression\n> round(x, minscale(x)), so this functionality can be in core. A question is\n> a name.\n>\n> maybe to_minscale(numeric) ?\n>\n\nHere is a patch. It's based on Dean's suggestions.\n\nI implemented two functions - first minscale, second trim_scale. The\noverhead of second is minimal - so I think it can be good to have it. I\nstarted design with the name \"trim_scale\", but the name can be any other.\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>> regards, tom lane\n>>\n>",
"msg_date": "Mon, 11 Nov 2019 15:47:37 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "Hello Pavel,\n\nOn Mon, 11 Nov 2019 15:47:37 +0100\nPavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> Here is a patch. It's based on Dean's suggestions.\n> \n> I implemented two functions - first minscale, second trim_scale. The\n> overhead of second is minimal - so I think it can be good to have it.\n> I started design with the name \"trim_scale\", but the name can be any\n> other.\n\nHere are my thoughts on your patch.\n\nMy one substantial criticism is that I believe that\ntrim_scale('NaN'::numeric) should return NaN.\nSo the test output should look like:\n\ntemplate1=# select trim_scale('nan'::numeric) = 'nan'::numeric;\n ?column? \n----------\n t \n(1 row)\n\n\nFWIW, I bumped around the Internet and looked at Oracle docs to see if\nthere's any reason why minscale() might not be a good function name.\nI couldn't find any problems.\n\nI also couldn't think of a better name than trim_scale() and don't\nhave any problems with the name.\n\nMy other suggestions mostly have to do with documentation. Your code\nlooks pretty good to me, looks like the existing code, you name\nvariables and function names as in existing code, etc.\n\nI comment on various hunks in line below:\n\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex 28eb322f3f..6f142cd679 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -918,6 +918,19 @@\n <entry><literal>6.0000000000</literal></entry>\n </row>\n \n+ <row>\n+ <entry>\n+ <indexterm>\n+ <primary>minscale</primary>\n+ </indexterm>\n+\n<literal><function>minscale(<type>numeric</type>)</function></literal>\n+ </entry>\n+ <entry><type>integer</type></entry>\n+ <entry>returns minimal scale of the argument (the number of\ndecimal digits in the fractional part)</entry>\n+ <entry><literal>scale(8.4100)</literal></entry>\n+ <entry><literal>2</literal></entry>\n+ </row>\n+\n <row>\n <entry>\n <indexterm>\n\n*****\nYour description does not say what the minimal scale is. 
How about:\n\nminimal scale (number of decimal digits in the fractional part) needed\nto store the supplied value without data loss\n*****\n\n@@ -1041,6 +1054,19 @@\n <entry><literal>1.4142135623731</literal></entry>\n </row>\n \n+ <row>\n+ <entry>\n+ <indexterm>\n+ <primary>trim_scale</primary>\n+ </indexterm>\n+\n<literal><function>trim_scale(<type>numeric</type>)</function></literal>\n+ </entry>\n+ <entry><type>numeric</type></entry>\n+ <entry>reduce scale of the argument (the number of decimal\ndigits in the fractional part)</entry>\n+ <entry><literal>scale(8.4100)</literal></entry>\n+ <entry><literal>8.41</literal></entry>\n+ </row>\n+\n <row>\n <entry>\n <indexterm>\n\n****\nHow about:\n\nreduce the scale (the number of decimal digits in the fractional part)\nto the minimum needed to store the supplied value without data loss\n*****\n\ndiff --git a/src/backend/utils/adt/numeric.c\nb/src/backend/utils/adt/numeric.c index a00db3ce7a..35234aee4c 100644\n--- a/src/backend/utils/adt/numeric.c\n+++ b/src/backend/utils/adt/numeric.c\n\n****\nI believe the hunks in this file should start at about line# 3181.\nThis is right after numeric_scale(). Seems like all the scale\nrelated functions should be together.\n\nThere's no hard standard but I don't see why lines (comment lines in\nyour case) should be longer than 78 characters without good reason.\nPlease reformat.\n****\n\n@@ -5620,6 +5620,88 @@ int2int4_sum(PG_FUNCTION_ARGS)\n \tPG_RETURN_DATUM(Int64GetDatumFast(transdata->sum));\n }\n \n+/*\n+ * Calculate minimal display scale. 
The var should be stripped already.\n\n****\nI think you can get rid of the word \"display\" in the comment.\n****\n\n+ */\n+static int\n+get_min_scale(NumericVar *var)\n+{\n+\tint\t\tminscale = 0;\n+\n+\tif (var->ndigits > 0)\n+\t{\n+\t\tNumericDigit last_digit;\n+\n+\t\t/* maximal size of minscale, can be lower */\n+\t\tminscale = (var->ndigits - var->weight - 1) *\n DEC_DIGITS; +\n+\t\t/*\n+\t\t * When there are not digits after decimal point, the\n previous expression\n\n****\ns/not/no/\n****\n\n+\t\t * can be negative. In this case, the minscale must be\n zero.\n+\t\t */\n\n****\ns/can be/is/\n****\n\n+\t\tif (minscale > 0)\n+\t\t{\n+\t\t\t/* reduce minscale if trailing digits in last\n numeric digits are zero */\n+\t\t\tlast_digit = var->digits[var->ndigits - 1];\n+\n+\t\t\twhile (last_digit % 10 == 0)\n+\t\t\t{\n+\t\t\t\tminscale--;\n+\t\t\t\tlast_digit /= 10;\n+\t\t\t}\n+\t\t}\n+\t\telse\n+\t\t\tminscale = 0;\n+\t}\n+\n+\treturn minscale;\n+}\n+\n+/*\n+ * Returns minimal scale of numeric value when value is not changed\n\n****\nImprove comment, something like:\n minimal scale required to represent supplied value without loss\n****\n\n+ */\n+Datum\n+numeric_minscale(PG_FUNCTION_ARGS)\n+{\n+\tNumeric\t\tnum = PG_GETARG_NUMERIC(0);\n+\tNumericVar\targ;\n+\tint\t\t\tminscale;\n+\n+\tif (NUMERIC_IS_NAN(num))\n+\t\tPG_RETURN_NULL();\n+\n+\tinit_var_from_num(num, &arg);\n+\tstrip_var(&arg);\n+\n+\tminscale = get_min_scale(&arg);\n+\tfree_var(&arg);\n+\n+\tPG_RETURN_INT32(minscale);\n+}\n+\n+/*\n+ * Reduce scale of numeric value so value is not changed\n\n****\nLikewise, comment text could be improved\n****\n\n+ */\n+Datum\n+numeric_trim_scale(PG_FUNCTION_ARGS)\n+{\n+\tNumeric\t\tnum = PG_GETARG_NUMERIC(0);\n+\tNumeric\t\tres;\n+\tNumericVar\tresult;\n+\n+\tif (NUMERIC_IS_NAN(num))\n+\t\tPG_RETURN_NULL();\n+\n+\tinit_var_from_num(num, &result);\n+\tstrip_var(&result);\n+\n+\tresult.dscale = get_min_scale(&result);\n+\n+\tres = 
make_result(&result);\n+\tfree_var(&result);\n+\n+\tPG_RETURN_NUMERIC(res);\n+}\n \n /*\n ----------------------------------------------------------------------\n * diff --git a/src/include/catalog/pg_proc.dat\n b/src/include/catalog/pg_proc.dat index 58ea5b982b..e603a5d8dd 100644\n\n****\nHow about moving these new lines to right after line# 4255, the\nscale() function?\n****\n\n--- a/src/include/catalog/pg_proc.dat\n+++ b/src/include/catalog/pg_proc.dat\n@@ -4288,6 +4288,12 @@\n proname => 'width_bucket', prorettype => 'int4',\n proargtypes => 'numeric numeric numeric int4',\n prosrc => 'width_bucket_numeric' },\n+{ oid => '3434', descr => 'returns minimal scale of numeric value',\n\n****\nHow about a descr of?:\n\n minimal scale needed to store the supplied value without data loss\n****\n\n+ proname => 'minscale', prorettype => 'int4', proargtypes =>\n 'numeric',\n+ prosrc => 'numeric_minscale' },\n+{ oid => '3435', descr => 'returns numeric value with minimal scale',\n\n****\nAnd likewise a descr of?:\n\n numeric with minimal scale needed to represent the given value\n****\n\n+ proname => 'trim_scale', prorettype => 'numeric', proargtypes =>\n 'numeric',\n+ prosrc => 'numeric_trim_scale' },\n \n { oid => '1747',\n proname => 'time_pl_interval', prorettype => 'time',\ndiff --git a/src/test/regress/expected/numeric.out\n b/src/test/regress/expected/numeric.out index 1cb3c3bfab..778c204b13\n 100644\n\n****\nI have suggestions:\n\nGive the 2 functions separate comments (-- Tests for minscale() and\n-- Tests for trim_scale())\n\nPut () after the function names in the comments\nbecause that's what scale() does.\n\nMove the lines so the tests are right after the tests of scale().\n\nBe explicit when testing for NULL or NaN. I don't know that this is\nconsistent with the rest of the regression tests but I don't see how\nbeing explicit could be wrong. 
Otherwise NULL and NaN are output the\nsame (\"\") and you're not really testing.\n\nSo test with expressions like \"foo(NULL) IS NULL\" or\n\"foo('NaN'::NUMERIC) = 'NaN::NUMERIC\" and look for t (or f) results.\n\n****\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Sat, 7 Dec 2019 19:23:25 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@meme.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "Hi\n\nne 8. 12. 2019 v 2:23 odesílatel Karl O. Pinc <kop@meme.com> napsal:\n\n> Hello Pavel,\n>\n> On Mon, 11 Nov 2019 15:47:37 +0100\n> Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> > Here is a patch. It's based on Dean's suggestions.\n> >\n> > I implemented two functions - first minscale, second trim_scale. The\n> > overhead of second is minimal - so I think it can be good to have it.\n> > I started design with the name \"trim_scale\", but the name can be any\n> > other.\n>\n> Here are my thoughts on your patch.\n>\n> My one substantial criticism is that I believe that\n> trim_scale('NaN'::numeric) should return NaN.\n> So the test output should look like:\n>\n> template1=# select trim_scale('nan'::numeric) = 'nan'::numeric;\n> ?column?\n> ----------\n> t\n> (1 row)\n>\n\nfixed\n\n\n>\n> FWIW, I bumped around the Internet and looked at Oracle docs to see if\n> there's any reason why minscale() might not be a good function name.\n> I couldn't find any problems.\n>\n> I also couldn't think of a better name than trim_scale() and don't\n> have any problems with the name.\n>\n> My other suggestions mostly have to do with documentation. 
Your code\n> looks pretty good to me, looks like the existing code, you name\n> variables and function names as in existing code, etc.\n>\n> I comment on various hunks in line below:\n>\n> diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n> index 28eb322f3f..6f142cd679 100644\n> --- a/doc/src/sgml/func.sgml\n> +++ b/doc/src/sgml/func.sgml\n> @@ -918,6 +918,19 @@\n> <entry><literal>6.0000000000</literal></entry>\n> </row>\n>\n> + <row>\n> + <entry>\n> + <indexterm>\n> + <primary>minscale</primary>\n> + </indexterm>\n> +\n> <literal><function>minscale(<type>numeric</type>)</function></literal>\n> + </entry>\n> + <entry><type>integer</type></entry>\n> + <entry>returns minimal scale of the argument (the number of\n> decimal digits in the fractional part)</entry>\n> + <entry><literal>scale(8.4100)</literal></entry>\n> + <entry><literal>2</literal></entry>\n> + </row>\n> +\n> <row>\n> <entry>\n> <indexterm>\n>\n> *****\n> Your description does not say what the minimal scale is. How about:\n>\n> minimal scale (number of decimal digits in the fractional part) needed\n> to store the supplied value without data loss\n> *****\n>\n\nsounds better, updated\n\n\n> @@ -1041,6 +1054,19 @@\n> <entry><literal>1.4142135623731</literal></entry>\n> </row>\n>\n> + <row>\n> + <entry>\n> + <indexterm>\n> + <primary>trim_scale</primary>\n> + </indexterm>\n> +\n> <literal><function>trim_scale(<type>numeric</type>)</function></literal>\n> + </entry>\n> + <entry><type>numeric</type></entry>\n> + <entry>reduce scale of the argument (the number of decimal\n> digits in the fractional part)</entry>\n> + <entry><literal>scale(8.4100)</literal></entry>\n> + <entry><literal>8.41</literal></entry>\n> + </row>\n> +\n> <row>\n> <entry>\n> <indexterm>\n>\n> ****\n> How about:\n>\n> reduce the scale (the number of decimal digits in the fractional part)\n> to the minimum needed to store the supplied value without data loss\n> *****\n>\n\nok, changed\n\n\n> diff --git 
a/src/backend/utils/adt/numeric.c\n> b/src/backend/utils/adt/numeric.c index a00db3ce7a..35234aee4c 100644\n> --- a/src/backend/utils/adt/numeric.c\n> +++ b/src/backend/utils/adt/numeric.c\n>\n> ****\n> I believe the hunks in this file should start at about line# 3181.\n> This is right after numeric_scale(). Seems like all the scale\n> related functions should be together.\n>\n> There's no hard standard but I don't see why lines (comment lines in\n> your case) should be longer than 78 characters without good reason.\n> Please reformat.\n> ****\n>\n> @@ -5620,6 +5620,88 @@ int2int4_sum(PG_FUNCTION_ARGS)\n> PG_RETURN_DATUM(Int64GetDatumFast(transdata->sum));\n> }\n>\n> +/*\n> + * Calculate minimal display scale. The var should be stripped already.\n>\n> ****\n> I think you can get rid of the word \"display\" in the comment.\n> ****\n>\n\ndone\n\n\n> + */\n> +static int\n> +get_min_scale(NumericVar *var)\n> +{\n> + int minscale = 0;\n> +\n> + if (var->ndigits > 0)\n> + {\n> + NumericDigit last_digit;\n> +\n> + /* maximal size of minscale, can be lower */\n> + minscale = (var->ndigits - var->weight - 1) *\n> DEC_DIGITS; +\n> + /*\n> + * When there are not digits after decimal point, the\n> previous expression\n>\n> ****\n> s/not/no/\n> ****\n>\n> + * can be negative. 
In this case, the minscale must be\n> zero.\n> + */\n>\n> ****\n> s/can be/is/\n> ****\n>\n> + if (minscale > 0)\n> + {\n> + /* reduce minscale if trailing digits in last\n> numeric digits are zero */\n> + last_digit = var->digits[var->ndigits - 1];\n> +\n> + while (last_digit % 10 == 0)\n> + {\n> + minscale--;\n> + last_digit /= 10;\n> + }\n> + }\n> + else\n> + minscale = 0;\n> + }\n> +\n> + return minscale;\n> +}\n> +\n> +/*\n> + * Returns minimal scale of numeric value when value is not changed\n>\n> ****\n> Improve comment, something like:\n> minimal scale required to represent supplied value without loss\n>\n\nok\n\n****\n>\n> + */\n> +Datum\n> +numeric_minscale(PG_FUNCTION_ARGS)\n> +{\n> + Numeric num = PG_GETARG_NUMERIC(0);\n> + NumericVar arg;\n> + int minscale;\n> +\n> + if (NUMERIC_IS_NAN(num))\n> + PG_RETURN_NULL();\n> +\n> + init_var_from_num(num, &arg);\n> + strip_var(&arg);\n> +\n> + minscale = get_min_scale(&arg);\n> + free_var(&arg);\n> +\n> + PG_RETURN_INT32(minscale);\n> +}\n> +\n> +/*\n> + * Reduce scale of numeric value so value is not changed\n>\n> ****\n> Likewise, comment text could be improved\n> ****\n>\n> + */\n> +Datum\n> +numeric_trim_scale(PG_FUNCTION_ARGS)\n> +{\n> + Numeric num = PG_GETARG_NUMERIC(0);\n> + Numeric res;\n> + NumericVar result;\n> +\n> + if (NUMERIC_IS_NAN(num))\n> + PG_RETURN_NULL();\n> +\n> + init_var_from_num(num, &result);\n> + strip_var(&result);\n> +\n> + result.dscale = get_min_scale(&result);\n> +\n> + res = make_result(&result);\n> + free_var(&result);\n> +\n> + PG_RETURN_NUMERIC(res);\n> +}\n>\n> /*\n> ----------------------------------------------------------------------\n> * diff --git a/src/include/catalog/pg_proc.dat\n> b/src/include/catalog/pg_proc.dat index 58ea5b982b..e603a5d8dd 100644\n>\n> ****\n> How about moving these new lines to right after line# 4255, the\n> scale() function?\n> ****\n>\n\nhas sense, moved\n\n\n> --- a/src/include/catalog/pg_proc.dat\n> +++ b/src/include/catalog/pg_proc.dat\n> @@ 
-4288,6 +4288,12 @@\n> proname => 'width_bucket', prorettype => 'int4',\n> proargtypes => 'numeric numeric numeric int4',\n> prosrc => 'width_bucket_numeric' },\n> +{ oid => '3434', descr => 'returns minimal scale of numeric value',\n>\n> ****\n> How about a descr of?:\n>\n> minimal scale needed to store the supplied value without data loss\n> ****\n>\n\ndone\n\n>\n> + proname => 'minscale', prorettype => 'int4', proargtypes =>\n> 'numeric',\n> + prosrc => 'numeric_minscale' },\n> +{ oid => '3435', descr => 'returns numeric value with minimal scale',\n>\n> ****\n> And likewise a descr of?:\n>\n> numeric with minimal scale needed to represent the given value\n> ****\n>\n> + proname => 'trim_scale', prorettype => 'numeric', proargtypes =>\n> 'numeric',\n> + prosrc => 'numeric_trim_scale' },\n>\n\ndone\n\n\n> { oid => '1747',\n> proname => 'time_pl_interval', prorettype => 'time',\n> diff --git a/src/test/regress/expected/numeric.out\n> b/src/test/regress/expected/numeric.out index 1cb3c3bfab..778c204b13\n> 100644\n>\n> ****\n> I have suggestions:\n>\n> Give the 2 functions separate comments (-- Tests for minscale() and\n> -- Tests for trim_scale())\n>\n> Put () after the function names in the comments\n> because that's what scale() does.\n>\n> Move the lines so the tests are right after the tests of scale().\n>\n> Be explicit when testing for NULL or NaN. I don't know that this is\n> consistent with the rest of the regression tests but I don't see how\n> being explicit could be wrong. Otherwise NULL and NaN are output the\n> same (\"\") and you're not really testing.\n>\n> So test with expressions like \"foo(NULL) IS NULL\" or\n> \"foo('NaN'::NUMERIC) = 'NaN::NUMERIC\" and look for t (or f) results.\n>\n\nok fixed\n\nThank you for review - I am sending updated rebased patch. 
Please, update\ncomments freely - my language skills (about English lang) are basic.\n\nRegards\n\nPavel\n\n\n\n> ****\n>\n> Regards,\n>\n> Karl <kop@meme.com>\n> Free Software: \"You don't pay back, you pay forward.\"\n> -- Robert A. Heinlein\n>",
"msg_date": "Sun, 8 Dec 2019 08:38:38 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "\"Karl O. Pinc\" <kop@meme.com> writes:\n> FWIW, I bumped around the Internet and looked at Oracle docs to see if\n> there's any reason why minscale() might not be a good function name.\n> I couldn't find any problems.\n\n> I also couldn't think of a better name than trim_scale() and don't\n> have any problems with the name.\n\nI'd just comment that it seems weird that the same patch is introducing\ntwo functions with inconsistently chosen names. Why does one have\nan underscore separating the words and the other not? I haven't got\na large investment in either naming convention specifically, but it'd\nbe nice if we could at least pretend to be considering consistency.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Dec 2019 13:57:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "On Sun, 08 Dec 2019 13:57:00 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"Karl O. Pinc\" <kop@meme.com> writes:\n> > FWIW, I bumped around the Internet and looked at Oracle docs to see\n> > if there's any reason why minscale() might not be a good function\n> > name. I couldn't find any problems. \n> \n> > I also couldn't think of a better name than trim_scale() and don't\n> > have any problems with the name. \n> \n> I'd just comment that it seems weird that the same patch is\n> introducing two functions with inconsistently chosen names. Why does\n> one have an underscore separating the words and the other not? I\n> haven't got a large investment in either naming convention\n> specifically, but it'd be nice if we could at least pretend to be\n> considering consistency.\n\nConsistency would be lovely. I don't feel qualified\nto make the decision but here's what I came up with:\n\nI re-read the back-threads and don't see any discussion\nof the naming of minscale().\n\nMy thoughts run toward asking\nthe \"what is a word?\" question, along with \"what is the\npolicy for separating a word?\". Is \"min\" different\nfrom the prefix \"sub\"?\n\n\"Trim\" seems to clearly count as a word and trim_scale()\nseems mostly consistent with existing function names.\n(E.g. width_bucket(), convert_to(). But there's no\ntrue consistency. Plenty of functions don't separate\nwords with \"_\". E.g. setseed().)\n\nAs far as \"min\" goes, glancing through function names [1]\ndoes not help much. The index indicates that when PG puts \"min\"\nin a configuration parameter it separates it with \"_\".\n(Looking at \"min\" in the index.)\nLooking at the function names containing \"min\" [2] yields:\n\n aclitemin\n brin_minmax_add_value\n brin_minmax_consistent\n brin_minmax_opcinfo\n brin_minmax_union\n min\n numeric_uminus\n pg_terminate_backend\n range_minus\n txid_snapshot_xmin\n\nNot especially helpful. 
\n\nI'm inclined to want\nmin_scale() instead of \"minscale()\" based on\nhow config parameters are named and for consistency\nwith trim_scale(). Pavel, if you agree then\nlet's just change minscale() to min_scale() and\nlet people object if they don't like it.\n\nRegards.\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n[1] \nselect pg_proc.proname\n from pg_proc\n group by pg_proc.proname\n order by pg_proc.proname;\n\n[2]\nselect pg_proc.proname\n from pg_proc\n where pg_proc.proname like '%min%'\n group by pg_proc.proname\n order by pg_proc.proname;\n\n\n",
"msg_date": "Sun, 8 Dec 2019 20:22:39 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@meme.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "Hi Pavel,\n\nThanks for your changes. More inline below:\n\nOn Sun, 8 Dec 2019 08:38:38 +0100\nPavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> ne 8. 12. 2019 v 2:23 odesílatel Karl O. Pinc <kop@meme.com> napsal:\n\n> > On Mon, 11 Nov 2019 15:47:37 +0100\n> > Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> > > I implemented two functions - first minscale, second trim_scale.\n> > > The overhead of second is minimal - so I think it can be good to\n> > > have it. I started design with the name \"trim_scale\", but the\n> > > name can be any other. \n\n\n> > I comment on various hunks in line below:\n\n> \n> > diff --git a/src/backend/utils/adt/numeric.c\n> > b/src/backend/utils/adt/numeric.c index a00db3ce7a..35234aee4c\n> > 100644 --- a/src/backend/utils/adt/numeric.c\n> > +++ b/src/backend/utils/adt/numeric.c\n> >\n> > ****\n> > I believe the hunks in this file should start at about line# 3181.\n> > This is right after numeric_scale(). Seems like all the scale\n> > related functions should be together.\n> >\n> > There's no hard standard but I don't see why lines (comment lines in\n> > your case) should be longer than 78 characters without good reason.\n> > Please reformat.\n> > ****\n\nI don't see any response from you regarding the above two suggestions.\n\n\n> \n> > + */\n> > +static int\n> > +get_min_scale(NumericVar *var)\n> > +{\n> > + int minscale = 0;\n> > +\n> > + if (var->ndigits > 0)\n> > + {\n> > + NumericDigit last_digit;\n> > +\n> > + /* maximal size of minscale, can be lower */\n> > + minscale = (var->ndigits - var->weight - 1) *\n> > DEC_DIGITS; +\n> > + /*\n> > + * When there are not digits after decimal point,\n> > the previous expression\n> >\n> > ****\n> > s/not/no/\n> > ****\n> >\n> > + * can be negative. 
In this case, the minscale must\n> > be zero.\n> > + */\n> >\n> > ****\n> > s/can be/is/\n> > ****\n\nBy the above, I intended the comment be changed (after line wrapping)\nto:\n /*\n * When there are no digits after decimal point,\n * the previous expression is negative. In this\n * case the minscale must be zero.\n */\n\n(Oh yes, on re-reading I think the comma is unnecessary so I removed it too.)\n\n\n\n> >\n> > + if (minscale > 0)\n> > + {\n> > + /* reduce minscale if trailing digits in\n> > last numeric digits are zero */\n\nAnd the above comment should either be wrapped (as requested above)\nor eliminated. I like comments but I'm not sure this one contributes\nanything.\n\n\n> > --- a/src/include/catalog/pg_proc.dat\n> > +++ b/src/include/catalog/pg_proc.dat\n> > @@ -4288,6 +4288,12 @@\n> > proname => 'width_bucket', prorettype => 'int4',\n> > proargtypes => 'numeric numeric numeric int4',\n> > prosrc => 'width_bucket_numeric' },\n> > +{ oid => '3434', descr => 'returns minimal scale of numeric value',\n> >\n> > ****\n> > How about a descr of?:\n> >\n> > minimal scale needed to store the supplied value without data loss\n> > ****\n> > \n> \n> done\n> \n> >\n> > + proname => 'minscale', prorettype => 'int4', proargtypes =>\n> > 'numeric',\n> > + prosrc => 'numeric_minscale' },\n> > +{ oid => '3435', descr => 'returns numeric value with minimal\n> > scale',\n> >\n> > ****\n> > And likewise a descr of?:\n> >\n> > numeric with minimal scale needed to represent the given value\n> > ****\n> >\n> > + proname => 'trim_scale', prorettype => 'numeric', proargtypes =>\n> > 'numeric',\n> > + prosrc => 'numeric_trim_scale' },\n> > \n> \n> done\n\nThanks for these changes. Looking at pg_proc.dat there seems to\nbe an effort made to keep the lines to a maximum of 78 or 80\ncharacters. This means starting \"descr => '...\" on new lines\nwhen the description is long. 
Please reformat, doing this or,\nif you like, something even more clever to keep the lines short.\n\nLooking good. We're making progress.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Sun, 8 Dec 2019 20:51:18 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@meme.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "Hi Pavel,\n\nI've had some thoughts about the regression tests.\n\nIt wouldn't hurt to move them to right after the\nscale() tests in numeric.sql.\n\nI believe your tests are covering all the code paths\nbut it is not clear just what test does what.\nI don't see a lot of comments in the tests so I don't\nknow that it'd be appropriate to put them in to\ndescribe just what's tested. But in any case it\ncould be nice to choose values where it is at least\nsort of apparent what part of the codebase is tested.\n\nFWIW, although the code paths are covered, the possible\ndata permutations are not. E.g. I don't see a case\nwhere scale > 0 and the NDIGITS of the last digit is full.\n\nThere are also some tests (the 0 and 0.00 tests) that duplicates\nthe execution path. In the 0 case I don't see a problem\nbut as a rule there's not a lot of point. Better test\nvalues would (mostly) eliminate these.\n\nSo, my thoughts run along these lines:\n\nselect minscale(numeric 'NaN') is NULL; -- should be true\nselect minscale(NULL::numeric) is NULL; -- should be true\nselect minscale(0); -- no digits\nselect minscale(0.00); -- no digits again\nselect minscale(1.0); -- no scale\nselect minscale(1.1); -- scale 1\nselect minscale(1.12); -- scale 2\nselect minscale(1.123); -- scale 3\nselect minscale(1.1234); -- scale 4, filled digit\nselect minscale(1.12345); -- scale 5, 2 NDIGITS\nselect minscale(1.1000); -- 1 pos in NDIGITS\nselect minscale(1.1200); -- 2 pos in NDIGITS\nselect minscale(1.1230); -- 3 pos in NDIGITS\nselect minscale(1.1234); -- all pos in NDIGITS\nselect minscale(1.12345000); -- 2 NDIGITS\nselect minscale(1.123400000000); -- strip() required/done\nselect minscale(12345.123456789012345); -- \"big\" number\nselect minscale(-12345.12345); -- negative number\nselect minscale(1e100); -- very big number\nselect minscale(1e100::numeric + 0.1); -- big number with scale\n\nI don't know why you chose some of your values so if there's\nsomething you were testing 
for that the above does not cover\nplease include it.\n\nSo, a combination of white and black box testing. Having written\nit out it seems like a lot of testing for such a simple function.\nOn the other hand I don't see a lot of cost in having all\nthese tests. Opinions welcome.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Mon, 9 Dec 2019 12:15:22 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@meme.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "On Mon, 9 Dec 2019 12:15:22 -0600\n\"Karl O. Pinc\" <kop@meme.com> wrote:\n\n> I've had some thoughts about the regression tests.\n\n> Having written\n> it out it seems like a lot of testing for such a simple function.\n\nFYI.\n\nI don't see trim_scale() needing such exhaustive testing because you'll\nhave already tested a lot with the min_scale() tests.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Mon, 9 Dec 2019 12:25:23 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@meme.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "po 9. 12. 2019 v 19:15 odesílatel Karl O. Pinc <kop@meme.com> napsal:\n\n> Hi Pavel,\n>\n> I've had some thoughts about the regression tests.\n>\n> It wouldn't hurt to move them to right after the\n> scale() tests in numeric.sql.\n>\n> I believe your tests are covering all the code paths\n> but it is not clear just what test does what.\n> I don't see a lot of comments in the tests so I don't\n> know that it'd be appropriate to put them in to\n> describe just what's tested. But in any case it\n> could be nice to choose values where it is at least\n> sort of apparent what part of the codebase is tested.\n>\n> FWIW, although the code paths are covered, the possible\n> data permutations are not. E.g. I don't see a case\n> where scale > 0 and the NDIGITS of the last digit is full.\n>\n> There are also some tests (the 0 and 0.00 tests) that duplicates\n> the execution path. In the 0 case I don't see a problem\n> but as a rule there's not a lot of point. Better test\n> values would (mostly) eliminate these.\n>\n> So, my thoughts run along these lines:\n>\n> select minscale(numeric 'NaN') is NULL; -- should be true\n> select minscale(NULL::numeric) is NULL; -- should be true\n> select minscale(0); -- no digits\n> select minscale(0.00); -- no digits again\n> select minscale(1.0); -- no scale\n> select minscale(1.1); -- scale 1\n> select minscale(1.12); -- scale 2\n> select minscale(1.123); -- scale 3\n> select minscale(1.1234); -- scale 4, filled digit\n> select minscale(1.12345); -- scale 5, 2 NDIGITS\n> select minscale(1.1000); -- 1 pos in NDIGITS\n> select minscale(1.1200); -- 2 pos in NDIGITS\n> select minscale(1.1230); -- 3 pos in NDIGITS\n> select minscale(1.1234); -- all pos in NDIGITS\n> select minscale(1.12345000); -- 2 NDIGITS\n> select minscale(1.123400000000); -- strip() required/done\n> select minscale(12345.123456789012345); -- \"big\" number\n> select minscale(-12345.12345); -- negative number\n> select minscale(1e100); -- very big number\n> 
select minscale(1e100::numeric + 0.1); -- big number with scale\n>\n> I don't know why you chose some of your values so if there's\n> something you were testing for that the above does not cover\n> please include it.\n>\n>\nsome values was proposed in discussion, others are from tests of scale\nfunction.\n\nI used proposed tests by you.\n\nRegards\n\nPavel\n\n\n> So, a combination of white and black box testing. Having written\n> it out it seems like a lot of testing for such a simple function.\n> On the other hand I don't see a lot of cost in having all\n> these tests. Opinions welcome.\n>\n> Regards,\n>\n> Karl <kop@meme.com>\n> Free Software: \"You don't pay back, you pay forward.\"\n> -- Robert A. Heinlein\n>",
"msg_date": "Mon, 9 Dec 2019 20:51:21 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "po 9. 12. 2019 v 3:51 odesílatel Karl O. Pinc <kop@meme.com> napsal:\n\n> Hi Pavel,\n>\n> Thanks for your changes. More inline below:\n>\n> On Sun, 8 Dec 2019 08:38:38 +0100\n> Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> > ne 8. 12. 2019 v 2:23 odesílatel Karl O. Pinc <kop@meme.com> napsal:\n>\n> > > On Mon, 11 Nov 2019 15:47:37 +0100\n> > > Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> > > > I implemented two functions - first minscale, second trim_scale.\n> > > > The overhead of second is minimal - so I think it can be good to\n> > > > have it. I started design with the name \"trim_scale\", but the\n> > > > name can be any other.\n>\n>\n> > > I comment on various hunks in line below:\n>\n> >\n> > > diff --git a/src/backend/utils/adt/numeric.c\n> > > b/src/backend/utils/adt/numeric.c index a00db3ce7a..35234aee4c\n> > > 100644 --- a/src/backend/utils/adt/numeric.c\n> > > +++ b/src/backend/utils/adt/numeric.c\n> > >\n> > > ****\n> > > I believe the hunks in this file should start at about line# 3181.\n> > > This is right after numeric_scale(). Seems like all the scale\n> > > related functions should be together.\n> > >\n> > > There's no hard standard but I don't see why lines (comment lines in\n> > > your case) should be longer than 78 characters without good reason.\n> > > Please reformat.\n> > > ****\n>\n> I don't see any response from you regarding the above two suggestions.\n>\n>\n> >\n> > > + */\n> > > +static int\n> > > +get_min_scale(NumericVar *var)\n> > > +{\n> > > + int minscale = 0;\n> > > +\n> > > + if (var->ndigits > 0)\n> > > + {\n> > > + NumericDigit last_digit;\n> > > +\n> > > + /* maximal size of minscale, can be lower */\n> > > + minscale = (var->ndigits - var->weight - 1) *\n> > > DEC_DIGITS; +\n> > > + /*\n> > > + * When there are not digits after decimal point,\n> > > the previous expression\n> > >\n> > > ****\n> > > s/not/no/\n> > > ****\n> > >\n> > > + * can be negative. 
In this case, the minscale must\n> > > be zero.\n> > > + */\n> > >\n> > > ****\n> > > s/can be/is/\n> > > ****\n>\n> By the above, I intended the comment be changed (after line wrapping)\n> to:\n> /*\n> * When there are no digits after decimal point,\n> * the previous expression is negative. In this\n> * case the minscale must be zero.\n> */\n>\n> (Oh yes, on re-reading I think the comma is unnecessary so I removed it\n> too.)\n>\n>\n>\n> > >\n> > > + if (minscale > 0)\n> > > + {\n> > > + /* reduce minscale if trailing digits in\n> > > last numeric digits are zero */\n>\n> And the above comment should either be wrapped (as requested above)\n> or eliminated. I like comments but I'm not sure this one contributes\n> anything.\n>\n>\n> > > --- a/src/include/catalog/pg_proc.dat\n> > > +++ b/src/include/catalog/pg_proc.dat\n> > > @@ -4288,6 +4288,12 @@\n> > > proname => 'width_bucket', prorettype => 'int4',\n> > > proargtypes => 'numeric numeric numeric int4',\n> > > prosrc => 'width_bucket_numeric' },\n> > > +{ oid => '3434', descr => 'returns minimal scale of numeric value',\n> > >\n> > > ****\n> > > How about a descr of?:\n> > >\n> > > minimal scale needed to store the supplied value without data loss\n> > > ****\n> > >\n> >\n> > done\n> >\n> > >\n> > > + proname => 'minscale', prorettype => 'int4', proargtypes =>\n> > > 'numeric',\n> > > + prosrc => 'numeric_minscale' },\n> > > +{ oid => '3435', descr => 'returns numeric value with minimal\n> > > scale',\n> > >\n> > > ****\n> > > And likewise a descr of?:\n> > >\n> > > numeric with minimal scale needed to represent the given value\n> > > ****\n> > >\n> > > + proname => 'trim_scale', prorettype => 'numeric', proargtypes =>\n> > > 'numeric',\n> > > + prosrc => 'numeric_trim_scale' },\n> > >\n> >\n> > done\n>\n> Thanks for these changes. Looking at pg_proc.dat there seems to\n> be an effort made to keep the lines to a maximum of 78 or 80\n> characters. 
This means starting \"descr => '...\" on new lines\n> when the description is long. Please reformat, doing this or,\n> if you like, something even more clever to keep the lines short.\n>\n> Looking good. We're making progress.\n>\n\nI fixed almost all mentioned issues (that I understand)\n\nI am sending updated patch\n\nRegards\n\nPavel\n\n>\n> Regards,\n>\n> Karl <kop@meme.com>\n> Free Software: \"You don't pay back, you pay forward.\"\n> -- Robert A. Heinlein\n>",
"msg_date": "Mon, 9 Dec 2019 21:04:21 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "On Mon, 9 Dec 2019 21:04:21 +0100\nPavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> I fixed almost all mentioned issues (that I understand)\n\nIf you don't understand you might ask, or at least say.\nThat way I know you've noticed my remarks and I don't\nhave to repeat them.\n\nI have 2 remaining suggestions.\n\n1) As previously suggested: Consider moving\nall the code you added to numeric.c to right after\nthe scale() related code. This is equivalent to\nwhat was done in pg_proc.dat and regression tests\nwhere all the scale related stuff is in one\nplace in the file.\n\n2) Now that the function is called min_scale()\nit might be nice if your \"minscale\" variable\nin numeric.c was named \"min_scale\".\n\nI don't feel particularly strongly about either\nof the above but think them a slight improvement.\n\nI also wonder whether all the trim_scale() tests\nare now necessary, but not enough to make any suggestions.\nEspecially because, well, tests are good.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Mon, 9 Dec 2019 17:03:43 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@meme.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "út 10. 12. 2019 v 0:03 odesílatel Karl O. Pinc <kop@meme.com> napsal:\n\n> On Mon, 9 Dec 2019 21:04:21 +0100\n> Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> > I fixed almost all mentioned issues (that I understand)\n>\n> If you don't understand you might ask, or at least say.\n> That way I know you've noticed my remarks and I don't\n> have to repeat them.\n>\n> I have 2 remaining suggestions.\n>\n> 1) As previously suggested: Consider moving\n> all the code you added to numeric.c to right after\n> the scale() related code. This is equivalent to\n> what was done in pg_proc.dat and regression tests\n> where all the scale related stuff is in one\n> place in the file.\n>\n> 2) Now that the function is called min_scale()\n> it might be nice if your \"minscale\" variable\n> in numeric.c was named \"min_scale\".\n>\n> I don't feel particularly strongly about either\n> of the above but think them a slight improvement.\n>\n\ndone\n\n\n> I also wonder whether all the trim_scale() tests\n> are now necessary, but not enough to make any suggestions.\n> Especially because, well, tests are good.\n>\n\nI don't think so tests should be minimalistic - there can be some\nredundancy to coverage some less probable size effects of some future\nchanges. More - there is a small symmetry with min_scale tests - and third\nargument - some times I use tests (result part) as \"documentation\". But I\nhave not any problem to reduce tests if there will be requirement to do it.\n\nRegards\n\nPavel\n\n\n> Regards,\n>\n> Karl <kop@meme.com>\n> Free Software: \"You don't pay back, you pay forward.\"\n> -- Robert A. Heinlein\n>",
"msg_date": "Tue, 10 Dec 2019 07:11:59 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "On Tue, 10 Dec 2019 07:11:59 +0100\nPavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> út 10. 12. 2019 v 0:03 odesílatel Karl O. Pinc <kop@meme.com> napsal:\n> > I also wonder whether all the trim_scale() tests\n> > are now necessary, but not enough to make any suggestions.\n\n> I don't think so tests should be minimalistic - there can be some\n> redundancy to coverage some less probable size effects of some future\n> changes. More - there is a small symmetry with min_scale tests - and\n> third argument - some times I use tests (result part) as\n> \"documentation\".\n\nFine with me.\n\nTests pass against HEAD. Docs build and look good.\nPatch looks good to me.\n\nI'm marking it ready for a committer.\n\nThanks for the work.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Tue, 10 Dec 2019 06:56:33 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@meme.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "út 10. 12. 2019 v 13:56 odesílatel Karl O. Pinc <kop@meme.com> napsal:\n\n> On Tue, 10 Dec 2019 07:11:59 +0100\n> Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> > út 10. 12. 2019 v 0:03 odesílatel Karl O. Pinc <kop@meme.com> napsal:\n> > > I also wonder whether all the trim_scale() tests\n> > > are now necessary, but not enough to make any suggestions.\n>\n> > I don't think so tests should be minimalistic - there can be some\n> > redundancy to coverage some less probable size effects of some future\n> > changes. More - there is a small symmetry with min_scale tests - and\n> > third argument - some times I use tests (result part) as\n> > \"documentation\".\n>\n> Fine with me.\n>\n> Tests pass against HEAD. Docs build and look good.\n> Patch looks good to me.\n>\n> I'm marking it ready for a committer.\n>\n> Thanks for the work.\n>\n\nThank you for review\n\nPavel\n\n\n> Regards,\n>\n> Karl <kop@meme.com>\n> Free Software: \"You don't pay back, you pay forward.\"\n> -- Robert A. Heinlein\n>",
"msg_date": "Tue, 10 Dec 2019 14:47:03 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> út 10. 12. 2019 v 13:56 odesílatel Karl O. Pinc <kop@meme.com> napsal:\n>> I'm marking it ready for a committer.\n\n> Thank you for review\n\nPushed with minor adjustments. Notably, I didn't like having\nget_min_scale() depend on its callers having stripped trailing zeroes\nto avoid getting into a tight infinite loop. That's just trouble\nwaiting to happen, especially since non-stripped numerics are seldom\nseen in practice (ones coming into the SQL-level functions should\nnever look like that, ie the strip_var calls you had are almost\ncertainly dead code). If we did have a code path where the situation\ncould occur, and somebody forgot the strip_var call, the omission\ncould easily escape notice. So I got rid of the strip_var calls and\nmade get_min_scale() defend itself against the case. It's hardly\nany more code, and it should be a shade faster than strip_var anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Jan 2020 12:22:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
},
{
"msg_contents": "po 6. 1. 2020 v 18:22 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > út 10. 12. 2019 v 13:56 odesílatel Karl O. Pinc <kop@meme.com> napsal:\n> >> I'm marking it ready for a committer.\n>\n> > Thank you for review\n>\n> Pushed with minor adjustments. Notably, I didn't like having\n> get_min_scale() depend on its callers having stripped trailing zeroes\n> to avoid getting into a tight infinite loop. That's just trouble\n> waiting to happen, especially since non-stripped numerics are seldom\n> seen in practice (ones coming into the SQL-level functions should\n> never look like that, ie the strip_var calls you had are almost\n> certainly dead code). If we did have a code path where the situation\n> could occur, and somebody forgot the strip_var call, the omission\n> could easily escape notice. So I got rid of the strip_var calls and\n> made get_min_scale() defend itself against the case. It's hardly\n> any more code, and it should be a shade faster than strip_var anyway.\n>\n\nThank you very much\n\nMaybe this issue was part of ToDo list\n\nPavel\n\n\n> regards, tom lane\n>",
"msg_date": "Mon, 6 Jan 2020 19:08:23 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: minscale, rtrim, btrim functions for numeric"
}
] |
[
{
"msg_contents": "Hi,\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=moonjelly&dt=2019-11-09%2010%3A17%3A06\n\nshows a failure, including a backtrace:\n\n======-=-====== stack trace: pgsql.build/src/test/regress/tmp_check/data/core ======-=-======\n[New LWP 42902]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\nCore was generated by `postgres: fabien regression [local] SELECT '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x00000000006d962b in gimme_tour (root=root@entry=0x1cfb4b0, edge_table=edge_table@entry=0x1d3afc0, new_gene=<optimized out>, num_gene=5) at geqo_erx.c:209\n209\t\t\tremove_gene(root, new_gene[i - 1], edge_table[(int) new_gene[i - 1]], edge_table);\n#0 0x00000000006d962b in gimme_tour (root=root@entry=0x1cfb4b0, edge_table=edge_table@entry=0x1d3afc0, new_gene=<optimized out>, num_gene=5) at geqo_erx.c:209\n#1 0x00000000006da0a8 in geqo (root=0x1cfb4b0, number_of_rels=<optimized out>, initial_rels=<optimized out>) at geqo_main.c:190\n#2 0x00000000006de084 in make_one_rel (root=root@entry=0x1cfb4b0, joinlist=joinlist@entry=0x1d0a868) at allpaths.c:227\n#3 0x0000000000701d19 in query_planner (root=root@entry=0x1cfb4b0, qp_callback=qp_callback@entry=0x702300 <standard_qp_callback>, qp_extra=qp_extra@entry=0x7ffd46b55a60) at planmain.c:269\n#4 0x0000000000706844 in grouping_planner () at planner.c:2054\n#5 0x00000000007093c7 in subquery_planner (glob=glob@entry=0x1cfb418, parse=parse@entry=0x1cd77b8, parent_root=parent_root@entry=0x0, hasRecursion=hasRecursion@entry=false, tuple_fraction=tuple_fraction@entry=0) at planner.c:1014\n#6 0x000000000070a803 in standard_planner (parse=0x1cd77b8, cursorOptions=256, boundParams=<optimized out>) at planner.c:406\n#7 0x00000000007cb1dc in pg_plan_query (querytree=0x1cd77b8, cursorOptions=256, boundParams=0x0) at postgres.c:873\n#8 0x00000000007cb2be in pg_plan_queries (querytrees=0x1cfb3c0, 
cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0) at postgres.c:963\n#9 0x00000000007cb618 in exec_simple_query () at postgres.c:1154\n#10 0x00000000007cd384 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x1c23058, dbname=<optimized out>, username=<optimized out>) at postgres.c:4278\n#11 0x000000000074b574 in BackendRun (port=0x1c1c650) at postmaster.c:4498\n#12 BackendStartup (port=0x1c1c650) at postmaster.c:4189\n#13 ServerLoop () at postmaster.c:1727\n#14 0x000000000074c34d in PostmasterMain (argc=argc@entry=8, argv=argv@entry=0x1bf35b0) at postmaster.c:1400\n#15 0x0000000000491f41 in main (argc=8, argv=0x1bf35b0) at main.c:210\n$1 = {si_signo = 11, si_errno = 0, si_code = 1, _sifields = {_pad = {30650304, -12, 0 <repeats 26 times>}, _kill = {si_pid = 30650304, si_uid = 4294967284}, _timer = {si_tid = 30650304, si_overrun = -12, si_sigval = {sival_int = 0, sival_ptr = 0x0}}, _rt = {si_pid = 30650304, si_uid = 4294967284, si_sigval = {sival_int = 0, sival_ptr = 0x0}}, _sigchld = {si_pid = 30650304, si_uid = 4294967284, si_status = 0, si_utime = 0, si_stime = 0}, _sigfault = {si_addr = 0xfffffff401d3afc0, _addr_lsb = 0, _addr_bnd = {_lower = 0x0, _upper = 0x0}}, _sigpoll = {si_band = -51508957248, si_fd = 0}}}\n\nI don't think there's been any relevant code changes since the last\nsuccess.\n\nlast success:\n2019-11-09 09:20:28.346 CET [28785:1] LOG: starting PostgreSQL 13devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 10.0.0 20191102 (experimental), 64-bit\n\nfirst failure:\n2019-11-09 11:19:36.277 CET [42512:1] LOG: starting PostgreSQL 13devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 10.0.0 20191109 (experimental), 64-bit\n\n\nso it sure looks like a gcc upgrade caused the failure. But it's not\nclear wheter it's a compiler bug, or some undefined behaviour that\ntriggers the bug.\n\nFabien, any chance to either bisect or get a bit more information on the\nbacktrace?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 9 Nov 2019 14:19:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "segfault in geqo on experimental gcc animal"
},
{
"msg_contents": "\nHello Andres,\n\n> I don't think there's been any relevant code changes since the last\n> success.\n>\n> last success:\n> 2019-11-09 09:20:28.346 CET [28785:1] LOG: starting PostgreSQL 13devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 10.0.0 20191102 (experimental), 64-bit\n>\n> first failure:\n> 2019-11-09 11:19:36.277 CET [42512:1] LOG: starting PostgreSQL 13devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 10.0.0 20191109 (experimental), 64-bit\n>\n>\n> so it sure looks like a gcc upgrade caused the failure. But it's not\n> clear wheter it's a compiler bug, or some undefined behaviour that\n> triggers the bug.\n>\n> Fabien, any chance to either bisect or get a bit more information on the\n> backtrace?\n\nThere is a promising \"keep_error_builds\" option in buildfarm settings, but \nit does not seem to be used anywhere in the scripts. Well, I can probably \nrelaunch by hand.\n\nHowever, given the experimental nature of the setup, I think that the most \nprobable cause is a newly introduced gcc bug, so I'd suggest to wait to \ncheck whether the issue persists before spending time on that, and if it \npersists to investigate further to either report a bug to gcc or pg, \ndepending.\n\nAlso, I'll recompile gcc before the next weekly builds.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 10 Nov 2019 09:07:55 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: segfault in geqo on experimental gcc animal"
},
{
"msg_contents": ">> so it sure looks like a gcc upgrade caused the failure. But it's not\n>> clear whether it's a compiler bug, or some undefined behaviour that\n>> triggers the bug.\n>> \n>> Fabien, any chance to either bisect or get a bit more information on \n>> the backtrace?\n>\n> There is a promising \"keep_error_builds\" option in buildfarm settings, \n> but it does not seem to be used anywhere in the scripts. Well, I can \n> probably relaunch by hand.\n>\n> However, given the experimental nature of the setup, I think that the \n> most probable cause is a newly introduced gcc bug, so I'd suggest to \n> wait to check whether the issue persist before spending time on that, \n> and if it persists to investigate further to either report a bug to gcc \n> or pg, depending.\n>\n> Also, I'll recompile gcc before the next weekly builds.\n\nI did some manual testing.\n\nAll versions I tested failed miserably (master, 12, 11, 10, \n9.6…). High probability that it is a newly introduced gcc bug, however pg \nis not a nice self-contained test case to submit to gcc for debugging:-(\n\nI suggest to ignore it for the time being, and if the problem persists I'll \ntry to investigate to detect which gcc commit caused the regression.\n\n-- \nFabien.",
"msg_date": "Wed, 13 Nov 2019 15:28:28 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: segfault in geqo on experimental gcc animal"
},
{
"msg_contents": "Hello,\n\nI did a (slow) dichotomy on gcc sources which determined that gcc r277979 \nwas the culprit, then I started a bug report which showed that the issue \nwas already reported this morning by Martin Liška, including a nice \nexample isolated from sources. See:\n\n \thttps://gcc.gnu.org/bugzilla/show_bug.cgi?id=92506\n\n-- \nFabien.",
"msg_date": "Thu, 14 Nov 2019 15:47:57 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: segfault in geqo on experimental gcc animal"
},
{
"msg_contents": "Hi.\n\nYep, I periodically build the PostgreSQL package in openSUSE with the\nlatest GCC, which is how\nI identified this and isolated it to a simple test-case. I would expect a fix\ntoday or tomorrow.\n\nSee you,\nMartin\n\nOn Thu, 14 Nov 2019 at 16:46, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello,\n>\n> I did a (slow) dichotomy on gcc sources which determined that gcc r277979\n> was the culprit, then I started a bug report which showed that the issue\n> was already reported this morning by Martin Liška, including a nice\n> example isolated from sources. See:\n>\n> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92506\n>\n> --\n> Fabien.",
"msg_date": "Thu, 14 Nov 2019 16:52:05 +0100",
"msg_from": "=?UTF-8?Q?Martin_Li=C5=A1ka?= <marxin.liska@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: segfault in geqo on experimental gcc animal"
},
{
"msg_contents": "\n> Yep, I build periodically PostgreSQL package in openSUSE with the latest \n> GCC and so that I identified that and isolated to a simple test-case. I \n> would expect a fix today or tomorrow.\n\nIndeed, the gcc issue reported seems fixed by gcc r278259. I'm updating \nmoonjelly gcc to check if this solves pg compilation woes.\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 15 Nov 2019 08:37:11 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: segfault in geqo on experimental gcc animal"
},
{
"msg_contents": "Yes, after the revision I see other failing tests like:\n...\n select_having ... ok 16 ms\n subselect ... FAILED 92 ms\n union ... FAILED 77 ms\n case ... ok 32 ms\n join ... FAILED 239 ms\n aggregates ... FAILED 136 ms\n transactions ... ok 59 ms\n...\n\nI'm going to investigate that and will inform you guys.\n\nMartin\n\nOn Fri, 15 Nov 2019 at 11:56, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> > Yep, I build periodically PostgreSQL package in openSUSE with the latest\n> > GCC and so that I identified that and isolated to a simple test-case. I\n> > would expect a fix today or tomorrow.\n>\n> Indeed, the gcc issue reported seems fixed by gcc r278259. I'm updating\n> moonjelly gcc to check if this solves pg compilation woes.\n>\n> --\n> Fabien.\n\n\n",
"msg_date": "Fri, 15 Nov 2019 12:24:49 +0100",
"msg_from": "=?UTF-8?Q?Martin_Li=C5=A1ka?= <marxin.liska@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: segfault in geqo on experimental gcc animal"
},
{
"msg_contents": "\n> Yes, after the revision I see other failing tests like:\n\nIndeed, I can confirm there are still 18/195 fails with the updated gcc.\n\n> I'm going to investigate that and will inform you guys.\n\nGreat, thanks!\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 15 Nov 2019 13:01:21 +0100 (CET)",
"msg_from": "Fabien COELHO <fabien.coelho@mines-paristech.fr>",
"msg_from_op": false,
"msg_subject": "Re: segfault in geqo on experimental gcc animal"
},
{
"msg_contents": "Heh, it's me who now breaks postgresql build:\nhttps://gcc.gnu.org/bugzilla/show_bug.cgi?id=92529\n\nMartin\n\nOn Fri, 15 Nov 2019 at 13:01, Fabien COELHO\n<fabien.coelho@mines-paristech.fr> wrote:\n>\n>\n> > Yes, after the revision I see other failing tests like:\n>\n> Indeed, I can confirm there are still 18/195 fails with the updated gcc.\n>\n> > I'm going to investigate that and will inform you guys.\n>\n> Great, thanks!\n>\n> --\n> Fabien.\n\n\n",
"msg_date": "Fri, 15 Nov 2019 13:11:36 +0100",
"msg_from": "=?UTF-8?Q?Martin_Li=C5=A1ka?= <marxin.liska@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: segfault in geqo on experimental gcc animal"
},
{
"msg_contents": "Hello.\n\nThe issue is resolved now and tests are fine for me.\n\nMartin\n\nOn Fri, 15 Nov 2019 at 13:11, Martin Liška <marxin.liska@gmail.com> wrote:\n>\n> Heh, it's me who now breaks postgresql build:\n> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92529\n>\n> Martin\n>\n> On Fri, 15 Nov 2019 at 13:01, Fabien COELHO\n> <fabien.coelho@mines-paristech.fr> wrote:\n> >\n> >\n> > > Yes, after the revision I see other failing tests like:\n> >\n> > Indeed, I can confirm there are still 18/195 fails with the updated gcc.\n> >\n> > > I'm going to investigate that and will inform you guys.\n> >\n> > Great, thanks!\n> >\n> > --\n> > Fabien.\n\n\n",
"msg_date": "Mon, 18 Nov 2019 13:17:34 +0100",
"msg_from": "=?UTF-8?Q?Martin_Li=C5=A1ka?= <marxin.liska@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: segfault in geqo on experimental gcc animal"
},
{
"msg_contents": "\nHello Martin,\n\n> The issue is resolved now and tests are fine for me.\n\nI recompiled gcc trunk and the moonjelly is back to green.\n\nThanks!\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 18 Nov 2019 21:08:06 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: segfault in geqo on experimental gcc animal"
}
]
[
{
"msg_contents": "in src/backend/replication/walsender.c, there is the section quoted below.\nIt looks like nothing interesting happens between the GetFlushRecPtr just\nbefore the loop starts, and the one inside the loop the first time through\nthe loop. If we want to avoid doing CHECK_FOR_INTERRUPTS(); etc.\nneedlessly, then we should check the result of GetFlushRecPtr and return\nearly if it is sufficiently advanced--before entering the loop. If we\ndon't care, then what is the point of updating it twice with no meaningful\naction in between? We could just get rid of the section just before the\nloop starts. The current coding seems confusing, and increases traffic on\na potentially busy spin lock.\n\n\n\n /* Get a more recent flush pointer. */\n if (!RecoveryInProgress())\n RecentFlushPtr = GetFlushRecPtr();\n else\n RecentFlushPtr = GetXLogReplayRecPtr(NULL);\n\n for (;;)\n {\n long sleeptime;\n\n /* Clear any already-pending wakeups */\n ResetLatch(MyLatch);\n\n CHECK_FOR_INTERRUPTS();\n\n /* Process any requests or signals received recently */\n if (ConfigReloadPending)\n {\n ConfigReloadPending = false;\n ProcessConfigFile(PGC_SIGHUP);\n SyncRepInitConfig();\n }\n\n /* Check for input from the client */\n ProcessRepliesIfAny();\n\n /*\n * If we're shutting down, trigger pending WAL to be written out,\n * otherwise we'd possibly end up waiting for WAL that never gets\n * written, because walwriter has shut down already.\n */\n if (got_STOPPING)\n XLogBackgroundFlush();\n\n /* Update our idea of the currently flushed position. */\n if (!RecoveryInProgress())\n RecentFlushPtr = GetFlushRecPtr();\n else\n RecentFlushPtr = GetXLogReplayRecPtr(NULL);\n\nCheers,\n\nJeff",
"msg_date": "Sat, 9 Nov 2019 19:20:38 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Coding in WalSndWaitForWal"
},
{
"msg_contents": "On Sun, Nov 10, 2019 at 5:51 AM Jeff Janes <jeff.janes@gmail.com> wrote:\n>\n> in src/backend/replication/walsender.c, there is the section quoted below. It looks like nothing interesting happens between the GetFlushRecPtr just before the loop starts, and the one inside the loop the first time through the loop. If we want to avoid doing CHECK_FOR_INTERRUPTS(); etc. needlessly, then we should check the result of GetFlushRecPtr and return early if it is sufficiently advanced--before entering the loop. If we don't care, then what is the point of updating it twice with no meaningful action in between? We could just get rid of the section just before the loop starts.\n>\n\n+1. I also think we should do one of the two things suggested by you.\nI would prefer the earlier as it can save us some processing in some cases\nwhen the WAL is flushed in the meantime by WALWriter. BTW, I have\nnoticed that this part of the code is the same as when it was first introduced in\nthe below commit:\n\ncommit 5a991ef8692ed0d170b44958a81a6bd70e90585c\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: Mon Mar 10 13:50:28 2014 -0400\n\n Allow logical decoding via the walsender interface.\n..\n..\nAndres Freund, with contributions from Álvaro Herrera, and further review by me.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sun, 10 Nov 2019 10:43:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
},
{
"msg_contents": "On Sun, Nov 10, 2019 at 10:43:33AM +0530, Amit Kapila wrote:\n> On Sun, Nov 10, 2019 at 5:51 AM Jeff Janes <jeff.janes@gmail.com> wrote:\n>> in src/backend/replication/walsender.c, there is the section\n>> quoted below. It looks like nothing interesting happens between\n>> the GetFlushRecPtr just before the loop starts, and the one inside\n>> the loop the first time through the loop. If we want to avoid\n>> doing CHECK_FOR_INTERRUPTS(); etc. needlessly, then we should\n>> check the result of GetFlushRecPtr and return early if it is\n>> sufficiently advanced--before entering the loop. If we don't\n>> care, then what is the point of updating it twice with no\n>> meaningful action >in between? We could just get rid of the\n>> section just before the loop starts. \n> \n> +1. I also think we should do one of the two things suggested by you.\n> I would prefer earlier as it can save us some processing in some cases\n> when the WAL is flushed in the meantime by WALWriter.\n\nSo your suggestion would be to call GetFlushRecPtr() before the first\ncheck on RecentFlushPtr and before entering the loop? It seems to me\nthat we don't want to do that to avoid any unnecessary spin lock\ncontention if the flush position is sufficiently advanced. Or are you\njust suggesting to move the first check on RecentFlushPtr within the\nloop just after resetting the latch but before checking for\ninterrupts? If we were to do some cleanup here, I would just remove\nthe first update of RecentFlushPtr before the loop as per the\nattached, which is the second suggestion from Jeff. Any thoughts?\n--\nMichael",
"msg_date": "Mon, 11 Nov 2019 11:23:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
},
{
"msg_contents": "On Mon, Nov 11, 2019 at 7:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Nov 10, 2019 at 10:43:33AM +0530, Amit Kapila wrote:\n> > On Sun, Nov 10, 2019 at 5:51 AM Jeff Janes <jeff.janes@gmail.com> wrote:\n> >> in src/backend/replication/walsender.c, there is the section\n> >> quoted below. It looks like nothing interesting happens between\n> >> the GetFlushRecPtr just before the loop starts, and the one inside\n> >> the loop the first time through the loop. If we want to avoid\n> >> doing CHECK_FOR_INTERRUPTS(); etc. needlessly, then we should\n> >> check the result of GetFlushRecPtr and return early if it is\n> >> sufficiently advanced--before entering the loop. If we don't\n> >> care, then what is the point of updating it twice with no\n> >> meaningful action >in between? We could just get rid of the\n> >> section just before the loop starts.\n> >\n> > +1. I also think we should do one of the two things suggested by you.\n> > I would prefer earlier as it can save us some processing in some cases\n> > when the WAL is flushed in the meantime by WALWriter.\n>\n> So your suggestion would be to call GetFlushRecPtr() before the first\n> check on RecentFlushPtr and before entering the loop?\n>\n\nNo. What I meant was to keep the current code as-is and have an\nadditional check on RecentFlushPtr before entering the loop.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Nov 2019 08:55:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
},
{
"msg_contents": "On 2019-Nov-11, Amit Kapila wrote:\n\n> On Mon, Nov 11, 2019 at 7:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> > So your suggestion would be to call GetFlushRecPtr() before the first\n> > check on RecentFlushPtr and before entering the loop?\n> \n> No. What I meant was to keep the current code as-is and have an\n> additional check on RecentFlushPtr before entering the loop.\n\nI noticed that the \"return\" at the bottom of the function does a\nSetLatch(), but the other returns do not. Isn't that a bug?\n\nAlso, what's up with those useless returns?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 11 Nov 2019 13:53:40 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
},
{
"msg_contents": "On Mon, Nov 11, 2019 at 01:53:40PM -0300, Alvaro Herrera wrote:\n> On 2019-Nov-11, Amit Kapila wrote:\n> \n>> On Mon, Nov 11, 2019 at 7:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>> So your suggestion would be to call GetFlushRecPtr() before the first\n>>> check on RecentFlushPtr and before entering the loop?\n>> \n>> No. What I meant was to keep the current code as-is and have an\n>> additional check on RecentFlushPtr before entering the loop.\n\nOkay, but is that really useful? \n\n> I noticed that the \"return\" at the bottom of the function does a\n> SetLatch(), but the other returns do not. Isn't that a bug?\n\nI don't think that it is necessary to set the latch in the first check\nas in this case WalSndWaitForWal() would have gone through its loop to\nset RecentFlushPtr to the last position available already, which would\nhave already set the latch. If you add an extra check based on (loc\n<= RecentFlushPtr) as your patch does, then you need to set the\nlatch appropriately before returning.\n\nAnyway, I don't think that there is any reason to do this extra work\nat the beginning of the routine before entering the loop. But there\nis an extra reason not to do that: your patch would prevent more pings\nto be sent, which means less flush LSN updates. If you think that\nthe extra check makes sense, then I think that the patch should at\nleast clearly document why it is done this way, and why it makes\nsense to do so.\n\nPersonally, my take would be to remove the extra call to\nGetFlushRecPtr() before entering the loop.\n\n> Also, what's up with those useless returns?\n\nYes, let's rip them out.\n--\nMichael",
"msg_date": "Tue, 12 Nov 2019 11:17:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
},
{
"msg_contents": "At Tue, 12 Nov 2019 11:17:26 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Nov 11, 2019 at 01:53:40PM -0300, Alvaro Herrera wrote:\n> > On 2019-Nov-11, Amit Kapila wrote:\n> > \n> >> On Mon, Nov 11, 2019 at 7:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >>> So your suggestion would be to call GetFlushRecPtr() before the first\n> >>> check on RecentFlushPtr and before entering the loop?\n> >> \n> >> No. What I meant was to keep the current code as-is and have an\n> >> additional check on RecentFlushPtr before entering the loop.\n> \n> Okay, but is that really useful? \n> \n> > I noticed that the \"return\" at the bottom of the function does a\n> > SetLatch(), but the other returns do not. Isn't that a bug?\n> \n> I don't think that it is necessary to set the latch in the first check\n> as in this case WalSndWaitForWal() would have gone through its loop to\n> set RecentFlushPtr to the last position available already, which would\n> have already set the latch. If you add an extra check based on (loc\n> <= RecentFlushPtr) as your patch does, then you need to set the\n> latch appropriately before returning.\n> \n> Anyway, I don't think that there is any reason to do this extra work\n> at the beginning of the routine before entering the loop. But there\n\nIt seems to me as if it is a fast-path for when RecentFlushPtr has reached the\ntarget location before entering the loop. It is frequently called in\n(AFAICS) interruptible loops. From that standpoint I vote +1 for Amit.\n\nOr we could shift the stuff of the for loop so that the duplicate code\nis placed at the beginning.\n\n> is an extra reason not to do that: your patch would prevent more pings\n> to be sent, which means less flush LSN updates. 
If you think that\n> the extra check makes sense, then I think that the patch should at\n> least clearly document why it is done this way, and why it makes\n> sense to do so.\n> \n> Personally, my take would be to remove the extra call to\n> GetFlushRecPtr() before entering the loop.\n> \n> > Also, what's up with those useless returns?\n> \n> Yes, let's rip them out.\n\nIt seems to me that the fast-path is intentional.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 12 Nov 2019 13:11:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 7:47 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Nov 11, 2019 at 01:53:40PM -0300, Alvaro Herrera wrote:\n> > On 2019-Nov-11, Amit Kapila wrote:\n> >\n> >> On Mon, Nov 11, 2019 at 7:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >>> So your suggestion would be to call GetFlushRecPtr() before the first\n> >>> check on RecentFlushPtr and before entering the loop?\n> >>\n> >> No. What I meant was to keep the current code as-is and have an\n> >> additional check on RecentFlushPtr before entering the loop.\n>\n> Okay, but is that really useful?\n>\n\nI think so. It will be useful in cases where the WAL is already\nflushed by the WALWriter in the meantime.\n\n> > I noticed that the \"return\" at the bottom of the function does a\n> > SetLatch(), but the other returns do not. Isn't that a bug?\n>\n> I don't think that it is necessary to set the latch in the first check\n> as in this case WalSndWaitForWal() would have gone through its loop to\n> set RecentFlushPtr to the last position available already, which would\n> have already set the latch. If you add an extra check based on (loc\n> <= RecentFlushPtr) as your patch does, then you need to set the\n> latch appropriately before returning.\n>\n\nThis point makes sense to me.\n\n> Anyway, I don't think that there is any reason to do this extra work\n> at the beginning of the routine before entering the loop. But there\n> is an extra reason not to do that: your patch would prevent more pings\n> to be sent, which means less flush LSN updates. If you think that\n> the extra check makes sense, then I think that the patch should at\n> least clearly document why it is done this way, and why it makes\n> sense to do so.\n>\n\nI also think adding a comment there would be good.\n\n>\n> > Also, what's up with those useless returns?\n>\n> Yes, let's rip them out.\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 12 Nov 2019 09:45:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-11 13:53:40 -0300, Alvaro Herrera wrote:\n> On 2019-Nov-11, Amit Kapila wrote:\n> \n> > On Mon, Nov 11, 2019 at 7:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n> \n> > > So your suggestion would be to call GetFlushRecPtr() before the first\n> > > check on RecentFlushPtr and before entering the loop?\n> > \n> > No. What I meant was to keep the current code as-is and have an\n> > additional check on RecentFlushPtr before entering the loop.\n> \n> I noticed that the \"return\" at the bottom of the function does a\n> SetLatch(), but the other returns do not. Isn't that a bug?\n\nI don't think it is - We never reset the latch in that case. I don't see\nwhat we'd gain from setting it explicitly, other than unnecessarily\nperforming more work?\n\n\n> \t/*\n> \t * Fast path to avoid acquiring the spinlock in case we already know we\n> \t * have enough WAL available. This is particularly interesting if we're\n> \t * far behind.\n> \t */\n> \tif (RecentFlushPtr != InvalidXLogRecPtr &&\n> \t\tloc <= RecentFlushPtr)\n> +\t{\n> +\t\tSetLatch(MyLatch);\n> \t\treturn RecentFlushPtr;\n> +\t}\n\nI.e. let's not do this.\n\n\n> \t/* Get a more recent flush pointer. */\n> \tif (!RecoveryInProgress())\n> \t\tRecentFlushPtr = GetFlushRecPtr();\n> \telse\n> \t\tRecentFlushPtr = GetXLogReplayRecPtr(NULL);\n> \n> +\tif (loc <= RecentFlushPtr)\n> +\t{\n> +\t\tSetLatch(MyLatch);\n> +\t\treturn RecentFlushPtr;\n> +\t}\n\nHm. I'm doubtful this is a good idea - it essentially means we'd not\ncheck for interrupts, protocol replies, etc. for an unbounded amount of\ntime. 
Whereas the existing fast-path does so for a bounded - although\nnot necessarily short - amount of time.\n\nIt seems to me it'd be better to just remove the \"get a more recent\nflush pointer\" block - it doesn't seem to currently serve a meaningful\npurpose.\n\n\n> \tfor (;;)\n> \t{\n> \t\tlong\t\tsleeptime;\n> \n> \t\t/* Clear any already-pending wakeups */\n> \t\tResetLatch(MyLatch);\n> \n> @@ -2267,15 +2276,14 @@ WalSndLoop(WalSndSendDataCallback send_data)\n> \n> \t\t\t/* Sleep until something happens or we time out */\n> \t\t\t(void) WaitLatchOrSocket(MyLatch, wakeEvents,\n> \t\t\t\t\t\t\t\t\t MyProcPort->sock, sleeptime,\n> \t\t\t\t\t\t\t\t\t WAIT_EVENT_WAL_SENDER_MAIN);\n> \t\t}\n> \t}\n> -\treturn;\n> }\n\nHaving dug into history, the reason this exists is that there used to be\nthe following below the return:\n\n-\n-send_failure:\n-\n- /*\n- * Get here on send failure. Clean up and exit.\n- *\n- * Reset whereToSendOutput to prevent ereport from attempting to send any\n- * more messages to the standby.\n- */\n- if (whereToSendOutput == DestRemote)\n- whereToSendOutput = DestNone;\n-\n- proc_exit(0);\n- abort(); /* keep the compiler quiet */\n\nbut when 5a991ef8692ed (Allow logical decoding via the walsender\ninterface) moved the shutdown code into its own function,\nWalSndShutdown(), we left the returns in place.\n\n\nWe still have the curious\n\tproc_exit(0);\n\tabort();\t\t\t\t\t/* keep the compiler quiet */\n\npattern in WalSndShutdown() - wouldn't the right approach instead be to mark\nproc_exit() with pg_attribute_noreturn()?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Nov 2019 11:27:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 11:27:16AM -0800, Andres Freund wrote:\n> It seems to me it'd be better to just remove the \"get a more recent\n> flush pointer\" block - it doesn't seem to currently serve a meaningful\n> purpose.\n\n+1. That was actually my suggestion upthread :)\n--\nMichael",
"msg_date": "Wed, 13 Nov 2019 16:34:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
},
{
"msg_contents": "Ah, my mistake.\n\nAt Wed, 13 Nov 2019 16:34:49 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Nov 12, 2019 at 11:27:16AM -0800, Andres Freund wrote:\n> > It seems to me it'd be better to just remove the \"get a more recent\n> > flush pointer\" block - it doesn't seem to currently serve a meaningful\n> > purpose.\n> \n> +1. That was actually my suggestion upthread :)\n\nActually it is useless as it is. But the code still seems to me an\nincomplete fast path (one that lacks an immediate return after it) for the\ncase where just one call to GetFlushRecPtr advancing RecentFlushPtr is\nenough.\n\nHowever, I'm not confident that removing the (intended) fast path\nimpacts performance significantly. So I don't object to removing it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 13 Nov 2019 17:18:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 12:57 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-11-11 13:53:40 -0300, Alvaro Herrera wrote:\n>\n> > /* Get a more recent flush pointer. */\n> > if (!RecoveryInProgress())\n> > RecentFlushPtr = GetFlushRecPtr();\n> > else\n> > RecentFlushPtr = GetXLogReplayRecPtr(NULL);\n> >\n> > + if (loc <= RecentFlushPtr)\n> > + {\n> > + SetLatch(MyLatch);\n> > + return RecentFlushPtr;\n> > + }\n>\n> Hm. I'm doubtful this is a good idea - it essentially means we'd not\n> check for interrupts, protocol replies, etc. for an unbounded amount of\n> time.\n>\n\nI think this function (WalSndWaitForWal) will be called from\nWalSndLoop which checks for interrupts and protocol replies, so it\nmight not miss checking those things in that context. In which case\nwill it miss checking those things for an unbounded amount of time?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 13 Nov 2019 14:21:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
},
{
"msg_contents": "At Wed, 13 Nov 2019 14:21:13 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Wed, Nov 13, 2019 at 12:57 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2019-11-11 13:53:40 -0300, Alvaro Herrera wrote:\n> >\n> > > /* Get a more recent flush pointer. */\n> > > if (!RecoveryInProgress())\n> > > RecentFlushPtr = GetFlushRecPtr();\n> > > else\n> > > RecentFlushPtr = GetXLogReplayRecPtr(NULL);\n> > >\n> > > + if (loc <= RecentFlushPtr)\n> > > + {\n> > > + SetLatch(MyLatch);\n> > > + return RecentFlushPtr;\n> > > + }\n> >\n> > Hm. I'm doubtful this is a good idea - it essentially means we'd not\n> > check for interrupts, protocol replies, etc. for an unbounded amount of\n> > time.\n> >\n> \n> I think this function (WalSndWaitForWal) will be called from\n> WalSndLoop which checks for interrupts and protocol replies, so it\n> might not miss checking those things in that context. In which case\n> will it miss checking those things for an unbounded amount of time?\n\nI think so for the first part, but I'm not sure for the second. But it\nshould be avoided if it can happen.\n\n# the walreader's callback structure makes such things less clear :p\n\nI remember that there was a since-fixed bug where logical replication code\nfailed to send a reply for longer than the timeout on a very fast\nconnection, running through a fast path without checking the need for\na reply. I couldn't find where it is, though..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 14 Nov 2019 17:14:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
},
{
"msg_contents": "On 2019-Nov-12, Andres Freund wrote:\n\n> We still have the curious\n> \tproc_exit(0);\n> \tabort();\t\t\t\t\t/* keep the compiler quiet */\n> \n> pattern in WalSndShutdown() - wouldn't the right approach instead be to mark\n> proc_exit() with pg_attribute_noreturn()?\n\nproc_exit() is already marked noreturn ... and has been marked as such\nsince commit eeece9e60984 (2012), which is the same commit that added abort()\nafter some proc_exit calls as well as other routines that were already\nmarked noreturn, such as WalSenderMain(). However, back then we were\nusing the GCC-specific notation of __attribute__((noreturn)), so perhaps\nthe reason we kept the abort() (and a few breaks, etc) after proc_exit()\nwas to satisfy compilers other than GCC.\n\nIn modern times, we define pg_attribute_noreturn() like this:\n\n/* GCC, Sunpro and XLC support aligned, packed and noreturn */\n#if defined(__GNUC__) || defined(__SUNPRO_C) || defined(__IBMC__)\n#define pg_attribute_noreturn() __attribute__((noreturn))\n#define HAVE_PG_ATTRIBUTE_NORETURN 1\n#else\n#define pg_attribute_noreturn()\n#endif\n\nI suppose this will cause warnings in compilers other than those, but\nI'm not sure if we care. What about MSVC for example?\n\nWith the attached patch, everything compiles cleanly in my setup, no\nwarnings, but then it's GCC.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 9 Jan 2020 16:29:36 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> In modern times, we define pg_attribute_noreturn() like this:\n\n> /* GCC, Sunpro and XLC support aligned, packed and noreturn */\n> #if defined(__GNUC__) || defined(__SUNPRO_C) || defined(__IBMC__)\n> #define pg_attribute_noreturn() __attribute__((noreturn))\n> #define HAVE_PG_ATTRIBUTE_NORETURN 1\n> #else\n> #define pg_attribute_noreturn()\n> #endif\n\n> I suppose this will cause warnings in compilers other than those, but\n> I'm not sure if we care. What about MSVC for example?\n\nYeah, the lack of coverage for MSVC seems like the main reason not\nto assume this works \"everywhere of interest\".\n\n> With the attached patch, everything compiles cleanly in my setup, no\n> warnings, but then it's GCC.\n\nMeh ... I'm not really convinced that any of those changes are\nimprovements. Particularly not the removals of switch-case breaks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 14:56:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
},
{
"msg_contents": "On 2020-Jan-09, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > In modern times, we define pg_attribute_noreturn() like this:\n> \n> > /* GCC, Sunpro and XLC support aligned, packed and noreturn */\n> > #if defined(__GNUC__) || defined(__SUNPRO_C) || defined(__IBMC__)\n> > #define pg_attribute_noreturn() __attribute__((noreturn))\n> > #define HAVE_PG_ATTRIBUTE_NORETURN 1\n> > #else\n> > #define pg_attribute_noreturn()\n> > #endif\n> \n> > I suppose this will cause warnings in compilers other than those, but\n> > I'm not sure if we care. What about MSVC for example?\n> \n> Yeah, the lack of coverage for MSVC seems like the main reason not\n> to assume this works \"everywhere of interest\".\n\nThat would easy to add as __declspec(noreturn) ... except that in MSVC\nthe decoration goes *before* the prototype rather after it, so this\nseems difficult to achieve without invasive surgery.\nhttps://docs.microsoft.com/en-us/cpp/cpp/noreturn?view=vs-2015\n\n> > With the attached patch, everything compiles cleanly in my setup, no\n> > warnings, but then it's GCC.\n> \n> Meh ... I'm not really convinced that any of those changes are\n> improvements. Particularly not the removals of switch-case breaks.\n\nHowever, we already have a large number of proc_exit() calls in switch\nblocks that are not followed by breaks. In fact, the majority are\nalready like that.\n\nI can easily leave this well enough alone, though.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 9 Jan 2020 17:19:11 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> However, we already have a large number of proc_exit() calls in switch\n> blocks that are not followed by breaks. In fact, the majority are\n> already like that.\n\nOh, hmm ... consistency is good.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 15:58:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Coding in WalSndWaitForWal"
}
] |
[
{
"msg_contents": "While composing the release note entry for commits 8d48e6a72 et al\n(handle recursive type dependencies while checking for unsupported\ntypes in pg_upgrade), I realized that there's a huge hole in\npg_upgrade's test for such cases. It looks for domains containing\nthe unsupported type, and for composites containing it, but not\nfor arrays or ranges containing it. It's definitely possible to\ncreate tables containing arrays of lines, or arrays of composites\ncontaining line, etc etc. A range over line is harder for lack of\na btree opclass, but a range over sql_identifier is possible.\n\nThe attached patches fix this. 0001 refactors the code in question\nso that we have only one copy not three-and-growing. The only\ndifference between the three copies was that one case didn't bother\nto search indexes, but I judged that that wasn't an optimization we\nneed to preserve. (Note: this patch is shown with --ignore-space-change\nto make it more reviewable, but I did re-pgindent the code.) Then\n0002 actually adds the array and range cases.\n\nAlthough this is a really straightforward patch and I've tested it\nagainst appropriate old versions (9.1 and 9.2), I'm very hesitant\nto shove it in so soon before a release wrap. Should I do that, or\nlet it wait till after the wrap?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 10 Nov 2019 14:07:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pg_upgrade fails to detect unsupported arrays and ranges"
},
{
"msg_contents": "> On 10 Nov 2019, at 20:07, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> 0001 refactors the code in question\n> so that we have only one copy not three-and-growing. The only\n> difference between the three copies was that one case didn't bother\n> to search indexes, but I judged that that wasn't an optimization we\n> need to preserve. \n\nA big +1 on this refactoring.\n\n> (Note: this patch is shown with --ignore-space-change\n> to make it more reviewable, but I did re-pgindent the code.) Then\n> 0002 actually adds the array and range cases.\n\nWas the source pgindented, but not committed, before generating the patches? I\nfail to apply them on master (or REL_12_STABLE) on what seems to be only\nwhitespace changes.\n\n> Although this is a really straightforward patch and I've tested it\n> against appropriate old versions (9.1 and 9.2), I'm very hesitant\n> to shove it in so soon before a release wrap. Should I do that, or\n> let it wait till after the wrap?\n\nHaving read the patch I agree that it's trivial enough that I wouldn't be\nworried to let it slip through. However, given that we've lacked the check for\na few releases, is it worth rushing with the potential for a last-minute\n\"oh-shit\"?\n\n> +\t\t/* arrays over any type selected so far */\n> +\t\t\t\t\t\t \"\t\t\tSELECT t.oid FROM pg_catalog.pg_type t, x WHERE typelem = x.oid AND typtype = 'b' \"\n\nNo need to check typlen?\n\ncheers ./daniel\n\n",
"msg_date": "Sun, 10 Nov 2019 22:01:21 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails to detect unsupported arrays and ranges"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 10 Nov 2019, at 20:07, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Although this is a really straightforward patch and I've tested it\n>> against appropriate old versions (9.1 and 9.2), I'm very hesitant\n>> to shove it in so soon before a release wrap. Should I do that, or\n>> let it wait till after the wrap?\n\n> Having read the patch I agree that it's trivial enough that I wouldn't be\n> worried to let it slip through. However, given that we've lacked the check for\n> a few releases, is it worth rushing with the potential for a last-minute\n> \"oh-shit\"?\n\nProbably not, really --- the main argument for that is just that it'd fit\nwell with the fixes Tomas already made.\n\n>> +\t\t/* arrays over any type selected so far */\n>> +\t\t\t\t\t\t \"\t\t\tSELECT t.oid FROM pg_catalog.pg_type t, x WHERE typelem = x.oid AND typtype = 'b' \"\n\n> No need to check typlen?\n\nYeah, that's intentional. A fixed-length array type over a problematic\ntype would be just as much of a problem as a varlena array type.\nThe case shouldn't apply to any of the existing problematic types,\nbut I was striving for generality.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 10 Nov 2019 16:05:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails to detect unsupported arrays and ranges"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 10 Nov 2019, at 20:07, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> (Note: this patch is shown with --ignore-space-change\n>> to make it more reviewable, but I did re-pgindent the code.) Then\n>> 0002 actually adds the array and range cases.\n\n> Was the source pgindented, but not committed, before generating the patches? I\n> fail to apply them on master (or REL_12_STABLE) on what seems to be only\n> whitespace changes.\n\nHm, I suppose it might be hard to apply the combination of the patches\n(maybe something involving patch -l would work). For simplicity, here's\nthe complete patch for HEAD. I fixed a missing schema qualification.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 10 Nov 2019 16:12:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails to detect unsupported arrays and ranges"
},
{
"msg_contents": "> On 10 Nov 2019, at 22:05, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n\n>>> +\t\t/* arrays over any type selected so far */\n>>> +\t\t\t\t\t\t \"\t\t\tSELECT t.oid FROM pg_catalog.pg_type t, x WHERE typelem = x.oid AND typtype = 'b' \"\n> \n>> No need to check typlen?\n> \n> Yeah, that's intentional. A fixed-length array type over a problematic\n> type would be just as much of a problem as a varlena array type.\n> The case shouldn't apply to any of the existing problematic types,\n> but I was striving for generality.\n\nThat makes a lot of sense, thanks for the explanation.\n\ncheers ./daniel\n\n",
"msg_date": "Sun, 10 Nov 2019 23:26:09 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails to detect unsupported arrays and ranges"
},
{
"msg_contents": "> On 10 Nov 2019, at 22:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 10 Nov 2019, at 20:07, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> (Note: this patch is shown with --ignore-space-change\n>>> to make it more reviewable, but I did re-pgindent the code.) Then\n>>> 0002 actually adds the array and range cases.\n> \n>> Was the source pgindented, but not committed, before generating the patches? I\n>> fail to apply them on master (or REL_12_STABLE) on what seems to be only\n>> whitespace changes.\n> \n> Hm, I suppose it might be hard to apply the combination of the patches\n> (maybe something involving patch -l would work). For simplicity, here's\n> the complete patch for HEAD. I fixed a missing schema qualification.\n\nApplies, builds clean and passes light testing. I can see the appeal of\nincluding it before the wrap, even though I personally would've held off.\n\ncheers ./daniel\n\n",
"msg_date": "Sun, 10 Nov 2019 23:39:14 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails to detect unsupported arrays and ranges"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Applies, builds clean and passes light testing.\n\nThanks for checking!\n\n> I can see the appeal of\n> including it before the wrap, even though I personally would've held off.\n\nNah, I'm not gonna risk it at this stage. I concur with your point\nthat this is an ancient bug, and one that is unlikely to bite many\npeople. I'll push it Wednesday or so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 10 Nov 2019 18:06:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails to detect unsupported arrays and ranges"
},
{
"msg_contents": "[ blast-from-the-past department ]\n\nI wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> I can see the appeal of\n>> including it before the wrap, even though I personally would've held off.\n\n> Nah, I'm not gonna risk it at this stage. I concur with your point\n> that this is an ancient bug, and one that is unlikely to bite many\n> people. I'll push it Wednesday or so.\n\nI happened across a couple of further pg_upgrade oversights in the\nsame vein as 29aeda6e4 et al:\n\n* Those commits fixed the bugs in pg_upgrade/version.c about not\nchecking the contents of arrays/ranges/etc, but there are two\nsimilar functions in pg_upgrade/check.c that I failed to notice\n(probably due to the haste with which that patch was prepared).\n\n* We really need to also reject user tables that contain instances\nof system-defined composite types (i.e. catalog rowtypes), because\nexcept for a few bootstrap catalogs, those type OIDs are assigned by\ngenbki.pl not by hand, so they aren't stable across major versions.\nFor example, in HEAD I get\n\nregression=# select 'pg_enum'::regtype::oid;\n oid \n-------\n 13045\n(1 row)\n\nbut the same OID was 12022 in v13, 11551 in v11, etc. So if you\nhad a column of type pg_enum, you'd likely get no-such-type-OID\nfailures when reading the values after an upgrade. I don't see\nmuch use-case for doing such a thing, so it seems saner to just\nblock off the possibility rather than trying to support it.\n(We'd have little choice in the back branches anyway, as their\nOID values are locked down now.)\n\nThe attached proposed patch fixes these cases too. I generalized\nthe recursive query a little more so that it could start from an\narbitrary query yielding pg_type OIDs, rather than just one type\nname; otherwise it's pretty straightforward.\n\nBarring objections I'll apply and back-patch this soon.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 28 Apr 2021 11:09:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails to detect unsupported arrays and ranges"
},
{
"msg_contents": "> On 28 Apr 2021, at 17:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> [ blast-from-the-past department ]\n> \n> I wrote:\n>> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> I can see the appeal of\n>>> including it before the wrap, even though I personally would've held off.\n> \n>> Nah, I'm not gonna risk it at this stage. I concur with your point\n>> that this is an ancient bug, and one that is unlikely to bite many\n>> people. I'll push it Wednesday or so.\n> \n> I happened across a couple of further pg_upgrade oversights in the\n> same vein as 29aeda6e4 et al:\n\nNice find, this makes a lot of sense.\n\n> ..the same OID was 12022 in v13, 11551 in v11, etc. So if you\n> had a column of type pg_enum, you'd likely get no-such-type-OID\n> failures when reading the values after an upgrade. I don't see\n> much use-case for doing such a thing, so it seems saner to just\n> block off the possibility rather than trying to support it.\n\nAgreed. Having implemented basically this for Greenplum I think it’s wise to\navoid it unless we really have to, it gets very complicated once the layers of\nworms are peeled back.\n\n> The attached proposed patch fixes these cases too. I generalized\n> the recursive query a little more so that it could start from an\n> arbitrary query yielding pg_type OIDs, rather than just one type\n> name; otherwise it's pretty straightforward.\n> \n> Barring objections I'll apply and back-patch this soon.\n\nPatch LGTM on reading, +1 on applying. Being on parental leave I don’t have my\ndev env ready to go so I didn’t perform testing; sorry about that.\n\n> +\t\tpg_fatal(\"Your installation contains system-defined composite type(s) in user tables.\\n\"\n> +\t\t\t\t \"These type OIDs are not stable across PostgreSQL versions,\\n\"\n> +\t\t\t\t \"so this cluster cannot currently be upgraded. 
You can\\n\"\n> +\t\t\t\t \"remove the problem tables and restart the upgrade.\\n\"\n> +\t\t\t\t \"A list of the problem columns is in the file:\\n\"\n\nWould it be helpful to inform the user that they can alter/drop just the\nproblematic columns as a potentially less scary alternative to dropping the\nentire table?\n\n> -\t\t * The type of interest might be wrapped in a domain, array,\n> +\t\t * The types of interest might be wrapped in a domain, array,\n\nShouldn't this be \"type(s)” as in the other changes here?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 28 Apr 2021 22:38:28 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails to detect unsupported arrays and ranges"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 28 Apr 2021, at 17:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> +\t\tpg_fatal(\"Your installation contains system-defined composite type(s) in user tables.\\n\"\n>> +\t\t\t\t \"These type OIDs are not stable across PostgreSQL versions,\\n\"\n>> +\t\t\t\t \"so this cluster cannot currently be upgraded. You can\\n\"\n>> +\t\t\t\t \"remove the problem tables and restart the upgrade.\\n\"\n>> +\t\t\t\t \"A list of the problem columns is in the file:\\n\"\n\n> Would it be helpful to inform the user that they can alter/drop just the\n> problematic columns as a potentially less scary alternative to dropping the\n> entire table?\n\nThis wording is copied-and-pasted from the other similar tests. I agree\nthat it's advocating a solution that might be overkill, but if we change\nit we should also change the existing messages. I don't mind doing\nthat in HEAD; less sure about the back branches, as (I think) these\nare translatable strings.\n\nThoughts?\n\n>> -\t\t * The type of interest might be wrapped in a domain, array,\n>> +\t\t * The types of interest might be wrapped in a domain, array,\n\n> Shouldn't this be \"type(s)” as in the other changes here?\n\nFair enough.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Apr 2021 16:47:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails to detect unsupported arrays and ranges"
},
{
"msg_contents": "> On 28 Apr 2021, at 22:47, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 28 Apr 2021, at 17:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> +\t\tpg_fatal(\"Your installation contains system-defined composite type(s) in user tables.\\n\"\n>>> +\t\t\t\t \"These type OIDs are not stable across PostgreSQL versions,\\n\"\n>>> +\t\t\t\t \"so this cluster cannot currently be upgraded. You can\\n\"\n>>> +\t\t\t\t \"remove the problem tables and restart the upgrade.\\n\"\n>>> +\t\t\t\t \"A list of the problem columns is in the file:\\n\"\n> \n>> Would it be helpful to inform the user that they can alter/drop just the\n>> problematic columns as a potentially less scary alternative to dropping the\n>> entire table?\n> \n> This wording is copied-and-pasted from the other similar tests. I agree\n> that it's advocating a solution that might be overkill, but if we change\n> it we should also change the existing messages. \n\nGood point.\n\n> I don't mind doing that in HEAD; less sure about the back branches, as\n\nI think it would be helpful for users to try and give slightly more expanded\nadvice while (obviously) still always being safe. I’m happy to take a crack at\nthat once back unless someone beats me to it.\n\n> (I think) these are translatable strings.\n\n\nIf they aren't I think we should try and make them so to as far as we can\nreduce language barrier problems in such important messages.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 28 Apr 2021 22:58:01 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails to detect unsupported arrays and ranges"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 28 Apr 2021, at 22:47, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This wording is copied-and-pasted from the other similar tests. I agree\n>> that it's advocating a solution that might be overkill, but if we change\n>> it we should also change the existing messages. \n\n> Good point.\n\n>> I don't mind doing that in HEAD; less sure about the back branches, as\n\n> I think it would be helpful for users to try and give slightly more expanded\n> advice while (obviously) still always being safe. I’m happy to take a crack at\n> that once back unless someone beats me to it.\n\nSeems like s/remove the problem tables/drop the problem columns/\nis easy and sufficient.\n\n>> (I think) these are translatable strings.\n\n> If they aren't I think we should try and make them so to as far as we can\n> reduce language barrier problems in such important messages.\n\nChecking, I see they do appear in pg_upgrade's po files. So I propose\nthat we change the existing messages in HEAD but not the back branches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Apr 2021 17:44:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails to detect unsupported arrays and ranges"
}
] |
[
{
"msg_contents": "We have some very wide tables (historically, up to 1600 columns ; this is\nimproved now, but sometimes still several hundred, with numerous pages output\nto psql pager). Is is reasonable to suggest adding a psql command to show a\ntable's definition, without all the columns listed?\n\nOr limit display to matching columns ? That's more general than the above\nfunctionality, if \"empty string\" is taken to mean \"show no columns\", like \\d\ntable \"\" or \\d table *id or \\d table ????\n\nAttached minimal patch for the latter.\n\npostgres=# \\d pg_attribute \"\"\n Table \"pg_catalog.pg_attribute\"\n Column | Type | Collation | Nullable | Default \n--------+------+-----------+----------+---------\nIndexes:\n \"pg_attribute_relid_attnam_index\" UNIQUE, btree (attrelid, attname)\n \"pg_attribute_relid_attnum_index\" UNIQUE, btree (attrelid, attnum)\n\npostgres=# \\d pg_attribute \"attn*|attrel*\"\n Table \"pg_catalog.pg_attribute\"\n Column | Type | Collation | Nullable | Default \n------------+----------+-----------+----------+---------\n attrelid | oid | | not null | \n attname | name | | not null | \n attnum | smallint | | not null | \n attndims | integer | | not null | \n attnotnull | boolean | | not null | \nIndexes:\n \"pg_attribute_relid_attnam_index\" UNIQUE, btree (attrelid, attname)\n \"pg_attribute_relid_attnum_index\" UNIQUE, btree (attrelid, attnum)\n\npostgres=# \\d pg_attribute ??????\n Table \"pg_catalog.pg_attribute\"\n Column | Type | Collation | Nullable | Default \n--------+-----------+-----------+----------+---------\n attlen | smallint | | not null | \n attnum | smallint | | not null | \n attacl | aclitem[] | | | \nIndexes:\n \"pg_attribute_relid_attnam_index\" UNIQUE, btree (attrelid, attname)\n \"pg_attribute_relid_attnum_index\" UNIQUE, btree (attrelid, attnum)\n\npostgres=# \\d pg_attribute *id\n Table \"pg_catalog.pg_attribute\"\n Column | Type | Collation | Nullable | Default 
\n----------+------+-----------+----------+---------\n attrelid | oid | | not null | \n atttypid | oid | | not null | \nIndexes:\n \"pg_attribute_relid_attnam_index\" UNIQUE, btree (attrelid, attname)\n \"pg_attribute_relid_attnum_index\" UNIQUE, btree (attrelid, attnum)",
"msg_date": "Sun, 10 Nov 2019 15:29:28 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "psql \\d for wide tables / pattern for individual columns"
},
{
"msg_contents": "Em dom., 10 de nov. de 2019 às 18:29, Justin Pryzby\n<pryzby@telsasoft.com> escreveu:\n>\n> We have some very wide tables (historically, up to 1600 columns ; this is\n> improved now, but sometimes still several hundred, with numerous pages output\n> to psql pager). Is is reasonable to suggest adding a psql command to show a\n> table's definition, without all the columns listed?\n>\nIt seems a good idea. However, I'm afraid adding a second argument\ncould limit our capabilities to match/suppress other table properties\nin the future. For example, I think psql might have a way to omit\nindexes, FKs, partitions, some column properties, or even show GRANTs\nfor that table. I don't have a concrete plan at the moment but maybe\nsomeone else already thought about it.\n\n> Or limit display to matching columns ? That's more general than the above\n> functionality, if \"empty string\" is taken to mean \"show no columns\", like \\d\n> table \"\" or \\d table *id or \\d table ????\n>\n> Attached minimal patch for the latter.\n>\n> postgres=# \\d pg_attribute \"\"\n> Table \"pg_catalog.pg_attribute\"\n> Column | Type | Collation | Nullable | Default\n> --------+------+-----------+----------+---------\n> Indexes:\n> \"pg_attribute_relid_attnam_index\" UNIQUE, btree (attrelid, attname)\n> \"pg_attribute_relid_attnum_index\" UNIQUE, btree (attrelid, attnum)\n>\nThe problem with your proposal is that I can't differentiate a\ncomplete output from another suppress-some-columns output if you don't\nprovide the meta-command. I think you should explicitly show that some\ncolumns were suppressed (something like \"... suppressed columns...\"\nafter the list of matched columns). If you don't, it could lead to\nconfusion while reporting table description.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n",
"msg_date": "Sun, 10 Nov 2019 22:01:04 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: psql \\d for wide tables / pattern for individual columns"
},
{
"msg_contents": "Euler Taveira <euler@timbira.com.br> writes:\n> Em dom., 10 de nov. de 2019 às 18:29, Justin Pryzby\n> <pryzby@telsasoft.com> escreveu:\n>> We have some very wide tables (historically, up to 1600 columns ; this is\n>> improved now, but sometimes still several hundred, with numerous pages output\n>> to psql pager). Is is reasonable to suggest adding a psql command to show a\n>> table's definition, without all the columns listed?\n\n> It seems a good idea. However, I'm afraid adding a second argument\n> could limit our capabilities to match/suppress other table properties\n> in the future.\n\nYeah, that was my immediate reaction to the proposed syntax as well.\nI think we'd better make sure that we aren't foreclosing future\nextensions of \\d.\n\nMaybe a reasonable idea is to expect that any additional arguments\nare in \"keyword=value\" style, so that the immediate need could be\nmet with\n\n\\d mytable columns=<pattern>\n\nIt might already be worthwhile to allow both positive and negative\npatterns, so also\n\n\\d mytable exclude_columns=<pattern>\n\n> The problem with your proposal is that I can't differentiate a\n> complete output from another suppress-some-columns output if you don't\n> provide the meta-command. I think you should explicitly show that some\n> columns were suppressed (something like \"... suppressed columns...\"\n> after the list of matched columns). If you don't, it could lead to\n> confusion while reporting table description.\n\nHm ... \"N columns suppressed\" might sometimes be useful, but I'm afraid\nit would take an extra query to get it, and I'm not sure it's worth it.\nI think someone who's using these options would already know perfectly\nwell what they're hiding. We don't expect, say, \"\\dt my*\" to tell you\nhow many tables it didn't list.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 11 Nov 2019 10:46:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql \\d for wide tables / pattern for individual columns"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nTable AM routine already provided two custom functions to fetch sample\nblocks and sample tuples,\nhowever, the total blocks the ANALYZE can scan are still restricted to the\nnumber of physical blocks\nin a table, this doesn't work well for storages which organize blocks in\ndifferent ways than the heap.\n\nHere is proposing to add a new method named scan_analyze_total_blocks() to\nprovide more flexibility,\nit can return physical or logical blocks number which depends on how the\ntable AM implement\nscan_analyze_next_block() and scan_analyze_next_tuple().",
"msg_date": "Mon, 11 Nov 2019 16:21:31 +0800",
"msg_from": "Pengzhou Tang <ptang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Extend Table AM routine to get total blocks can be analyzed"
}
] |
[
{
"msg_contents": "Hi hackers.\n\nI made a patch fixing build and install problems under MSYS2, including\nllvmjit.\n\nI have tested this in my environment and it works, of course need more\nextensive testing.\nAttached is a patch that fixes it. Tag REL_11_5. Easy to adapt for other\nversions.\n\n-- \nBest regards.\nGuram Duka.",
"msg_date": "Mon, 11 Nov 2019 14:01:29 +0300",
"msg_from": "=?UTF-8?B?0JPRg9GA0LDQvCDQlNGD0LrQsA==?= <guram.duka@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Fix PostgreSQL server build and install problems under MSYS2"
},
{
"msg_contents": "=?UTF-8?B?0JPRg9GA0LDQvCDQlNGD0LrQsA==?= <guram.duka@gmail.com> writes:\n> I made a patch fixing build and install problems under MSYS2, including\n> llvmjit.\n\nThis seems like it probably breaks a lot of other cases along the way.\nWhy have you made all these #if tests dependent on defined(__cplusplus)?\nThat's surely not specific to MSYS2. (I'm a bit bemused by the idea\nthat our code compiles at all on a C++ compiler; we have not tried\nto make the .c files C++-clean. But if it does work, this probably\nbreaks it for non-Windows cases.)\n\nThe GSSAPI changes seem like they might be better considered\nseparately from the basic problem of getting a working MSYS2 build.\n\nIn any case, you need to explain these changes individually,\nnot expect that we're just going to adopt them without questions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 11 Nov 2019 10:56:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix PostgreSQL server build and install problems under\n MSYS2"
},
{
"msg_contents": "On Mon, Nov 11, 2019 at 10:56:51AM -0500, Tom Lane wrote:\n> This seems like it probably breaks a lot of other cases along the way.\n> Why have you made all these #if tests dependent on defined(__cplusplus)?\n> That's surely not specific to MSYS2. (I'm a bit bemused by the idea\n> that our code compiles at all on a C++ compiler; we have not tried\n> to make the .c files C++-clean. But if it does work, this probably\n> breaks it for non-Windows cases.)\n> \n> The GSSAPI changes seem like they might be better considered\n> separately from the basic problem of getting a working MSYS2 build.\n\nYeah. We have fairywen in the buildfarm but it does not compile with\nGSSAPI and LLVM. If we were to do something, it would be better to\nseparate those changes into minimum, separate patches, with one each\nfor each library dependency you are trying to fix so as they can be\nevaluated separately. We should also have a buildfarm member on that\nif some of those changes actually make sense and are\nplatform-dependent.\n\n> In any case, you need to explain these changes individually,\n> not expect that we're just going to adopt them without questions.\n\nI am doubtful that the changes in c.h, elog.h, win32_port.h,\nmiscadmin.h and src/backend/jit/llvm/Makefile are actually needed\nthanks to fairywen.\n\n+#if defined(HAVE_GSS_API_H) && !defined(GSS_DLLIMP)\n+static gss_OID_desc GSS_C_NT_USER_NAME_desc =\n+{10, (void *) \"\\x2a\\x86\\x48\\x86\\xf7\\x12\\x01\\x02\\x01\\x02\"};\nThis also deserves an explanation.\n--\nMichael",
"msg_date": "Tue, 12 Nov 2019 12:50:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix PostgreSQL server build and install problems under\n MSYS2"
},
{
"msg_contents": "Thank you for your comments.\n\n1. The #if ... defined(__ cplusplus) is necessary for the successful\ncompilation of C++ llvmjit code that uses C headers.\n2. #if defined(HAVE_GSS_API_H) && !defined(GSS_DLLIMP) is necessary for the\nsuccessful link libgss.a provided by MSYS2.\n3. Remember that you need to run autoreconf before running configure.\n4. Found and fixed a small bug in the patch. Build in environment CentOS\n7.7.1908 works. Build in environment MSYS2 gcc 9.2.0 works. Build in\nenvironment Visual Studio Community 2019 works.\n\nThe second version of the patch attached.\nWaiting for your comments.\n\n-- \nBest regards.\nGuram Duka.",
"msg_date": "Tue, 12 Nov 2019 09:03:37 +0300",
"msg_from": "Guram Duka <guram.duka@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix PostgreSQL server build and install problems under\n MSYS2"
}
] |
[
{
"msg_contents": "Hello. While looking at a patch, I found that PHJ sometimes complains about\nfile leaks if accompanied by LIMIT.\n\nRepro is very simple:\n\ncreate table t as (select a, a as b from generate_series(0, 999999) a);\nanalyze t;\nselect t.a from t join t t2 on (t.a = t2.a) limit 1;\n\nOnce in several (or a dozen) executions, the last query\ncomplains as follows.\n\nWARNING: temporary file leak: File 15 still referenced\nWARNING: temporary file leak: File 17 still referenced\n\nThis is using PHJ and the leaked file was a shared tuplestore for\nouter tuples, which was opened by sts_parallel_scan_next() called from\nExecParallelHashJoinOuterGetTuple(). It seems to me that\nExecHashTableDestroy is forgetting to release shared tuplestore\naccessors. Please find the attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 11 Nov 2019 21:24:18 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "PHJ file leak."
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 1:24 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Hello. While looking a patch, I found that PHJ sometimes complains for\n> file leaks if accompanied by LIMIT.\n\nOops.\n\n> Repro is very simple:\n>\n> create table t as (select a, a as b from generate_series(0, 999999) a);\n> analyze t;\n> select t.a from t join t t2 on (t.a = t2.a) limit 1;\n>\n> Once in several (or dozen of) times execution of the last query\n> complains as follows.\n>\n> WARNING: temporary file leak: File 15 still referenced\n> WARNING: temporary file leak: File 17 still referenced\n\nAck. Reproduced here.\n\n> This is using PHJ and the leaked file was a shared tuplestore for\n> outer tuples, which was opend by sts_parallel_scan_next() called from\n> ExecParallelHashJoinOuterGetTuple(). It seems to me that\n> ExecHashTableDestroy is forgeting to release shared tuplestore\n> accessors. Please find the attached.\n\nThanks for the patch! Yeah, this seems correct, but I'd like to think\nabout it some more before committing. I'm going to be a bit tied up\nwith travel so that might be next week.\n\n\n",
"msg_date": "Tue, 12 Nov 2019 11:18:39 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PHJ file leak."
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Nov 12, 2019 at 1:24 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> Hello. While looking a patch, I found that PHJ sometimes complains for\n>> file leaks if accompanied by LIMIT.\n\n> Thanks for the patch! Yeah, this seems correct, but I'd like to think\n> about it some more before committing. I'm going to be a bit tied up\n> with travel so that might be next week.\n\nAt this point we've missed the window for this week's releases, so\nthere's no great hurry (and it'd be best not to push any noncritical\npatches into the back branches anyway, for the next 24 hours).\n\nAlthough the patch seems unobjectionable as far as it goes, I'd like\nto understand why we didn't see the need for it long since. Is there\nanother call to ExecParallelHashCloseBatchAccessors somewhere, and\nif so, is that one wrongly placed?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 11 Nov 2019 17:24:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PHJ file leak."
},
{
"msg_contents": "At Mon, 11 Nov 2019 17:24:45 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Tue, Nov 12, 2019 at 1:24 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >> Hello. While looking a patch, I found that PHJ sometimes complains for\n> >> file leaks if accompanied by LIMIT.\n> \n> > Thanks for the patch! Yeah, this seems correct, but I'd like to think\n> > about it some more before committing. I'm going to be a bit tied up\n> > with travel so that might be next week.\n> \n> At this point we've missed the window for this week's releases, so\n> there's no great hurry (and it'd be best not to push any noncritical\n> patches into the back branches anyway, for the next 24 hours).\n> \n> Although the patch seems unobjectionable as far as it goes, I'd like\n> to understand why we didn't see the need for it long since. Is there\n> another call to ExecParallelHashCloseBatchAccessors somewhere, and\n> if so, is that one wrongly placed?\n\nIt's a simple race condition between the leader and workers.\n\nIf a scan on a worker has reached the end, no batch file is open, but a\nbatch file is open if it hasn't reached the end.\n\nIf a worker notices that the channel to the leader is already closed\nbefore reaching the limit, it calls ExecEndNode and doesn't call\nExecShutdownNode. Otherwise, if the worker finds the limit first, the\nworker *shuts down* itself and then ends. Everything's clean.\n\nOn second thought, it seems an issue of ExecutePlan, rather than PHJ\nitself. ExecutePlan should shut down nodes when the output channel is\nbroken.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 12 Nov 2019 12:11:00 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PHJ file leak."
},
{
"msg_contents": "At Mon, 11 Nov 2019 17:24:45 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Although the patch seems unobjectionable as far as it goes, I'd like\n> to understand why we didn't see the need for it long since. Is there\n> another call to ExecParallelHashCloseBatchAccessors somewhere, and\n> if so, is that one wrongly placed?\n\nThe previous patch would be wrong. The root cause is an open batch, so\nthe right thing to do at scan end is\nExecHashTableDetachBatch. And the real issue here seems to be in\nExecutePlan, not in PHJ.\n\nregards\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 12 Nov 2019 12:19:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PHJ file leak."
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 4:20 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Mon, 11 Nov 2019 17:24:45 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > Although the patch seems unobjectionable as far as it goes, I'd like\n> > to understand why we didn't see the need for it long since. Is there\n> > another call to ExecParallelHashCloseBatchAccessors somewhere, and\n> > if so, is that one wrongly placed?\n>\n> The previous patch would be wrong. The root cause is a open batch so\n> the right thing to be done at scan end is\n> ExecHashTableDeatchBatch. And the real issue here seems to be in\n> ExecutePlan, not in PHJ.\n\nYou are right. Here is the email I just wrote that says the same\nthing, but with less efficiency:\n\nOn Tue, Nov 12, 2019 at 11:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Tue, Nov 12, 2019 at 1:24 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >> Hello. While looking a patch, I found that PHJ sometimes complains for\n> >> file leaks if accompanied by LIMIT.\n>\n> > Thanks for the patch! Yeah, this seems correct, but I'd like to think\n> > about it some more before committing. I'm going to be a bit tied up\n> > with travel so that might be next week.\n>\n> At this point we've missed the window for this week's releases, so\n> there's no great hurry (and it'd be best not to push any noncritical\n> patches into the back branches anyway, for the next 24 hours).\n>\n> Although the patch seems unobjectionable as far as it goes, I'd like\n> to understand why we didn't see the need for it long since. Is there\n> another call to ExecParallelHashCloseBatchAccessors somewhere, and\n> if so, is that one wrongly placed?\n\nI'll need to look into this some more in a few days, but here's a\npartial analysis:\n\nThe usual way that these files get closed is by\nsts_end_parallel_scan(). 
For the particular file in question here --\nan outer batch file that is open for reading while we probe -- that\nusually happens at the end of the per-batch probe phase before we try\nto move to another batch or reach end of data. In case of an early\nend, it also happens in ExecShutdownHashJoin(), which detaches from\nand cleans up shared resources, which includes closing these files.\n\nThere are a few ways that ExecShutdownNode() can be reached:\n\n1. ExecLimit() (which we now suspect to be bogus -- see nearby\nunfinished business[1]), though that only happens in the leader in\nthis case\n2. ExecutePlan() on end-of-data.\n3. ExecutePlan() on reaching the requested tuple count.\n\nUnfortunately those aren't the only ways out of ExecutePlan()'s loop,\nand that may be a problem. I think what's happening in this case is\nthat there is a race where dest->receiveSlot(slot, dest) returns false\nbecause the leader has stopped receiving tuples (having received\nenough tuples to satisfy the LIMIT) so we exit early, but that path\nout of the loop doesn't run ExecShutdownNode(). EXEC_FLAG_BACKWARD\nwould also inhibit it, but that shouldn't be set in a parallel plan.\n\nSo, after thinking about it, I'm not sure the proposed patch is\nconceptually sound (even if it seems to work), because\nExecHashTableDestroy() runs at 'end' time and that's after shared\nmemory has disappeared, so it shouldn't be doing shared\nresource-related cleanup work, whereas\nExecParallelHashCloseBatchAccessors() relates to shared resources\n(for example, it calls sts_end_write(), which should never actually do\nanything at this time but it is potentially a shm-updating routine,\nwhich seems wrong to me).\n\nRecommending a change is going to require more brainpower than I have\nspare today due to other commitments. ExecShutdownNode() is certainly\na bit tricky.\n\n[1] https://www.postgresql.org/message-id/flat/87ims2amh6.fsf%40jsievers.enova.com\n\n\n",
"msg_date": "Tue, 12 Nov 2019 16:23:46 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PHJ file leak."
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 4:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Nov 12, 2019 at 4:20 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Mon, 11 Nov 2019 17:24:45 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > > Although the patch seems unobjectionable as far as it goes, I'd like\n> > > to understand why we didn't see the need for it long since. Is there\n> > > another call to ExecParallelHashCloseBatchAccessors somewhere, and\n> > > if so, is that one wrongly placed?\n> >\n> > The previous patch would be wrong. The root cause is a open batch so\n> > the right thing to be done at scan end is\n> > ExecHashTableDeatchBatch. And the real issue here seems to be in\n> > ExecutePlan, not in PHJ.\n>\n> You are right. Here is the email I just wrote that says the same\n> thing, but with less efficiency:\n\nAnd yeah, your Make_parallel_shutdown_on_broken_channel.patch seems\nlike the real fix here. It's not optional to run that at\nend-of-query, though you might get that impression from various\ncomments, and it's also not OK to call it before the end of the query,\nthough you might get that impression from what the code actually does.\nCC'ing to Robert who designed ExecShutdownNode(), though admittedly\nthe matter might have been theoretical until PHJ came along (since the\nonly other executor node that implements ExecShutdownNode() was\nGather, and you can't have a Gather under a Gather, and if this\nhappens to you in the leader process then the user is gone and no one\nwill hear the screams).\n\n\n",
"msg_date": "Tue, 12 Nov 2019 17:03:21 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PHJ file leak."
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 5:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Nov 12, 2019 at 4:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Tue, Nov 12, 2019 at 4:20 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > The previous patch would be wrong. The root cause is a open batch so\n> > > the right thing to be done at scan end is\n> > > ExecHashTableDeatchBatch. And the real issue here seems to be in\n> > > ExecutePlan, not in PHJ.\n> >\n> > You are right. Here is the email I just wrote that says the same\n> > thing, but with less efficiency:\n>\n> And yeah, your Make_parallel_shutdown_on_broken_channel.patch seems\n> like the real fix here. It's not optional to run that at\n> end-of-query, though you might get that impression from various\n> comments, and it's also not OK to call it before the end of the query,\n> though you might get that impression from what the code actually does.\n\nHere's the version I'd like to commit in a day or two, once the dust\nhas settled on the minor release. Instead of adding yet another copy\nof that code, I just moved it out of the loop; this way there is no\nway to miss it. I think the comment could also be better, but I'll\nwait for the concurrent discussions about the meaning of\nExecShutdownNode() to fix that in master.",
"msg_date": "Wed, 13 Nov 2019 09:48:19 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PHJ file leak."
},
{
"msg_contents": "At Wed, 13 Nov 2019 09:48:19 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Tue, Nov 12, 2019 at 5:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Tue, Nov 12, 2019 at 4:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > On Tue, Nov 12, 2019 at 4:20 PM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > > The previous patch would be wrong. The root cause is a open batch so\n> > > > the right thing to be done at scan end is\n> > > > ExecHashTableDeatchBatch. And the real issue here seems to be in\n> > > > ExecutePlan, not in PHJ.\n> > >\n> > > You are right. Here is the email I just wrote that says the same\n> > > thing, but with less efficiency:\n> >\n> > And yeah, your Make_parallel_shutdown_on_broken_channel.patch seems\n> > like the real fix here. It's not optional to run that at\n> > end-of-query, though you might get that impression from various\n> > comments, and it's also not OK to call it before the end of the query,\n> > though you might get that impression from what the code actually does.\n> \n> Here's the version I'd like to commit in a day or two, once the dust\n> has settled on the minor release. Instead of adding yet another copy\n> of that code, I just moved it out of the loop; this way there is no\n> way to miss it. I think the comment could also be better, but I'll\n> wait for the concurrent discussions about the meaning of\n> ExecShutdownNode() to fix that in master.\n\nThe phatch's shape looks better. Thanks.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 13 Nov 2019 09:42:43 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PHJ file leak."
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 6:13 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 13 Nov 2019 09:48:19 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in\n> >\n> > Here's the version I'd like to commit in a day or two, once the dust\n> > has settled on the minor release. Instead of adding yet another copy\n> > of that code, I just moved it out of the loop; this way there is no\n> > way to miss it. I think the comment could also be better, but I'll\n> > wait for the concurrent discussions about the meaning of\n> > ExecShutdownNode() to fix that in master.\n>\n> The phatch's shape looks better. Thanks.\n>\n\n+1. LGTM as well.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 13 Nov 2019 14:22:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PHJ file leak."
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 9:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Wed, Nov 13, 2019 at 6:13 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > The phatch's shape looks better. Thanks.\n>\n> +1. LGTM as well.\n\nThanks. Pushed.\n\n\n",
"msg_date": "Sat, 16 Nov 2019 10:32:14 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PHJ file leak."
},
{
"msg_contents": "On Fri, Nov 15, 2019 at 3:32 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Wed, Nov 13, 2019 at 9:52 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > On Wed, Nov 13, 2019 at 6:13 AM Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> wrote:\n> > > The phatch's shape looks better. Thanks.\n> >\n> > +1. LGTM as well.\n>\n> Thanks. Pushed.\n>\n>\n>\nWe are hitting this leak in production on an 11.6 system for a query that\nis using a parallel hash join. Was this fix pushed in 11.7? I can't tell\nclearly from the release notes for 11.7 or this thread.\n\nThanks!\nJeremy",
"msg_date": "Fri, 6 Mar 2020 09:18:34 -0600",
"msg_from": "Jeremy Finzel <finzelj@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PHJ file leak."
},
{
"msg_contents": "Jeremy Finzel <finzelj@gmail.com> writes:\n> We are hitting this leak in production on an 11.6 system for a query that\n> is using a parallel hash join. Was this fix pushed in 11.7? I can't tell\n> clearly from the release notes for 11.7 or this thread.\n\nIt looks like you're asking about this commit:\n\nAuthor: Thomas Munro <tmunro@postgresql.org>\nBranch: master [76cbfcdf3] 2019-11-16 10:11:30 +1300\nBranch: REL_12_STABLE Release: REL_12_2 [24897e1a1] 2019-11-16 10:18:45 +1300\nBranch: REL_11_STABLE Release: REL_11_7 [bc049d0d4] 2019-11-16 10:19:16 +1300\n\n Always call ExecShutdownNode() if appropriate.\n\nwhich is documented thus in the 11.7 release notes:\n\n <listitem>\n<!--\nAuthor: Thomas Munro <tmunro@postgresql.org>\nBranch: master [76cbfcdf3] 2019-11-16 10:11:30 +1300\nBranch: REL_12_STABLE [24897e1a1] 2019-11-16 10:18:45 +1300\nBranch: REL_11_STABLE [bc049d0d4] 2019-11-16 10:19:16 +1300\n-->\n <para>\n Ensure parallel plans are always shut down at the correct time\n (Kyotaro Horiguchi)\n </para>\n\n <para>\n This oversight is known to result in <quote>temporary file\n leak</quote> warnings from multi-batch parallel hash joins.\n </para>\n </listitem>\n\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Mar 2020 10:43:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PHJ file leak."
},
{
"msg_contents": "On Fri, Mar 6, 2020 at 9:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jeremy Finzel <finzelj@gmail.com> writes:\n> > We are hitting this leak in production on an 11.6 system for a query that\n> > is using a parallel hash join. Was this fix pushed in 11.7? I can't tell\n> > clearly from the release notes for 11.7 or this thread.\n>\n> It looks like you're asking about this commit:\n>\n> Author: Thomas Munro <tmunro@postgresql.org>\n> Branch: master [76cbfcdf3] 2019-11-16 10:11:30 +1300\n> Branch: REL_12_STABLE Release: REL_12_2 [24897e1a1] 2019-11-16 10:18:45\n> +1300\n> Branch: REL_11_STABLE Release: REL_11_7 [bc049d0d4] 2019-11-16 10:19:16\n> +1300\n>\n> Always call ExecShutdownNode() if appropriate.\n>\n> which is documented thus in the 11.7 release notes:\n>\n> <listitem>\n> <!--\n> Author: Thomas Munro <tmunro@postgresql.org>\n> Branch: master [76cbfcdf3] 2019-11-16 10:11:30 +1300\n> Branch: REL_12_STABLE [24897e1a1] 2019-11-16 10:18:45 +1300\n> Branch: REL_11_STABLE [bc049d0d4] 2019-11-16 10:19:16 +1300\n> -->\n> <para>\n> Ensure parallel plans are always shut down at the correct time\n> (Kyotaro Horiguchi)\n> </para>\n>\n> <para>\n> This oversight is known to result in <quote>temporary file\n> leak</quote> warnings from multi-batch parallel hash joins.\n> </para>\n> </listitem>\n>\n>\n> regards, tom lane\n>\n\nThank you! Yep, pretty clear :).",
"msg_date": "Fri, 6 Mar 2020 09:47:49 -0600",
"msg_from": "Jeremy Finzel <finzelj@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PHJ file leak."
}
] |
[
{
"msg_contents": "Hi,\nCan anyone check this bug fix?\n\nThanks.\nRanier Vilela\n\n--- \\dll\\postgresql-12.0\\a\\backend\\commands\\event_trigger.c Mon Sep 30 17:06:55 2019\n+++ event_trigger.c Mon Nov 11 13:52:35 2019\n@@ -171,7 +171,7 @@\n HeapTuple tuple;\n Oid funcoid;\n Oid funcrettype;\n- Oid fargtypes[1]; /* dummy */\n+ Oid fargtypes[1] = {InvalidOid, InvalidOid}; /* dummy */\n Oid evtowner = GetUserId();\n ListCell *lc;\n List *tags = NULL;",
"msg_date": "Mon, 11 Nov 2019 18:28:47 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[BUG FIX] Uninitialized var fargtypes used."
},
{
"msg_contents": "On Mon, Nov 11, 2019 at 06:28:47PM +0000, Ranier Vilela wrote:\n> Can anyone check this bug fix?\n> \n> +++ event_trigger.c Mon Nov 11 13:52:35 2019\n> @@ -171,7 +171,7 @@\n> HeapTuple tuple;\n> Oid funcoid;\n> Oid funcrettype;\n> - Oid fargtypes[1]; /* dummy */\n> + Oid fargtypes[1] = {InvalidOid, InvalidOid}; /* dummy */\n> Oid evtowner = GetUserId();\n\nYeah, it would be better to fix this initialization.\n--\nMichael",
"msg_date": "Tue, 12 Nov 2019 12:31:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [BUG FIX] Uninitialized var fargtypes used."
},
{
"msg_contents": "At Tue, 12 Nov 2019 12:31:41 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Nov 11, 2019 at 06:28:47PM +0000, Ranier Vilela wrote:\n> > Can anyone check this bug fix?\n> > \n> > +++ event_trigger.c Mon Nov 11 13:52:35 2019\n> > @@ -171,7 +171,7 @@\n> > HeapTuple tuple;\n> > Oid funcoid;\n> > Oid funcrettype;\n> > - Oid fargtypes[1]; /* dummy */\n> > + Oid fargtypes[1] = {InvalidOid, InvalidOid}; /* dummy */\n> > Oid evtowner = GetUserId();\n> \n> Yeah, it would be better to fix this initialization.\n\nAgreed, but compiler should complain since the initializer is too\nlong. And I found at least five other instances of the same. Or there\nmight be similar cases.\n\n\nfind . -type f -exec egrep --color -nH --null -e 'LookupFuncName ?\\(.*, ?0,' \\{\\} +\n./pl/tcl/pltcl.c\u0000619:\tprocOid = LookupFuncName(namelist, 0, fargtypes, false);\n./backend/commands/trigger.c\u0000693:\t\tfuncoid = LookupFuncName(stmt->funcname, 0, fargtypes, false);\n./backend/commands/proclang.c\u0000108:\t\thandlerOid = LookupFuncName(funcname, 0, funcargtypes, true);\n./backend/commands/proclang.c\u0000266:\t\thandlerOid = LookupFuncName(stmt->plhandler, 0, funcargtypes, false);\n./backend/commands/event_trigger.c\u0000240:\tfuncoid = LookupFuncName(stmt->funcname, 0, fargtypes, false);\n./backend/commands/foreigncmds.c\u0000484:\thandlerOid = LookupFuncName((List *) handler->arg, 0, funcargtypes, false);\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 12 Nov 2019 15:27:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG FIX] Uninitialized var fargtypes used."
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 03:27:35PM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 12 Nov 2019 12:31:41 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > On Mon, Nov 11, 2019 at 06:28:47PM +0000, Ranier Vilela wrote:\n> > > Can anyone check this bug fix?\n> > > \n> > > +++ event_trigger.c Mon Nov 11 13:52:35 2019\n> > > @@ -171,7 +171,7 @@\n> > > HeapTuple tuple;\n> > > Oid funcoid;\n> > > Oid funcrettype;\n> > > - Oid fargtypes[1]; /* dummy */\n> > > + Oid fargtypes[1] = {InvalidOid, InvalidOid}; /* dummy */\n> > > Oid evtowner = GetUserId();\n> > \n> > Yeah, it would be better to fix this initialization.\n> \n> Agreed, but compiler should complain since the initializer is too\n> long. And I found at least five other instances of the same. Or there\n> might be similar cases.\n\nWould you like to write a patch with everything you found? I have\ncommented on a rather similar topic about the style of the\ninitialization close to here:\nhttps://www.postgresql.org/message-id/3378.1571684676@sss.pgh.pa.us\n\nHowever, if it comes to InvalidOid and if we are talking about only\none element, I think that we should just assign the value without\nmemset.\n--\nMichael",
"msg_date": "Tue, 12 Nov 2019 17:46:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [BUG FIX] Uninitialized var fargtypes used."
},
{
"msg_contents": "Hi,\nSorry for the error in the patch.\n\n--- \\dll\\postgresql-12.0\\a\\backend\\commands\\event_trigger.c Mon Sep 30 17:06:55 2019\n+++ event_trigger.c Tue Nov 12 08:34:30 2019\n@@ -171,7 +171,7 @@\n HeapTuple tuple;\n Oid funcoid;\n Oid funcrettype;\n- Oid fargtypes[1]; /* dummy */\n+ Oid fargtypes[1] = {InvalidOid}; /* dummy */\n Oid evtowner = GetUserId();\n ListCell *lc;\n List *tags = NULL;\n\n\n________________________________\nFrom: Michael Paquier <michael@paquier.xyz>\nSent: Tuesday, November 12, 2019 03:31\nTo: Ranier Vilela <ranier_gyn@hotmail.com>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: [BUG FIX] Uninitialized var fargtypes used.\n\nOn Mon, Nov 11, 2019 at 06:28:47PM +0000, Ranier Vilela wrote:\n> Can anyone check this bug fix?\n>\n> +++ event_trigger.c Mon Nov 11 13:52:35 2019\n> @@ -171,7 +171,7 @@\n> HeapTuple tuple;\n> Oid funcoid;\n> Oid funcrettype;\n> - Oid fargtypes[1]; /* dummy */\n> + Oid fargtypes[1] = {InvalidOid, InvalidOid}; /* dummy */\n> Oid evtowner = GetUserId();\n\nYeah, it would be better to fix this initialization.\n--\nMichael",
"msg_date": "Tue, 12 Nov 2019 11:38:16 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [BUG FIX] Uninitialized var fargtypes used."
}
] |
[
{
"msg_contents": "I happened to notice that find_expr_references_walker has not\nbeen taught anything about TableFunc nodes, which means it will\nmiss the type and collation OIDs embedded in such a node.\n\nThis can be demonstrated to be a problem by the attached script,\nwhich will end up with a \"cache lookup failed for type NNNNN\"\nerror because we allow dropping a type the XMLTABLE construct\nreferences.\n\nThis isn't hard to fix, as per the attached patch, but it makes\nme nervous. I wonder what other dependencies we might be missing.\n\nWould it be a good idea to move find_expr_references_walker to\nnodeFuncs.c, in hopes of making it more visible to people adding\nnew node types? We could decouple it from the specific use-case\nof recordDependencyOnExpr by having it call a callback function\nfor each identified OID; although maybe there's no point in that,\nsince I'm not sure there are any other use-cases.\n\nAnother thought is that maybe the code could be automatically\ngenerated, as Andres has been threatening to do with respect\nto the other stuff in backend/nodes/.\n\nIn practice, this bug is probably not a huge problem, because a\nview that involves a column of type X will likely have some other\ndependencies on X. I had to tweak the example view a bit to get\nit to not have any other dependencies on \"seg\". So I'm not feeling\nthat this is a stop-ship problem for today's releases --- I'll plan\non installing the fix after the releases are tagged.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 11 Nov 2019 16:41:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "\n\nOn 11/11/19 1:41 PM, Tom Lane wrote:\n> Would it be a good idea to move find_expr_references_walker to\n> nodeFuncs.c, in hopes of making it more visible to people adding\n> new node types?\n\nI'm not sure that would be enough. The logic of that function is not \nimmediately obvious, and where to add a node to it might not occur to \npeople. If the repeated use of\n\n else if (IsA(node, XXX))\n\nwere replaced with\n\n switch (nodeTag(node)) {\n case XXX:\n\nthen the compiler, ala -Wswitch, would alert folks when they forget to \nhandle a new node type.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Mon, 11 Nov 2019 14:33:24 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "On 11/11/19 2:33 PM, Mark Dilger wrote:\n> \n> \n> On 11/11/19 1:41 PM, Tom Lane wrote:\n>> Would it be a good idea to move find_expr_references_walker to\n>> nodeFuncs.c, in hopes of making it more visible to people adding\n>> new node types?\n> \n> I'm not sure that would be enough.  The logic of that function is not \n> immediately obvious, and where to add a node to it might not occur to \n> people.  If the repeated use of\n> \n>     else if (IsA(node, XXX))\n> \n> were replaced with\n> \n>     switch (nodeTag(node)) {\n>         case XXX:\n> \n> then the compiler, ala -Wswitch, would alert folks when they forget to \n> handle a new node type.\n> \n\nI played with this a bit, making the change I proposed, and got lots of \nwarnings from the compiler. I don't know how many of these would need \nto be suppressed by adding a no-op for them at the end of the switch vs. \nhow many need to be handled, but the attached patch implements the idea. \n I admit adding all these extra cases to the end is verbose....\n\nThe change as written is much too verbose to be acceptable, but given \nhow many places in the code could really use this sort of treatment, I \nwonder if there is a way to reorganize the NodeTag enum into multiple \nenums, one for each logical subtype (such as executor nodes, plan nodes, \netc) and then have switches over enums of the given subtype, with the \ncompiler helping detect tags of same subtype that are unhandled in the \nswitch.\n\nI have added enough nodes over the years, and spent enough time tracking \ndown all the parts of the code that need updating for a new node, to say \nthat this would be very helpful if we could make it work. I have not \ndone the research yet on how many places would be made less elegant by \nsuch a change, though. I think I'll go look into that a bit....\n\n\n\n-- \nMark Dilger",
"msg_date": "Mon, 11 Nov 2019 17:42:45 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> I played with this a bit, making the change I proposed, and got lots of \n> warnings from the compiler. I don't know how many of these would need \n> to be suppressed by adding a no-op for them at the end of the switch vs. \n> how many need to be handled, but the attached patch implements the idea. \n> I admit adding all these extra cases to the end is verbose....\n\nYeah, that's why it's not done that way ...\n\n> The change as written is much too verbose to be acceptable, but given \n> how many places in the code could really use this sort of treatment, I \n> wonder if there is a way to reorganize the NodeTag enum into multiple \n> enums, one for each logical subtype (such as executor nodes, plan nodes, \n> etc) and then have switches over enums of the given subtype, with the \n> compiler helping detect tags of same subtype that are unhandled in the \n> switch.\n\nThe problem here is that the set of nodes of interest can vary depending\non what you're doing. As a case in point, find_expr_references has to\ncover both expression nodes and some things that aren't expression nodes\nbut can represent dependencies of a plan tree.\n\nI think that the long-term answer, if Andres gets somewhere with his\nproject to autogenerate code like this, is that we'd rely on annotating\nthe struct declarations to tell us what to do. In the case at hand,\nI could imagine annotations that say \"this field contains a function OID\"\nor \"this list contains collation OIDs\" and then the find_expr_references\nlogic could be derived from that. Now, that's not perfect either, because\nit's always possible to forget to annotate something. But it'd be a lot\neasier, because there'd be tons of nearby examples of doing it right.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Nov 2019 10:19:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-12 10:19:30 -0500, Tom Lane wrote:\n> I think that the long-term answer, if Andres gets somewhere with his\n> project to autogenerate code like this, is that we'd rely on annotating\n> the struct declarations to tell us what to do. In the case at hand,\n> I could imagine annotations that say \"this field contains a function OID\"\n> or \"this list contains collation OIDs\" and then the find_expr_references\n> logic could be derived from that. Now, that's not perfect either, because\n> it's always possible to forget to annotate something. But it'd be a lot\n> easier, because there'd be tons of nearby examples of doing it right.\n\nYea, I think that'd be going in the right direction.\n\nI've a few annotations for other purposes in my local version of the\npatch (e.g. to ignore fields for comparison), and adding further ones\nfor purposes like this ought to be easy.\n\nI want to attach some annotations to types, rather than fields. I\ne.g. introduced a Location typedef, annotated as being ignored for\nequality purposes, instead of annotating each 'int location'. Wonder if\nwe should also do something like that for your hypothetical \"function\nOID\" etc. above - seems like it also might give the human reader of code\na hint.\n\n\n\nOn 2019-11-11 16:41:41 -0500, Tom Lane wrote:\n> I happened to notice that find_expr_references_walker has not\n> been taught anything about TableFunc nodes, which means it will\n> miss the type and collation OIDs embedded in such a node.\n\n> Would it be a good idea to move find_expr_references_walker to\n> nodeFuncs.c, in hopes of making it more visible to people adding\n> new node types?\n\nCan't hurt, at least. Reducing the number of files that need to be\nfairly mechanically be touched when adding a node type / node type\nfield strikes me as a good idea.\n\nWonder if there's any way to write an assertion check that verifies we\nhave the necessary dependencies. 
But the only idea I have - basically\nrecord all the syscache lookups while parse analysing an expression, and\nthen check that all the necessary dependencies exist - seems too\ncomplicated to be worthwhile.\n\n\n> We could decouple it from the specific use-case\n> of recordDependencyOnExpr by having it call a callback function\n> for each identified OID; although maybe there's no point in that,\n> since I'm not sure there are any other use-cases.\n\nI could see it being useful for a few other purposes, e.g. it seems\n*marginally* possible we could share *some* code with\nextract_query_dependencies(). But I think I'd only go there if we\nactually convert something else to it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Nov 2019 11:47:20 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-11-12 10:19:30 -0500, Tom Lane wrote:\n>> I could imagine annotations that say \"this field contains a function OID\"\n>> or \"this list contains collation OIDs\" and then the find_expr_references\n>> logic could be derived from that.\n\n> I want to attach some annotations to types, rather than fields. I\n> e.g. introduced a Location typedef, annotated as being ignored for\n> equality purposes, instead of annotating each 'int location'. Wonder if\n> we should also do something like that for your hypothetical \"function\n> OID\" etc. above - seems like it also might give the human reader of code\n> a hint.\n\nHm. We could certainly do \"typedef FunctionOid Oid;\",\n\"typedef CollationOidList List;\" etc, but I think it'd get pretty\ntedious pretty quickly --- just for this one purpose, you'd need\na couple of typedefs for every system catalog that contains\nquery-referenceable OIDs. Maybe that's better than comment-style\nannotations, but I'm not convinced.\n\n> Wonder if there's any way to write an assertion check that verifies we\n> have the necessary dependencies. But the only idea I have - basically\n> record all the syscache lookups while parse analysing an expression, and\n> then check that all the necessary dependencies exist - seems too\n> complicated to be worthwhile.\n\nYeah, it's problematic. One issue there that I'm not sure how to\nresolve with autogenerated code, much less automated checking, is that\nquite a few cases in find_expr_references know that we don't need to\nrecord a dependency on an OID stored in the node because there's an\nindirect dependency on something else. 
For example, in FuncExpr we\nneedn't log funcresulttype because the funcid is enough dependency,\nand we needn't log either funccollid or inputcollid because those are\nderived from the input expressions or the function result type.\n(And giving up those optimizations would be pretty costly, 4x more\ndependency checks in this example alone.)\n\nFor sure I don't want both \"CollationOid\" and \"RedundantCollationOid\"\ntypedefs, so it seems like annotation is the solution for this, but\nI see no reasonable way to automatically verify such annotations.\nStill, just writing down the annotations would be a way to expose\nsuch assumptions for manual checking, which we don't really have now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Nov 2019 15:32:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "On 2019-11-12 15:32:14 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-11-12 10:19:30 -0500, Tom Lane wrote:\n> >> I could imagine annotations that say \"this field contains a function OID\"\n> >> or \"this list contains collation OIDs\" and then the find_expr_references\n> >> logic could be derived from that.\n> \n> > I want to attach some annotations to types, rather than fields. I\n> > e.g. introduced a Location typedef, annotated as being ignored for\n> > equality purposes, instead of annotating each 'int location'. Wonder if\n> > we should also do something like that for your hypothetical \"function\n> > OID\" etc. above - seems like it also might give the human reader of code\n> > a hint.\n> \n> Hm. We could certainly do \"typedef FunctionOid Oid;\",\n> \"typedef CollationOidList List;\" etc, but I think it'd get pretty\n> tedious pretty quickly --- just for this one purpose, you'd need\n> a couple of typedefs for every system catalog that contains\n> query-referenceable OIDs. Maybe that's better than comment-style\n> annotations, but I'm not convinced.\n\nI'm not saying that we should exclusively do so, just that it's\nworthwhile for a few frequent cases.\n\n\n> One issue there that I'm not sure how to resolve with autogenerated\n> code, much less automated checking, is that quite a few cases in\n> find_expr_references know that we don't need to record a dependency on\n> an OID stored in the node because there's an indirect dependency on\n> something else. For example, in FuncExpr we needn't log\n> funcresulttype because the funcid is enough dependency, and we needn't\n> log either funccollid or inputcollid because those are derived from\n> the input expressions or the function result type. (And giving up\n> those optimizations would be pretty costly, 4x more dependency checks\n> in this example alone.)\n\nYea, that one is hard. 
I suspect the best way to address that is to have\nexplicit code for a few cases that are worth optimizing (like\ne.g. FuncExpr), and for the rest use the generic logic using. I'd so\nfar just written the specialized cases into the \"generated metadata\"\nusing code - but we also could have an annotation that instructs to\ninstead call some function, but I doubt that's worthwhile.\n\n\n> For sure I don't want both \"CollationOid\" and \"RedundantCollationOid\"\n> typedefs\n\nIndeed.\n\n\n> so it seems like annotation is the solution for this\n\nI'm not even sure annotations are going to be the easiest way to\nimplement some of the more complicated edge cases. Might be easier to\njust open-code those, and fall back to generic logic for the rest. We'll\nhave to see, I think.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Nov 2019 13:21:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "\n\nOn 11/11/19 1:41 PM, Tom Lane wrote:\n> I happened to notice that find_expr_references_walker has not\n> been taught anything about TableFunc nodes, which means it will\n> miss the type and collation OIDs embedded in such a node.\n> \n> This can be demonstrated to be a problem by the attached script,\n> which will end up with a \"cache lookup failed for type NNNNN\"\n> error because we allow dropping a type the XMLTABLE construct\n> references.\n> \n> This isn't hard to fix, as per the attached patch, but it makes\n> me nervous. I wonder what other dependencies we might be missing.\n\nI can consistently generate errors like the following in master:\n\n ERROR: cache lookup failed for statistics object 31041\n\nThis happens in a stress test in which multiple processes are making \nchanges to the schema. So far, all the sessions that report this cache \nlookup error do so when performing one of ANALYZE, VACUUM ANALYZE, \nUPDATE, DELETE or EXPLAIN ANALYZE on a table that has MCV statistics. \nAll processes running simultaneously are running the same set of \nfunctions, which create and delete tables, indexes, and statistics \nobjects, insert, update, and delete rows in those tables, etc.\n\nThe fact that the statistics are of the MCV type might not be relevant; \nI'm creating those on tables as part of testing Tomas Vondra's MCV \nstatistics patch, so all the tables have statistics of that kind on them.\n\nI can try to distill my test case a bit, but first I'd like to know if \nyou are interested. Currently, the patch is over 2.2MB, gzip'd. I'll \nonly bother distilling it if you don't already know about these cache \nlookup failures.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Wed, 13 Nov 2019 15:00:03 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 03:00:03PM -0800, Mark Dilger wrote:\n>\n>\n>On 11/11/19 1:41 PM, Tom Lane wrote:\n>>I happened to notice that find_expr_references_walker has not\n>>been taught anything about TableFunc nodes, which means it will\n>>miss the type and collation OIDs embedded in such a node.\n>>\n>>This can be demonstrated to be a problem by the attached script,\n>>which will end up with a \"cache lookup failed for type NNNNN\"\n>>error because we allow dropping a type the XMLTABLE construct\n>>references.\n>>\n>>This isn't hard to fix, as per the attached patch, but it makes\n>>me nervous. I wonder what other dependencies we might be missing.\n>\n>I can consistently generate errors like the following in master:\n>\n> ERROR: cache lookup failed for statistics object 31041\n>\n>This happens in a stress test in which multiple processes are making \n>changes to the schema. So far, all the sessions that report this \n>cache lookup error do so when performing one of ANALYZE, VACUUM \n>ANALYZE, UPDATE, DELETE or EXPLAIN ANALYZE on a table that has MCV \n>statistics. All processes running simultaneously are running the same \n>set of functions, which create and delete tables, indexes, and \n>statistics objects, insert, update, and delete rows in those tables, \n>etc.\n>\n>The fact that the statistics are of the MCV type might not be \n>relevant; I'm creating those on tables as part of testing Tomas \n>Vondra's MCV statistics patch, so all the tables have statistics of \n>that kind on them.\n>\n\nHmmm, I don't know the details of the test, but this seems a bit like\nwe're trying to use the stats during estimation but it got dropped\nmeanwhile. 
If that's the case, it probably affects all stats types, not\njust MCV lists - there should no significant difference between\ndifferent statistics types, I think.\n\nI've managed to reproduce this with a stress-test, and I do get these\nfailures with both dependencies and mcv stats, although in slightly\ndifferent places.\n\nAnd I think I see the issue - when dropping the statistics, we do\nRemoveObjects which however does not acquire any lock on the table. So\nwe get the list of stats (without the serialized data), but before we\nget to load the contents, someone drops it. If that's the root cause,\nit's there since pg 10.\n\nI'm not sure what's the right solution. An straightforward option would\nbe to lock the relation, but will that work after adding support for\nstats on joins? An alternative would be to just ignore those failures,\nbut that kinda breaks the estimation (we should have picked a different\nstats in this case).\n\n>I can try to distill my test case a bit, but first I'd like to know if \n>you are interested. Currently, the patch is over 2.2MB, gzip'd. I'll \n>only bother distilling it if you don't already know about these cache \n>lookup failures.\n>\n\nNot sure. But I do wonder if we see the same issue.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 14 Nov 2019 01:46:31 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> On 11/11/19 1:41 PM, Tom Lane wrote:\n>> I happened to notice that find_expr_references_walker has not\n>> been taught anything about TableFunc nodes, which means it will\n>> miss the type and collation OIDs embedded in such a node.\n\n> I can consistently generate errors like the following in master:\n> ERROR: cache lookup failed for statistics object 31041\n\nThis is surely a completely different issue --- there are not,\none hopes, any extended-stats OIDs embedded in views or other\nquery trees.\n\nI concur with Tomas' suspicion that this must be a race condition\nduring DROP STATISTICS. If we're going to allow people to do that\nseparately from dropping the table(s), there has to be some kind of\nlocking around it, and it sounds like there's not :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Nov 2019 20:37:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "On 11/13/19 4:46 PM, Tomas Vondra wrote:\n> On Wed, Nov 13, 2019 at 03:00:03PM -0800, Mark Dilger wrote:\n>>\n>>\n>> On 11/11/19 1:41 PM, Tom Lane wrote:\n>>> I happened to notice that find_expr_references_walker has not\n>>> been taught anything about TableFunc nodes, which means it will\n>>> miss the type and collation OIDs embedded in such a node.\n>>>\n>>> This can be demonstrated to be a problem by the attached script,\n>>> which will end up with a \"cache lookup failed for type NNNNN\"\n>>> error because we allow dropping a type the XMLTABLE construct\n>>> references.\n>>>\n>>> This isn't hard to fix, as per the attached patch, but it makes\n>>> me nervous. I wonder what other dependencies we might be missing.\n>>\n>> I can consistently generate errors like the following in master:\n>>\n>> ERROR: cache lookup failed for statistics object 31041\n>>\n>> This happens in a stress test in which multiple processes are making \n>> changes to the schema. So far, all the sessions that report this \n>> cache lookup error do so when performing one of ANALYZE, VACUUM \n>> ANALYZE, UPDATE, DELETE or EXPLAIN ANALYZE on a table that has MCV \n>> statistics. All processes running simultaneously are running the same \n>> set of functions, which create and delete tables, indexes, and \n>> statistics objects, insert, update, and delete rows in those tables, etc.\n>>\n>> The fact that the statistics are of the MCV type might not be \n>> relevant; I'm creating those on tables as part of testing Tomas \n>> Vondra's MCV statistics patch, so all the tables have statistics of \n>> that kind on them.\n>>\n> \n> Hmmm, I don't know the details of the test, but this seems a bit like\n> we're trying to use the stats during estimation but it got dropped\n> meanwhile. 
If that's the case, it probably affects all stats types, not\n> just MCV lists - there should no significant difference between\n> different statistics types, I think.\n> \n> I've managed to reproduce this with a stress-test, and I do get these\n> failures with both dependencies and mcv stats, although in slightly\n> different places.\n> \n> And I think I see the issue - when dropping the statistics, we do\n> RemoveObjects which however does not acquire any lock on the table. So\n> we get the list of stats (without the serialized data), but before we\n> get to load the contents, someone drops it. If that's the root cause,\n> it's there since pg 10.\n> \n> I'm not sure what's the right solution. An straightforward option would\n> be to lock the relation, but will that work after adding support for\n> stats on joins? An alternative would be to just ignore those failures,\n> but that kinda breaks the estimation (we should have picked a different\n> stats in this case).\n> \n>> I can try to distill my test case a bit, but first I'd like to know if \n>> you are interested. Currently, the patch is over 2.2MB, gzip'd. I'll \n>> only bother distilling it if you don't already know about these cache \n>> lookup failures.\n>>\n> \n> Not sure. But I do wonder if we see the same issue.\n\nI don't know. 
If you want to reproduce what I'm seeing....\n\nI added a parallel_schedule target:\n\ndiff --git a/src/test/regress/parallel_schedule \nb/src/test/regress/parallel_schedule\nindex fc0f14122b..5ace7c7a8a 100644\n--- a/src/test/regress/parallel_schedule\n+++ b/src/test/regress/parallel_schedule\n@@ -85,6 +85,8 @@ test: create_table_like alter_generic alter_operator \nmisc async dbsize misc_func\n # collate.*.utf8 tests cannot be run in parallel with each other\n test: rules psql psql_crosstab amutils stats_ext collate.linux.utf8\n\n+test: mcv_huge_stress_a mcv_huge_stress_b mcv_huge_stress_c \nmcv_huge_stress_d mcv_huge_stress_e mcv_huge_stress_f mcv_huge_stress_g\n+\n # run by itself so it can run parallel workers\n test: select_parallel\n test: write_parallel\n\n\nAnd used the attached script to generate the contents of the seven \nparallel tests. If you want to duplicate this, you'll have to manually \nrun gen.pl and direct its output to those src/test/regress/sql/ files. \nThe src/test/regress/expected/ files are just empty, as I don't care \nabout whether the test results match. I'm just checking what kinds of \nerrors I get and whether any of them are concerning.\n\nAfter my most recent run of the stress tests, I grep'd for cache \nfailures and got 23 of them, all coming from get_relation_statistics(), \nstatext_store() and statext_mcv_load(). Two different adjacent spots in \nget_relation_statistics() were involved:\n\n htup = SearchSysCache1(STATEXTOID, ObjectIdGetDatum(statOid));\n if (!HeapTupleIsValid(htup))\n elog(ERROR, \"cache lookup failed for statistics object %u\", \nstatOid);\n staForm = (Form_pg_statistic_ext) GETSTRUCT(htup);\n\n dtup = SearchSysCache1(STATEXTDATASTXOID, \nObjectIdGetDatum(statOid));\n if (!HeapTupleIsValid(dtup))\n elog(ERROR, \"cache lookup failed for statistics object %u\", \nstatOid);\n\nMost were from the first SearchSysCache1 call, but one of them was from \nthe second.\n\n-- \nMark Dilger",
"msg_date": "Wed, 13 Nov 2019 17:38:02 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 08:37:59PM -0500, Tom Lane wrote:\n>Mark Dilger <hornschnorter@gmail.com> writes:\n>> On 11/11/19 1:41 PM, Tom Lane wrote:\n>>> I happened to notice that find_expr_references_walker has not\n>>> been taught anything about TableFunc nodes, which means it will\n>>> miss the type and collation OIDs embedded in such a node.\n>\n>> I can consistently generate errors like the following in master:\n>> ERROR: cache lookup failed for statistics object 31041\n>\n>This is surely a completely different issue --- there are not,\n>one hopes, any extended-stats OIDs embedded in views or other\n>query trees.\n>\n>I concur with Tomas' suspicion that this must be a race condition\n>during DROP STATISTICS. If we're going to allow people to do that\n>separately from dropping the table(s), there has to be some kind of\n>locking around it, and it sounds like there's not :-(\n>\n\nI think the right thing to do is simply acquire AE lock on the relation\nin RemoveStatisticsById, per the attached patch. It's possible we'll\nneed to do something more complicated once join stats are added, but\nfor now this should be enough (and backpatchable).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 14 Nov 2019 22:31:24 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Wed, Nov 13, 2019 at 08:37:59PM -0500, Tom Lane wrote:\n>> I concur with Tomas' suspicion that this must be a race condition\n>> during DROP STATISTICS. If we're going to allow people to do that\n>> separately from dropping the table(s), there has to be some kind of\n>> locking around it, and it sounds like there's not :-(\n\n> I think the right thing to do is simply acquire AE lock on the relation\n> in RemoveStatisticsById, per the attached patch. It's possible we'll\n> need to do something more complicated once join stats are added, but\n> for now this should be enough (and backpatchable).\n\nHm. No, it's not enough, unless you add more logic to deal with the\npossibility that the stats object is gone by the time you have the\ntable lock. Consider e.g. two concurrent DROP STATISTICS commands,\nor a statistics drop that's cascading from something like a drop\nof a relevant function and so has no earlier table lock.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Nov 2019 16:36:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "On Thu, Nov 14, 2019 at 04:36:54PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Wed, Nov 13, 2019 at 08:37:59PM -0500, Tom Lane wrote:\n>>> I concur with Tomas' suspicion that this must be a race condition\n>>> during DROP STATISTICS. If we're going to allow people to do that\n>>> separately from dropping the table(s), there has to be some kind of\n>>> locking around it, and it sounds like there's not :-(\n>\n>> I think the right thing to do is simply acquire AE lock on the relation\n>> in RemoveStatisticsById, per the attached patch. It's possible we'll\n>> need to do something more complicated once join stats are added, but\n>> for now this should be enough (and backpatchable).\n>\n>Hm. No, it's not enough, unless you add more logic to deal with the\n>possibility that the stats object is gone by the time you have the\n>table lock. Consider e.g. two concurrent DROP STATISTICS commands,\n>or a statistics drop that's cascading from something like a drop\n>of a relevant function and so has no earlier table lock.\n>\n\nIsn't that solved by RemoveObjects() doing this?\n\n /* Get an ObjectAddress for the object. */\n address = get_object_address(stmt->removeType,\n object,\n &relation,\n AccessExclusiveLock,\n stmt->missing_ok);\n\nI've actually done some debugging before sending the patch, and I think\nthis prevent the issue you describe.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 14 Nov 2019 23:22:32 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "On 2019-Nov-14, Tomas Vondra wrote:\n\n> Isn't that solved by RemoveObjects() doing this?\n> \n> /* Get an ObjectAddress for the object. */\n> address = get_object_address(stmt->removeType,\n> object,\n> &relation,\n> AccessExclusiveLock,\n> stmt->missing_ok);\n> \n> I've actually done some debugging before sending the patch, and I think\n> this prevent the issue you describe.\n\nHmm .. shouldn't get_statistics_object_oid get a lock on the table that\nowns the stats object too? I think it should be setting *relp to it.\nThat way, the lock you're proposing to add would be obtained there.\nThat means it'd be similar to what we do for OBJECT_TRIGGER etc,\nget_object_address_relobject().\n\nI admit this'd crash and burn if we had stats on multiple relations,\nbecause there'd be no way to return the multiple relations that would\nend up locked.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 14 Nov 2019 19:27:29 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Thu, Nov 14, 2019 at 04:36:54PM -0500, Tom Lane wrote:\n>> Hm. No, it's not enough, unless you add more logic to deal with the\n>> possibility that the stats object is gone by the time you have the\n>> table lock. Consider e.g. two concurrent DROP STATISTICS commands,\n>> or a statistics drop that's cascading from something like a drop\n>> of a relevant function and so has no earlier table lock.\n\n> Isn't that solved by RemoveObjects() doing this?\n\n> /* Get an ObjectAddress for the object. */\n> address = get_object_address(stmt->removeType,\n> object,\n> &relation,\n> AccessExclusiveLock,\n> stmt->missing_ok);\n\nAh, I see, we already have AEL on the stats object itself. So that\neliminates my concern about a race between two RemoveStatisticsById\ncalls, but what we have instead is fear of deadlock. A DROP STATISTICS\ncommand will acquire AEL on the stats object but then AEL on the table,\nthe opposite of what will happen during DROP TABLE, so concurrent\nexecutions of those will deadlock. That might be better than the\nfailures Mark is seeing now, but not by much.\n\nA correct fix I think is that the planner ought to acquire AccessShareLock\non a stats object it's trying to use (and then recheck whether the object\nis still there). That seems rather expensive, but there may be no other\nway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Nov 2019 17:35:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "On Thu, Nov 14, 2019 at 07:27:29PM -0300, Alvaro Herrera wrote:\n>On 2019-Nov-14, Tomas Vondra wrote:\n>\n>> Isn't that solved by RemoveObjects() doing this?\n>>\n>> /* Get an ObjectAddress for the object. */\n>> address = get_object_address(stmt->removeType,\n>> object,\n>> &relation,\n>> AccessExclusiveLock,\n>> stmt->missing_ok);\n>>\n>> I've actually done some debugging before sending the patch, and I think\n>> this prevent the issue you describe.\n>\n>Hmm .. shouldn't get_statistics_object_oid get a lock on the table that\n>owns the stats object too? I think it should be setting *relp to it.\n>That way, the lock you're proposing to add would be obtained there.\n>That means it'd be similar to what we do for OBJECT_TRIGGER etc,\n>get_object_address_relobject().\n>\n\nHmmm, maybe. We'd have to fake the list of names, because that function\nexpects the relation name to be included in the list of names, and we\ndon't have that for extended stats. But it might work, I guess.\n\n>I admit this'd crash and burn if we had stats on multiple relations,\n>because there'd be no way to return the multiple relations that would\n>end up locked.\n>\n\nI think that's less important now. If we ever get that feature, we'll\nneed to make that work somehow.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 15 Nov 2019 00:06:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
},
{
"msg_contents": "On Thu, Nov 14, 2019 at 05:35:06PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Thu, Nov 14, 2019 at 04:36:54PM -0500, Tom Lane wrote:\n>>> Hm. No, it's not enough, unless you add more logic to deal with the\n>>> possibility that the stats object is gone by the time you have the\n>>> table lock. Consider e.g. two concurrent DROP STATISTICS commands,\n>>> or a statistics drop that's cascading from something like a drop\n>>> of a relevant function and so has no earlier table lock.\n>\n>> Isn't that solved by RemoveObjects() doing this?\n>\n>> /* Get an ObjectAddress for the object. */\n>> address = get_object_address(stmt->removeType,\n>> object,\n>> &relation,\n>> AccessExclusiveLock,\n>> stmt->missing_ok);\n>\n>Ah, I see, we already have AEL on the stats object itself. So that\n>eliminates my concern about a race between two RemoveStatisticsById\n>calls, but what we have instead is fear of deadlock. A DROP STATISTICS\n>command will acquire AEL on the stats object but then AEL on the table,\n>the opposite of what will happen during DROP TABLE, so concurrent\n>executions of those will deadlock. That might be better than the\n>failures Mark is seeing now, but not by much.\n>\n\nHmmm, yeah :-(\n\n>A correct fix I think is that the planner ought to acquire AccessShareLock\n>on a stats object it's trying to use (and then recheck whether the object\n>is still there). That seems rather expensive, but there may be no other\n>way.\n\nYes, so something like for indexes, although we don't need the recheck\nin that case. I think the attached patch does that (but it's 1AM here).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 15 Nov 2019 00:28:57 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing dependency tracking for TableFunc nodes"
}
] |
[
{
"msg_contents": "I am developed my own PostgreSQL extension for learning purpose and it is working correctly but I want to know to which components of the database is my own extension components communicate. For example I have c code, make file sql script, and control file after compiling the make file to which components of the database are each of my extension components to communicate. Thanks for your response.\n\n\nRegards,\n____________________________________\nYonathan Misgan\nAssistant Lecturer, @ Debre Tabor University\nFaculty of Technology\nDepartment of Computer Science\nStudying MSc in Computer Science (in Data and Web Engineering)\n@ Addis Ababa University\nE-mail: yonamis@dtu.edu.et<mailto:yonamis@dtu.edu.et>\n yonathanmisgan.4@gmail.com<mailto:yonathanmisgan.4@gmail.com>\nTel: (+251)-911180185 (mob)",
"msg_date": "Tue, 12 Nov 2019 06:54:09 +0000",
"msg_from": "Yonatan Misgan <yonamis@dtu.edu.et>",
"msg_from_op": true,
"msg_subject": "Extension development"
},
{
"msg_contents": "Hi Yonatan,\n\nHere is an attempt to explain the components of the extension:\n\nMakefile:\nMakefile provides a way to compile your C code. Postgres provides an\ninfrastructure called PGXS for building the extensions against installed\nPostgres server. More of this can be found in official documentation[1].\n\nControl file:\nIt specifies some properties/metadata about the extension, like version,\ncomments, directory etc. Official documentation[2]\n\nSQL Script:\nThis file should be of format extension—version.sql which will have the\nfunctions that are either pure SQL functions, or interfaces for your C\nfunctions and other SQL objects to assist your functions etc. This will be\nexecuted internally by “CREATE EXTENSION” command.\n\nC code:\nYour C code is real implementation of your extension. Here you can have C\nimplementations of SQL interface functions your declared in your .sql\nscript file, register callbacks e.g. things you want to do post parse,\nbefore execution of a query etc. The filename can be anything but you\nshould have PG_MODULE_MAGIC included in your C file.\n\nUsing this infrastructure one can simply do make, make install and then\n“CREATE EXTENSION” command to create objects. This helps keeping track of\nall the extension objects together, create them at once, and drop once with\n“DROP EXTENSION” command. Here[3] is complete documentation for extension.\n\nRegards,\nJeevan Ladhe\n\n[1] https://www.postgresql.org/docs/current/extend-pgxs.html\n[2]\nhttps://www.postgresql.org/docs/current/extend-extensions.html#id-1.8.3.20.12\n[3] https://www.postgresql.org/docs/current/extend-extensions.html\n\nOn Tue, Nov 12, 2019 at 12:24 PM Yonatan Misgan <yonamis@dtu.edu.et> wrote:\n\n> I am developed my own PostgreSQL extension for learning purpose and it is\n> working correctly but I want to know to which components of the database is\n> my own extension components communicate. 
For example I have c code, make\n> file sql script, and control file after compiling the make file to which\n> components of the database are each of my extension components to\n> communicate. Thanks for your response.\n>\n>\n>\n> Regards,\n>\n> ____________________________________\n>\n> *Yonathan Misgan *\n>\n> *Assistant Lecturer, @ Debre Tabor University*\n>\n> *Faculty of Technology*\n>\n> *Department of Computer Science*\n>\n> *Studying MSc in **Computer Science** (in Data and Web Engineering) *\n>\n> *@ Addis Ababa University*\n>\n> *E-mail: yonamis@dtu.edu.et <yonamis@dtu.edu.et>*\n>\n> * yonathanmisgan.4@gmail.com <yonathanmisgan.4@gmail.com>*\n>\n> *Tel: **(+251)-911180185 (mob)*\n>",
"msg_date": "Tue, 12 Nov 2019 16:00:37 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Extension development"
},
{
"msg_contents": "Hi Yonatan,\n\nYou can follow this blog for creating your own extension in PostgreSQL..\n\nhttps://www.highgo.ca/2019/10/01/a-guide-to-create-user-defined-extension-modules-to-postgres/\n\n-- Ahsan\n\nOn Tue, Nov 12, 2019 at 11:54 AM Yonatan Misgan <yonamis@dtu.edu.et> wrote:\n\n> I am developed my own PostgreSQL extension for learning purpose and it is\n> working correctly but I want to know to which components of the database is\n> my own extension components communicate. For example I have c code, make\n> file sql script, and control file after compiling the make file to which\n> components of the database are each of my extension components to\n> communicate. Thanks for your response.\n>\n>\n>\n> Regards,\n>\n> ____________________________________\n>\n> *Yonathan Misgan *\n>\n> *Assistant Lecturer, @ Debre Tabor University*\n>\n> *Faculty of Technology*\n>\n> *Department of Computer Science*\n>\n> *Studying MSc in **Computer Science** (in Data and Web Engineering) *\n>\n> *@ Addis Ababa University*\n>\n> *E-mail: yonamis@dtu.edu.et <yonamis@dtu.edu.et>*\n>\n> * yonathanmisgan.4@gmail.com <yonathanmisgan.4@gmail.com>*\n>\n> *Tel: **(+251)-911180185 (mob)*\n>\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nEMAIL: mailto: ahsan.hadi@highgo.ca",
"msg_date": "Wed, 13 Nov 2019 00:50:23 +0500",
"msg_from": "Ahsan Hadi <ahsan.hadi@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Extension development"
},
{
"msg_contents": "I have done the hard code. But my question is related to the concept how these extension components working together as a system. For example what the use case diagram looks like for my extension and also the other architectural view of the extension should look like.\n\n\nRegards,\n____________________________________\nYonathan Misgan\nAssistant Lecturer, @ Debre Tabor University\nFaculty of Technology\nDepartment of Computer Science\nStudying MSc in Computer Science (in Data and Web Engineering)\n@ Addis Ababa University\nE-mail: yonamis@dtu.edu.et<mailto:yonamis@dtu.edu.et>\n yonathanmisgan.4@gmail.com<mailto:yonathanmisgan.4@gmail.com>\nTel: (+251)-911180185 (mob)\n\n________________________________\nFrom: Ahsan Hadi <ahsan.hadi@gmail.com>\nSent: Tuesday, November 12, 2019 10:50:23 PM\nTo: Yonatan Misgan <yonamis@dtu.edu.et>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Extension development\n\nHi Yonatan,\n\nYou can follow this blog for creating your own extension in PostgreSQL..\n\nhttps://www.highgo.ca/2019/10/01/a-guide-to-create-user-defined-extension-modules-to-postgres/\n\n-- Ahsan\n\nOn Tue, Nov 12, 2019 at 11:54 AM Yonatan Misgan <yonamis@dtu.edu.et<mailto:yonamis@dtu.edu.et>> wrote:\n\nI am developed my own PostgreSQL extension for learning purpose and it is working correctly but I want to know to which components of the database is my own extension components communicate. For example I have c code, make file sql script, and control file after compiling the make file to which components of the database are each of my extension components to communicate. 
Thanks for your response.\n\n\n\nRegards,\n\n____________________________________\n\nYonathan Misgan\n\nAssistant Lecturer, @ Debre Tabor University\n\nFaculty of Technology\n\nDepartment of Computer Science\n\nStudying MSc in Computer Science (in Data and Web Engineering)\n\n@ Addis Ababa University\n\nE-mail: yonamis@dtu.edu.et<mailto:yonamis@dtu.edu.et>\n\n yonathanmisgan.4@gmail.com<mailto:yonathanmisgan.4@gmail.com>\n\nTel: (+251)-911180185 (mob)\n\n\n\n\n--\nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca<http://www.highgo.ca/>\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nEMAIL: mailto: ahsan.hadi@highgo.ca",
"msg_date": "Wed, 13 Nov 2019 08:08:48 +0000",
"msg_from": "Yonatan Misgan <yonamis@dtu.edu.et>",
"msg_from_op": true,
"msg_subject": "RE: Extension development"
}
] |
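The four pieces Jeevan lists (Makefile, control file, SQL script, C source) fit together in a small skeleton like the one below. This is an illustrative sketch only: all names, such as `myext` and `add_one`, are placeholders invented here, not anything from this thread, and the control-file and SQL contents are shown as comments for compactness.

```makefile
# ---- Makefile: build against an installed server via PGXS ----
MODULES = myext                 # builds myext.so from myext.c
EXTENSION = myext               # name of the .control file
DATA = myext--1.0.sql           # script run by CREATE EXTENSION

PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)

# ---- myext.control: metadata read by CREATE EXTENSION ----
#   comment = 'toy extension for learning'
#   default_version = '1.0'
#   module_pathname = '$libdir/myext'
#   relocatable = true

# ---- myext--1.0.sql: SQL interface to the C function ----
#   CREATE FUNCTION add_one(integer) RETURNS integer
#       AS 'MODULE_PATHNAME', 'add_one'
#       LANGUAGE C STRICT;

# ---- myext.c: must contain PG_MODULE_MAGIC plus the
#      PG_FUNCTION_INFO_V1(add_one) implementation ----
```

With this layout, `make && make install` places the shared library and scripts where the server expects them, after which `CREATE EXTENSION myext;` wires the SQL objects to the C code.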
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16108\nLogged by: Haiying Tang\nEmail address: tanghy.fnst@cn.fujitsu.com\nPostgreSQL version: 12.0\nOperating system: Windows\nDescription: \n\nHello\r\n\r\nI found the following release notes in PG12 is not working properly at\nWindows.\r\n> •Add colorization to the output of command-line utilities\r\n\r\nFollowing the release note, I've set the the environment variable PG_COLOR\nto auto, then I run pg_dump command with an incorrect passwd.\r\nHowever, the command-line output is not colorized as the release notes\nsaid.\r\n\r\nBefore PG_COLOR=auto is set: pg_dump: error: connection to database\n\"tanghy.fnst\" failed: FATAL:\r\nAfter PG_COLOR=auto is set: \u001b[01mpg_dump: \u001b[0m\u001b[01;31merror: \u001b[0mconnection\nto database \"tanghy.fnst\" failed: FATAL\r\n\r\nI think the colorization to the output of command-line is not supported at\nWindows.\r\nMaybe function \"pg_logging_init\" at source \"src\\common\\logging.c\" should add\na platform check.\r\nBesides, the related release note of PG12 should add some description about\nit.\r\n\r\nBest Regards,\r\nTang",
"msg_date": "Tue, 12 Nov 2019 08:29:10 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #16108: Colorization to the output of command-line has unproperly\n behaviors at Windows platform"
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 9:30 PM PG Bug reporting form\n<noreply@postgresql.org> wrote:\n> The following bug has been logged on the website:\n>\n> Bug reference: 16108\n> Logged by: Haiying Tang\n> Email address: tanghy.fnst@cn.fujitsu.com\n> PostgreSQL version: 12.0\n> Operating system: Windows\n> Description:\n>\n> Hello\n>\n> I found the following release notes in PG12 is not working properly at\n> Windows.\n> > •Add colorization to the output of command-line utilities\n>\n> Following the release note, I've set the the environment variable PG_COLOR\n> to auto, then I run pg_dump command with an incorrect passwd.\n> However, the command-line output is not colorized as the release notes\n> said.\n>\n> Before PG_COLOR=auto is set: pg_dump: error: connection to database\n> \"tanghy.fnst\" failed: FATAL:\n> After PG_COLOR=auto is set: [01mpg_dump: [0m [01;31merror: [0mconnection\n> to database \"tanghy.fnst\" failed: FATAL\n>\n> I think the colorization to the output of command-line is not supported at\n> Windows.\n> Maybe function \"pg_logging_init\" at source \"src\\common\\logging.c\" should add\n> a platform check.\n> Besides, the related release note of PG12 should add some description about\n> it.\n\nBased on this:\n\nhttps://en.wikipedia.org/wiki/ANSI_escape_code#DOS_and_Windows\n\n... I wonder if it works if you use the new Windows Terminal, and I\nwonder if it would work on the older thing if we used the\nSetConsoleMode() flag it mentions.\n\n\n",
"msg_date": "Tue, 12 Nov 2019 21:39:17 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16108: Colorization to the output of command-line has\n unproperly behaviors at Windows platform"
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 9:39 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n>\n> ... I wonder if it works if you use the new Windows Terminal, and I\n> wonder if it would work on the older thing if we used the\n> SetConsoleMode() flag it mentions.\n>\n>\nIn order to make it work both things are needed, setting the console mode\nand a terminal that supports it. Please find attached a patch for so.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Tue, 12 Nov 2019 19:59:35 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16108: Colorization to the output of command-line has\n unproperly behaviors at Windows platform"
},
{
"msg_contents": ">In order to make it work both things are needed, setting the console mode and a terminal that supports it.\r\n\r\n\r\n\r\nYour patch worked fine on windows which supports VT100. But the bug still happened when set PG_COLOR=\"always\" at Windows Terminal that not support VT100. Please see the attached file “Test_result.png” for the NG result. (I used win7 for this test)\r\n\r\nTo fix the above bug, I made some change to your patch. The new one works fine on my win7(VT100 not support) and win10(VT100 support).\r\n\r\nAlso, in this new patch(v1), I added some doc change for Windows not support Colorization. Please find the attached patch for so.\r\n\r\n\r\n\r\nRegards,\r\nTang\r\n\r\n\r\nFrom: Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>\r\nSent: Wednesday, November 13, 2019 4:00 AM\r\nTo: Thomas Munro <thomas.munro@gmail.com>\r\nCc: PG Bug reporting form <noreply@postgresql.org>; PostgreSQL mailing lists <pgsql-bugs@lists.postgresql.org>; Tang, Haiying/唐 海英 <tanghy.fnst@cn.fujitsu.com>\r\nSubject: Re: BUG #16108: Colorization to the output of command-line has unproperly behaviors at Windows platform\r\n\r\n\r\nOn Tue, Nov 12, 2019 at 9:39 AM Thomas Munro <thomas.munro@gmail.com<mailto:thomas.munro@gmail.com>> wrote:\r\n\r\n... I wonder if it works if you use the new Windows Terminal, and I\r\nwonder if it would work on the older thing if we used the\r\nSetConsoleMode() flag it mentions.\r\n\r\nIn order to make it work both things are needed, setting the console mode and a terminal that supports it. Please find attached a patch for so.\r\n\r\nRegards,\r\n\r\nJuan José Santamaría Flecha",
"msg_date": "Fri, 15 Nov 2019 04:23:01 +0000",
"msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: BUG #16108: Colorization to the output of command-line has\n unproperly behaviors at Windows platform"
},
{
"msg_contents": "Thanks for testing. I am opening a new item in the next commitfest for this\ntopic.\n\nOn Fri, Nov 15, 2019 at 5:23 AM Tang, Haiying <tanghy.fnst@cn.fujitsu.com>\nwrote:\n\n> >In order to make it work both things are needed, setting the console mode\n> and a terminal that supports it.\n>\n>\n>\n> Your patch worked fine on windows which supports VT100. But the bug still\n> happened when set PG_COLOR=\"always\" at Windows Terminal that not support\n> VT100. Please see the attached file “Test_result.png” for the NG result. (I\n> used win7 for this test)\n>\n>\n> To fix the above bug, I made some change to your patch. The new one works\n> fine on my win7(VT100 not support) and win10(VT100 support).\n>\n\nMy understanding of the \"always\" logic is that it has to be enabled no\nmatter what, even if not supported in current output.\n\nAlso, in this new patch(v1), I added some doc change for Windows not\n> support Colorization. Please find the attached patch for so.\n>\n>\n>\nYou cannot change the release notes, if anything it will be added to 12.2\npatch notes. It should be added to the 21 (!) utilities that specify the\nPG_COLOR usage, but I am not so sure that adding a note stating this\nfeature requires Windows 10 >= 1511 update is really a Postgres business.\n\nPlease find attached a version that supports older Mingw versions and SDKs.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Fri, 15 Nov 2019 09:14:59 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16108: Colorization to the output of command-line has\n unproperly behaviors at Windows platform"
},
{
"msg_contents": "Hello everyone.\n\n> Please find attached a version that supports older Mingw versions and SDKs.\n\nI have checked the patch source code and it seems to be working. But a\nfew moments I want to mention:\n\nI think it is not good idea to mix the logic of detecting the fact of\nTTY with enabling of the VT100 mode. Yeah, it seems to be correct for\ncurrent case but a little confusing.\nMaybe is it better to detect terminal using *isatty* and later call\n*enable_vt_mode*?\n\nAlso, it seems like if GetConsoleMode returns\nENABLE_VIRTUAL_TERMINAL_PROCESSING flag already set - we could skip\nSetConsoleMode call (not a big deal of course).\n\nThanks,\nMichail.\n\n\n",
"msg_date": "Wed, 19 Feb 2020 01:39:39 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16108: Colorization to the output of command-line has\n unproperly behaviors at Windows platform"
},
{
"msg_contents": "P.S.\n\nAlso, should we enable vt100 mode in case of PG_COLOR=always? I think yes.",
"msg_date": "Wed, 19 Feb 2020 02:01:38 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16108: Colorization to the output of command-line has\n unproperly behaviors at Windows platform"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 11:39 PM Michail Nikolaev <\nmichail.nikolaev@gmail.com> wrote:\n\n>\n> I have checked the patch source code and it seems to be working. But a\n> few moments I want to mention:\n>\n\nThanks for looking into this.\n\n\n> I think it is not good idea to mix the logic of detecting the fact of\n> TTY with enabling of the VT100 mode. Yeah, it seems to be correct for\n> current case but a little confusing.\n> Maybe is it better to detect terminal using *isatty* and later call\n> *enable_vt_mode*?\n>\n\nMost of what enable_vt_mode() does is actually detecting the terminal, but\nI can see why that is confusing without better comments.\n\n\n> Also, it seems like if GetConsoleMode returns\n> ENABLE_VIRTUAL_TERMINAL_PROCESSING flag already set - we could skip\n> SetConsoleMode call (not a big deal of course).\n>\n\nAgreed.\n\nThe patch about making color by default [1] introduces the\nfunction terminal_supports_color(), that I think is relevant for this\nissue. Please find attached a new version based on that idea.\n\nAlso, adding Peter to weight on this approach.\n\n[1] https://commitfest.postgresql.org/27/2406/\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Wed, 19 Feb 2020 17:16:32 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16108: Colorization to the output of command-line has\n unproperly behaviors at Windows platform"
},
{
"msg_contents": "Hello.\n\n> The patch about making color by default [1] introduces the function terminal_supports_color(), that I think is relevant for this issue. Please find attached a new version based on that idea.\n\nI am not sure it is good idea to mix both patches because it adds some\nconfusion and makes it harder to merge each.\nMaybe is it better to update current patch the way to reuse some\nfunction later in [1]?\n\nAlso, regarding comment\n> It is disabled by default, so it must be enabled to use color outpout.\n\nIt is not true for new terminal, for example. Maybe it is better to\nrephrase it to something like: \"Check if TV100 support if enabled and\nattempt to enable if not\".\n\n[1] https://www.postgresql.org/message-id/flat/bbdcce43-bd2e-5599-641b-9b44b9e0add4@2ndquadrant.com\n\n\n",
"msg_date": "Sat, 22 Feb 2020 23:08:45 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16108: Colorization to the output of command-line has\n unproperly behaviors at Windows platform"
},
{
"msg_contents": "On Sat, Feb 22, 2020 at 9:09 PM Michail Nikolaev <michail.nikolaev@gmail.com>\nwrote:\n\n>\n> I am not sure it is good idea to mix both patches because it adds some\n> confusion and makes it harder to merge each.\n> Maybe is it better to update current patch the way to reuse some\n> function later in [1]?\n>\n\nThe patch was originaly reported for Windows, but looking into Peter's\npatch, I think this issue affects other systems unless we use stricter\nlogic to detect a colorable terminal when using the \"auto\" option.\nProbably, the way to go is leaving this patch as WIN32 only and thinking\nabout a future patch.\n\n\n> Also, regarding comment\n> > It is disabled by default, so it must be enabled to use color outpout.\n>\n> It is not true for new terminal, for example. Maybe it is better to\n> rephrase it to something like: \"Check if TV100 support if enabled and\n> attempt to enable if not\".\n>\n\nThe logic I have seen on new terminals is that VT100 is supported but\ndisabled. Would you find clearer? \"Attempt to enable VT100 sequence\nprocessing. If it is not possible consider it as unsupported.\"\n\nPlease find attached a patch addressing these comments.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 24 Feb 2020 18:56:05 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16108: Colorization to the output of command-line has\n unproperly behaviors at Windows platform"
},
{
"msg_contents": "Hello.\n\nLooks totally fine to me now.\n\nSo, I need to mark it as \"ready to commiter\", right?\n\n\n",
"msg_date": "Wed, 26 Feb 2020 13:48:09 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16108: Colorization to the output of command-line has\n unproperly behaviors at Windows platform"
},
{
"msg_contents": "On Wed, Feb 26, 2020 at 11:48 AM Michail Nikolaev <\nmichail.nikolaev@gmail.com> wrote:\n\n>\n> Looks totally fine to me now.\n>\n> So, I need to mark it as \"ready to commiter\", right?\n>\n\nYes, that's right. Thanks for reviewing it.\n\nRegards",
"msg_date": "Wed, 26 Feb 2020 11:58:50 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16108: Colorization to the output of command-line has\n unproperly behaviors at Windows platform"
},
{
"msg_contents": "On Mon, Feb 24, 2020 at 06:56:05PM +0100, Juan José Santamaría Flecha wrote:\n> The patch was originaly reported for Windows, but looking into Peter's\n> patch, I think this issue affects other systems unless we use stricter\n> logic to detect a colorable terminal when using the \"auto\" option.\n> Probably, the way to go is leaving this patch as WIN32 only and thinking\n> about a future patch.\n\nIt is better to not mix issues. You can actually bump on similar\ncoloring issues depending on your configuration, with OSX or even\nLinux.\n\n> The logic I have seen on new terminals is that VT100 is supported but\n> disabled. Would you find clearer? \"Attempt to enable VT100 sequence\n> processing. If it is not possible consider it as unsupported.\"\n> \n> Please find attached a patch addressing these comments.\n\nI was reading the thread for the first time, and got surprised first\nwith the argument about \"always\" which gives the possibility to print\nincorrect characters even if the environment does not allow coloring.\nHowever, after looking at logging.c, the answer is pretty clear what\nalways is about as it enforces colorization, so this patch looks\ncorrect to me.\n\nOn top of that, and that's a separate issue, I have noticed that we\nhave exactly zero documentation about PG_COLORS (the plural flavor,\nnot the singular), but we have code for it in common/logging.c..\n\nAnyway, committed down to 12, after tweaking a few things.\n--\nMichael",
"msg_date": "Mon, 2 Mar 2020 15:48:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16108: Colorization to the output of command-line has\n unproperly behaviors at Windows platform"
},
{
"msg_contents": "On Mon, Mar 2, 2020 at 7:48 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> On top of that, and that's a separate issue, I have noticed that we\n> have exactly zero documentation about PG_COLORS (the plural flavor,\n> not the singular), but we have code for it in common/logging.c..\n>\n\n Yeah, there is nothing about it prior to [1]. So, this conversation will\nhave to be carried over there.\n\n\n> Anyway, committed down to 12, after tweaking a few things.\n>\n\nThank you.\n\n[1]\nhttps://www.postgresql.org/message-id/bbdcce43-bd2e-5599-641b-9b44b9e0add4@2ndquadrant.com\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 2 Mar 2020 10:01:42 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16108: Colorization to the output of command-line has\n unproperly behaviors at Windows platform"
}
] |
[
{
"msg_contents": "Hi,\nExecClearTuple don't check por NULL pointer arg and according\nTupIsNull slot can be NULL.\n\nCan anyone check this buf fix?\n\n--- \\dll\\postgresql-12.0\\a\\backend\\executor\\nodeUnique.c\tMon Sep 30 17:06:55 2019\n+++ nodeUnique.c\tTue Nov 12 09:54:34 2019\n@@ -74,7 +74,8 @@\n \t\tif (TupIsNull(slot))\n \t\t{\n \t\t\t/* end of subplan, so we're done */\n-\t\t\tExecClearTuple(resultTupleSlot);\n+\t\t if (!TupIsNull(resultTupleSlot))\n+\t\t\t ExecClearTuple(resultTupleSlot);\n \t\t\treturn NULL;\n \t\t}",
"msg_date": "Tue, 12 Nov 2019 13:07:04 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH][BUG_FIX] Potential null pointer dereferencing."
},
{
"msg_contents": "> On 12 Nov 2019, at 14:07, Ranier Vilela <ranier_gyn@hotmail.com> wrote:\n\n> ExecClearTuple don't check por NULL pointer arg and according\n> TupIsNull slot can be NULL.\n\nI assume you are referring to the TupIsNull(resultTupleSlot) check a few lines\ndown in the same loop? If resultTupleSlot was indeed NULL and not empty, the\nsubsequent call to ExecCopySlot would be a NULL pointer dereference too. I\nmight be missing something obvious, but in which case can resultTupleSlot be\nNULL when calling ExecUnique?\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 12 Nov 2019 14:43:35 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH][BUG_FIX] Potential null pointer dereferencing."
},
{
"msg_contents": "Hi,\nThe condition is :\n74. if (TupIsNull(slot)) is true\n85. if (TupIsNull(resultTupleSlot)) is true too.\n\nIf resultTupleSlot is not can be NULL, why test if (TupIsNull(resultTupleSlot))?\nOccurring these two conditions ExecClearTuple is called, but, don't check by NULL arg.\n\nThere are at least 2 more possible cases, envolving ExecClearTuple:\nnodeFunctionscan.c and nodeWindowAgg.c\n\nBest regards,\nRanier Vilela\n\n________________________________________\nDe: Daniel Gustafsson <daniel@yesql.se>\nEnviado: terça-feira, 12 de novembro de 2019 13:43\nPara: Ranier Vilela\nCc: PostgreSQL Hackers\nAssunto: Re: [PATCH][BUG_FIX] Potential null pointer dereferencing.\n\n> On 12 Nov 2019, at 14:07, Ranier Vilela <ranier_gyn@hotmail.com> wrote:\n\n> ExecClearTuple don't check por NULL pointer arg and according\n> TupIsNull slot can be NULL.\n\nI assume you are referring to the TupIsNull(resultTupleSlot) check a few lines\ndown in the same loop? If resultTupleSlot was indeed NULL and not empty, the\nsubsequent call to ExecCopySlot would be a NULL pointer dereference too. I\nmight be missing something obvious, but in which case can resultTupleSlot be\nNULL when calling ExecUnique?\n\ncheers ./daniel\n\n\n",
"msg_date": "Tue, 12 Nov 2019 14:03:53 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH][BUG_FIX] Potential null pointer dereferencing."
},
{
"msg_contents": "At Tue, 12 Nov 2019 14:03:53 +0000, Ranier Vilela <ranier_gyn@hotmail.com> wrote in \n> Hi,\n> The condition is :\n> 74. if (TupIsNull(slot)) is true\n> 85. if (TupIsNull(resultTupleSlot)) is true too.\n\nSee the definition of TupIsNull. It checks the tupleslot at a valid\npointer is EMPTY as well. And node->ps.ps_ResultTupleSlot cannot be\nNULL there since ExecInitUnique initializes it. The sequence is\nobvious so even Assert is not needed there, I think.\n\n> If resultTupleSlot is not can be NULL, why test if (TupIsNull(resultTupleSlot))?\n\nIt checks if there is no stored \"previous\" tuple, which is used to\ndetect the next value. If it is EMPTY (not NULL), it is the first\ntuple from the outerPlan as described in the comment just above.\n\n> Occurring these two conditions ExecClearTuple is called, but, don't check by NULL arg.\n> \n> There are at least 2 more possible cases, envolving ExecClearTuple:\n> nodeFunctionscan.c and nodeWindowAgg.c\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 13 Nov 2019 10:43:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH][BUG_FIX] Potential null pointer dereferencing."
}
] |
[
{
"msg_contents": "Hi,\nVar TargetEntry *tle;\nHave several paths where can it fail.\n\nCan anyone check this bug fix?\n\n--- \\dll\\postgresql-12.0\\a\\backend\\parser\\parse_expr.c\tMon Sep 30 17:06:55 2019\n+++ parse_expr.c\tTue Nov 12 12:43:07 2019\n@@ -349,6 +349,7 @@\n \t\t\t\t\t errmsg(\"DEFAULT is not allowed in this context\"),\n \t\t\t\t\t parser_errposition(pstate,\n \t\t\t\t\t\t\t\t\t\t((SetToDefault *) expr)->location)));\n+\t\t\tresult = NULL;\t\t/* keep compiler quiet */\n \t\t\tbreak;\n \n \t\t\t/*\n@@ -1637,11 +1638,13 @@\n \t\t\tpstate->p_multiassign_exprs = lappend(pstate->p_multiassign_exprs,\n \t\t\t\t\t\t\t\t\t\t\t\t tle);\n \t\t}\n-\t\telse\n+\t\telse {\n \t\t\tereport(ERROR,\n \t\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n \t\t\t\t\t errmsg(\"source for a multiple-column UPDATE item must be a sub-SELECT or ROW() expression\"),\n \t\t\t\t\t parser_errposition(pstate, exprLocation(maref->source))));\n+ return NULL;\n+ }\n \t}\n \telse\n \t{\n@@ -1653,6 +1656,10 @@\n \t\tAssert(pstate->p_multiassign_exprs != NIL);\n \t\ttle = (TargetEntry *) llast(pstate->p_multiassign_exprs);\n \t}\n+ if (tle == NULL) {\n+\t elog(ERROR, \"unexpected expr type in multiassign list\");\n+\t return NULL;\t\t\t\t/* keep compiler quiet */\n+ }\n \n \t/*\n \t * Emit the appropriate output expression for the current column",
"msg_date": "Tue, 12 Nov 2019 15:53:07 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH][BUG FIX] Potential uninitialized vars used."
}
] |
[
{
"msg_contents": "Hi,\n\nAt the bottom of\nhttps://www.postgresql.org/message-id/20191112192716.emrqs2afuefunw6v%40alap3.anarazel.de\n\nI mused about the somewhat odd coding pattern at the end of WalSndShutdown():\n\n/*\n * Handle a client's connection abort in an orderly manner.\n */\nstatic void\nWalSndShutdown(void)\n{\n\t/*\n\t * Reset whereToSendOutput to prevent ereport from attempting to send any\n\t * more messages to the standby.\n\t */\n\tif (whereToSendOutput == DestRemote)\n\t\twhereToSendOutput = DestNone;\n\n\tproc_exit(0);\n\tabort();\t\t\t\t\t/* keep the compiler quiet */\n}\n\nnamely that we prox_exit() and then abort(). This case seems to be\npurely historical baggage (from when this was inside other functiosn,\nbefore being centralized), so we can likely just remove the abort().\n\nBut even back then, one would have hoped that proc_exit() being\nannotated with pg_attribute_noreturn() should have told the compiler\nenough.\n\nBut it turns out, we don't actually implement that for MSVC. Which does\nexplain at least some cases where we had to add \"keep compiler quiet\"\ntype code specifically for MSVC.\n\nAs it turns out msvc has it's own annotation for functions that don't\nreturn, __declspec(noreturn). But it unfortunately needs to be placed\nbefore where we, so far, placed pg_attribute_noreturn(), namely after\nthe function name / parameters. 
Instead it needs to be before the\nfunction name.\n\nBut as it turns out GCC et al's __attribute__((noreturn)) actually can\nalso be placed there, and seemingly for a long time:\nhttps://godbolt.org/z/8v5aFS\n\nSo perhaps we ought to rename pg_attribute_noreturn() to pg_noreturn(),\nadd a __declspec(noreturn) version, and move the existing uses to it.\n\nI'm inclined to also drop the parentheses at the same time (i.e\npg_noreturn rather than pg_noreturn()) - it seems easier to mentally\nparse the code that way.\n\nI actually find the placement closer to the return type easier to\nunderstand, so I'd find this mildly beneficial even without the msvc\nangle.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Nov 2019 12:00:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "make pg_attribute_noreturn() work for msvc?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> So perhaps we ought to rename pg_attribute_noreturn() to pg_noreturn(),\n> add a __declspec(noreturn) version, and move the existing uses to it.\n\n> I'm inclined to also drop the parentheses at the same time (i.e\n> pg_noreturn rather than pg_noreturn()) - it seems easier to mentally\n> parse the code that way.\n\nI guess my big question about that is whether pgindent will make a\nhash of it.\n\nOne idea is to merge it with the \"void\" result type that such\na function would presumably have, along the lines of\n\n#define pg_noreturn\tvoid __declspec(noreturn)\n...\nextern pg_noreturn proc_exit(int);\n\nand if necessary, we could strongarm pgindent into believing\nthat pg_noreturn is a typedef.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Nov 2019 15:58:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make pg_attribute_noreturn() work for msvc?"
},
{
"msg_contents": "On 2019-11-12 15:58:07 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > So perhaps we ought to rename pg_attribute_noreturn() to pg_noreturn(),\n> > add a __declspec(noreturn) version, and move the existing uses to it.\n> \n> > I'm inclined to also drop the parentheses at the same time (i.e\n> > pg_noreturn rather than pg_noreturn()) - it seems easier to mentally\n> > parse the code that way.\n> \n> I guess my big question about that is whether pgindent will make a\n> hash of it.\n\nIf one writes 'pg_noreturn void', rather than 'void pg_noreturn', then\nthere's only one place where pgindent changes something in a somewhat\nweird way. For tablesync.c, it indents the pg_noreturn for\nfinish_sync_worker(). But only due to being on a separate newline, which\nseems unnecessary…\n\nWith 'void pg_noreturn', a few prototypes in headers get indented more\nthan pretty, e.g. in pg_upgrade.h it turns\n\nvoid pg_noreturn pg_fatal(const char *fmt,...) pg_attribute_printf(1, 2);\ninto\nvoid\t\tpg_noreturn pg_fatal(const char *fmt,...) pg_attribute_printf(1, 2);\n\n\nI'm a bit confused as to why pg_upgrade.h doesn't use 'extern' for\nfunction declarations? Not that it's really related, except for the\n'extern' otherwise hiding the effect of pgindent not liking 'void\npg_noreturn'…\n\n\nI don't see a reason not to go for 'pg_noreturn void'?\n\n\n> One idea is to merge it with the \"void\" result type that such\n> a function would presumably have, along the lines of\n> \n> #define pg_noreturn\tvoid __declspec(noreturn)\n> ...\n> extern pg_noreturn proc_exit(int);\n\n> and if necessary, we could strongarm pgindent into believing\n> that pg_noreturn is a typedef.\n\nYea, that'd be an alternative. But since not necessary, I'd not go\nthere?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 12 Nov 2019 14:11:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: make pg_attribute_noreturn() work for msvc?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-11-12 15:58:07 -0500, Tom Lane wrote:\n>> I guess my big question about that is whether pgindent will make a\n>> hash of it.\n\n> If one writes 'pg_noreturn void', rather than 'void pg_noreturn', then\n> there's only one place where pgindent changes something in a somewhat\n> weird way. For tablesync.c, it indents the pg_noreturn for\n> finish_sync_worker(). But only due to being on a separate newline, which\n> seems unnecessary…\n\nI think that it might be like that because some previous version of\npgindent changed it to that. That's probably why we never adopted\nthis style generally in the first place.\n\n> With 'void pg_noreturn', a few prototypes in headers get indented more\n> than pretty, e.g. in pg_upgrade.h it turns\n\n> void pg_noreturn pg_fatal(const char *fmt,...) pg_attribute_printf(1, 2);\n> into\n> void\t\tpg_noreturn pg_fatal(const char *fmt,...) pg_attribute_printf(1, 2);\n\n> I'm a bit confused as to why pg_upgrade.h doesn't use 'extern' for\n> function declarations? Not that it's really related, except for the\n> 'extern' otherwise hiding the effect of pgindent not liking 'void\n> pg_noreturn'…\n\nThere are various headers where people have tended to not use \"extern\".\nI always disliked that, thinking it was not per project style, but\nnever bothered to force the issue. If we went around and inserted\nextern in these places, it wouldn't bother me any.\n\n> I don't see a reason not to go for 'pg_noreturn void'?\n\nThat seems kind of ugly from here. Not sure why, but at least to\nmy mind that's a surprising ordering.\n\n>> One idea is to merge it with the \"void\" result type that such\n>> a function would presumably have, along the lines of\n>> #define pg_noreturn\tvoid __declspec(noreturn)\n\n> Yea, that'd be an alternative. But since not necessary, I'd not go\n> there?\n\nI kind of liked that idea, too bad you don't. 
One argument for it\nis that then there'd be exactly one right way to do it, not two.\nAlso, if we find out that there's some compiler that's pickier\nabout where to place the annotation, we'd have a central place\nto handle it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Nov 2019 17:22:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make pg_attribute_noreturn() work for msvc?"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-12 17:22:05 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-11-12 15:58:07 -0500, Tom Lane wrote:\n> > I'm a bit confused as to why pg_upgrade.h doesn't use 'extern' for\n> > function declarations? Not that it's really related, except for the\n> > 'extern' otherwise hiding the effect of pgindent not liking 'void\n> > pg_noreturn'…\n> \n> There are various headers where people have tended to not use \"extern\".\n> I always disliked that, thinking it was not per project style, but\n> never bothered to force the issue. If we went around and inserted\n> extern in these places, it wouldn't bother me any.\n\n\n\n> > I don't see a reason not to go for 'pg_noreturn void'?\n> \n> That seems kind of ugly from here. Not sure why, but at least to\n> my mind that's a surprising ordering.\n\nOh, to me it seemed a quite reasonable order. It think it feels that way\nto me because we put properties like 'static', 'extern', 'inline' etc\nalso before the return type (and it's similar for variable declarations\ntoo).\n\nIt's maybe also worthwhile to note that emacs parses 'pg_noreturn void'\ncorrectly, but gets confused by 'void pg_noreturn'. It's just syntax\nhighlighting though, so whatever.\n\n\n> >> One idea is to merge it with the \"void\" result type that such\n> >> a function would presumably have, along the lines of\n> >> #define pg_noreturn\tvoid __declspec(noreturn)\n> \n> > Yea, that'd be an alternative. But since not necessary, I'd not go\n> > there?\n> \n> I kind of liked that idea, too bad you don't.\n\nI don't actively dislike it. It just seemed a bit more magic than\nnecessary. One need not understand what pg_noreturn does - not that it's\nhard to infer from the name - to know the return type of the function.\n\n\n> One argument for it is that then there'd be exactly one right way to\n> do it, not two. 
Also, if we find out that there's some compiler\n> that's pickier about where to place the annotation, we'd have a\n> central place to handle it.\n\nThe former seems like a good argument to me. I'm not quite sure I think\nthe second is likely.\n\n\nIt's worthwhile to note - I forgot this - that noreturn actually has\nbeen standardized in C11 and C++11. For C11 the keyword is _Noreturn,\nwith a convenience macro 'noreturn' defined in stdnoreturn.h.\n\nFor C++11, the syntax is (please don't get an aneurysm...):\n[[ noreturn ]] void funcname(params)...\n(yes, the [[]] are actually part of the syntax, not some BNF like thing)\n\nI *think* the standard prescribes _Noreturn to be before the return type\n(it's defined in the same rule as inline), but I have some difficulty\nparsing the standard language. Gcc at least accepts inline only before\nthe return type, but _Noreturn in both places.\n\nCertainly all the standard examples place it before the type.\n\n\nWhile it looks tempting to just use 'noreturn', and backfill it if the\ncurrent environment doesn't support it, I think that's a bit too\ndangerous, because it will tend to break other code like\n__attribute__((noreturn)) and _declspec(noreturn). As there's plenty\nother software using either or both of these, I don't think it's worth\ngoing there.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Nov 2019 15:08:35 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: make pg_attribute_noreturn() work for msvc?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It's worthwhile to note - I forgot this - that noreturn actually has\n> been standardized in C11 and C++11. For C11 the keyword is _Noreturn,\n> with a convenience macro 'noreturn' defined in stdnoreturn.h.\n\n> For C++11, the syntax is (please don't get an aneurysm...):\n> [[ noreturn ]] void funcname(params)...\n> (yes, the [[]] are actually part of the syntax, not some BNF like thing)\n\nEgad. I'd *want* to hide that under a macro :-(\n\n> While it looks tempting to just use 'noreturn', and backfill it if the\n> current environment doesn't support it, I think that's a bit too\n> dangerous, because it will tend to break other code like\n> __attribute__((noreturn)) and _declspec(noreturn). As there's plenty\n> other software using either or both of these, I don't think it's worth\n> going there.\n\nAgreed, defining noreturn is too dangerous, it'll have to be\npg_noreturn. Or maybe use _Noreturn? But that feels ugly too.\n\nAnyway, I still like the idea of merging the void keyword in with\nthat.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Nov 2019 18:15:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make pg_attribute_noreturn() work for msvc?"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-12 18:15:28 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > It's worthwhile to note - I forgot this - that noreturn actually has\n> > been standardized in C11 and C++11. For C11 the keyword is _Noreturn,\n> > with a convenience macro 'noreturn' defined in stdnoreturn.h.\n> \n> > For C++11, the syntax is (please don't get an aneurysm...):\n> > [[ noreturn ]] void funcname(params)...\n> > (yes, the [[]] are actually part of the syntax, not some BNF like thing)\n> \n> Egad. I'd *want* to hide that under a macro :-(\n\nYea, it's quite ugly.\n\nI think the only saving grace is that C++ made that the generic syntax\nfor various annotations / attributes. Everywhere, not just for function\nproperties. So there's [[noreturn]], [[fallthrough]], [[nodiscard]],\n[[maybe_unused]] etc, and that there is explicit namespacing for vendor\nextensions by using [[vendorname::attname]], e.g. the actually existing\n[[gnu::always_inline]].\n\nThere's probably not that many other forms of syntax one can add to all\nthe various places, without running into syntax limitations, or various\nvendor extensions...\n\nBut still.\n\n\n> > While it looks tempting to just use 'noreturn', and backfill it if the\n> > current environment doesn't support it, I think that's a bit too\n> > dangerous, because it will tend to break other code like\n> > __attribute__((noreturn)) and _declspec(noreturn). As there's plenty\n> > other software using either or both of these, I don't think it's worth\n> > going there.\n> \n> Agreed, defining noreturn is too dangerous, it'll have to be\n> pg_noreturn. Or maybe use _Noreturn? But that feels ugly too.\n\nYea, I don't like that all that much. We'd have to define it in C++\nmode, and it's in the explicit standard reserved namespace...\n\n\n> Anyway, I still like the idea of merging the void keyword in with\n> that.\n\nHm. Any other opinions?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Nov 2019 15:26:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: make pg_attribute_noreturn() work for msvc?"
},
{
"msg_contents": "On 2019-Nov-12, Andres Freund wrote:\n\n> > Anyway, I still like the idea of merging the void keyword in with\n> > that.\n> \n> Hm. Any other opinions?\n\nAlthough it feels very strange to me at first glance, one only has to\nlearn the trick once. My initial inclination was not to do it, but I'm\nkinda +0.1 after thinking some more about it.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 13 Nov 2019 13:27:54 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: make pg_attribute_noreturn() work for msvc?"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 11:28 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> On 2019-Nov-12, Andres Freund wrote:\n> > > Anyway, I still like the idea of merging the void keyword in with\n> > > that.\n> >\n> > Hm. Any other opinions?\n>\n> Although it feels very strange to me at first glance, one only has to\n> learn the trick once. My initial inclination was not to do it, but I'm\n> kinda +0.1 after thinking some more about it.\n\nI don't care much about this either way, but I think I might be\nslightly more inclined to keep them separate. If we went the\ndirection of combining them, it might be clearer if the magic word\nincluded \"void\" someplace inside of it, like:\n\nextern void_noreturn thunk(void);\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 15 Nov 2019 08:36:19 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: make pg_attribute_noreturn() work for msvc?"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16109\nLogged by: Mukesh Chhatani\nEmail address: chhatani.mukesh@gmail.com\nPostgreSQL version: 10.10\nOperating system: AWS RDS\nDescription: \n\nHello Team,\r\n\r\nI am experiencing weird issue around planning time for the queries across\ncouple of environments below is the sample of the execution plan\r\n\r\nFast Execution Plan\r\ntransformations=> explain (analyze, buffers)\r\nSELECT x2.x3, x2.x4, x2.x5, x2.x6, x2.x7, x2.x8, x2.x9, x2.x10, x2.x11,\nx2.x12, x2.x13, x2.x14, x2.x15, x2.x16, x2.x17, x2.x18, x2.x19, x2.x20,\nx2.x21, x2.x22, x2.x23, x2.x24, x2.x25, x2.x26, x2.x27, x2.x28, x2.x29,\nx2.x30, x2.x31, x2.x32, x2.x33, x2.x34, x35. \"provider_id\", x35.\n\"provider_phone_id\", x35. \"provider_id\", x35. \"address_id\", x35.\n\"prod_code\", x35. \"phone_number\", x35. \"phone_type\", x36. \"provider_id\",\nx36. \"provider_id\", x36. \"address_id\", x36. \"language_code\", x36.\n\"language_used_by\" FROM ( SELECT x37.x38 AS x14, x37.x39 AS x6, x37.x40 AS\nx26, x37.x41 AS x9, x37.x42 AS x20, x37.x43 AS x16, x37.x44 AS x8, x37.x45\nAS x19, x37.x46 AS x3, x37.x47 AS x13, x37.x48 AS x12, x37.x49 AS x18,\nx37.x50 AS x17, x37.x51 AS x11, x37.x52 AS x22, x37.x53 AS x21, x37.x54 AS\nx10, x37.x55 AS x5, x37.x56 AS x4, x37.x57 AS x25, x37.x58 AS x7, x37.x59 AS\nx15, x37.x60 AS x24, x37.x61 AS x23, ( CASE WHEN (x62. \"attribute_value\" IS\nNULL) THEN NULL ELSE 1 END) AS x27, x62. \"paid\" AS x28, x62.\n\"attribute_value\" AS x34, x62. \"attribute_id\" AS x33, x62. \"provider_id\" AS\nx29, x62. \"attribute_group_id\" AS x32, x62. 
\"parent_paid\" AS x31, x62.\n\"address_id\" AS x30 FROM ( SELECT \"provider_id\" AS x46, \"zip\" AS x38,\n\"first_name\" AS x39, \"provider_name_id\" AS x40, \"degree\" AS x41,\n\"preferred_flag\" AS x42, \"county\" AS x43, \"suffix\" AS x44, \"individual_id\"\nAS x45, \"state\" AS x47, \"city\" AS x48, \"latitude\" AS x49, \"longitude\" AS\nx50, \"address\" AS x51, \"exclusion_type_id\" AS x52, \"quality_score\" AS x53,\n\"gender\" AS x54, \"last_name\" AS x55, \"address_id\" AS x56, \"hi_q_hospital_id\"\nAS x57, \"middle_name\" AS x58, \"zip4\" AS x59, \"handicap_accessible\" AS x60,\n\"sour_address\" AS x61 FROM \"provider\" WHERE \"provider_id\" =\n'03563735-3798-441a-aea6-4e561ea347f7') x37 LEFT OUTER JOIN\n\"provider_attribute\" x62 ON (x37.x46 = x62. \"provider_id\") AND (x37.x56 =\nx62. \"address_id\")) x2 LEFT OUTER JOIN \"provider_phone\" x35 ON (x2.x3 = x35.\n\"provider_id\") AND (x2.x4 = x35. \"address_id\") LEFT OUTER JOIN\n\"provider_language\" x36 ON (x2.x3 = x36. \"provider_id\") AND (x2.x4 = x36.\n\"address_id\");\r\n \n QUERY PLAN \n \r\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n Merge Left Join (cost=15.87..16.25 rows=13 width=920) (actual\ntime=0.021..0.022 rows=0 loops=1)\r\n Merge Cond: ((provider.address_id)::text = (x36.address_id)::text)\r\n Join Filter: ((provider.provider_id)::text = (x36.provider_id)::text)\r\n Buffers: shared hit=3\r\n -> Merge Left Join (cost=12.37..12.69 rows=13 width=754) (actual\ntime=0.021..0.021 rows=0 loops=1)\r\n Merge Cond: ((provider.address_id)::text =\n(x35.address_id)::text)\r\n Join Filter: ((provider.provider_id)::text =\n(x35.provider_id)::text)\r\n Buffers: shared hit=3\r\n -> Merge Left Join (cost=8.38..8.59 rows=13 width=584) (actual\ntime=0.021..0.021 rows=0 loops=1)\r\n Merge Cond: ((provider.address_id)::text =\n(x62.address_id)::text)\r\n Join Filter: 
((provider.provider_id)::text =\n(x62.provider_id)::text)\r\n Buffers: shared hit=3\r\n -> Sort (cost=3.89..3.93 rows=13 width=387) (actual\ntime=0.020..0.021 rows=0 loops=1)\r\n Sort Key: provider.address_id\r\n Sort Method: quicksort Memory: 25kB\r\n Buffers: shared hit=3\r\n -> Index Scan using provider_provider_id_idx on\nprovider (cost=0.42..3.65 rows=13 width=387) (actual time=0.017..0.017\nrows=0 loops=1)\r\n Index Cond: ((provider_id)::text =\n'03563735-3798-441a-aea6-4e561ea347f7'::text)\r\n Buffers: shared hit=3\r\n -> Sort (cost=4.49..4.56 rows=26 width=197) (never\nexecuted)\r\n Sort Key: x62.address_id\r\n -> Append (cost=0.42..3.88 rows=26 width=197) (never\nexecuted)\r\n -> Index Scan using\nprovider_attribute_sub_0_provider_id_idx on provider_attribute_sub_0 x62 \n(cost=0.42..3.88 rows=26 width=197) (never executed)\r\n Index Cond: ((provider_id)::text =\n'03563735-3798-441a-aea6-4e561ea347f7'::text)\r\n -> Sort (cost=3.98..4.02 rows=15 width=170) (never executed)\r\n Sort Key: x35.address_id\r\n -> Index Scan using provider_phone_provider_id_idx on\nprovider_phone x35 (cost=0.43..3.69 rows=15 width=170) (never executed)\r\n Index Cond: ((provider_id)::text =\n'03563735-3798-441a-aea6-4e561ea347f7'::text)\r\n -> Sort (cost=3.50..3.51 rows=3 width=88) (never executed)\r\n Sort Key: x36.address_id\r\n -> Index Scan using provider_language_provider_id_idx on\nprovider_language x36 (cost=0.42..3.47 rows=3 width=88) (never executed)\r\n Index Cond: ((provider_id)::text =\n'03563735-3798-441a-aea6-4e561ea347f7'::text)\r\n Planning time: 7.416 ms\r\n Execution time: 0.096 ms\r\n(34 rows)\r\n\r\n\r\nSlow Execution Plan\r\ntransformations_uhc_medicaid=> explain (analyze, buffers)\r\nSELECT x2.x3, x2.x4, x2.x5, x2.x6, x2.x7, x2.x8, x2.x9, x2.x10, x2.x11,\nx2.x12, x2.x13, x2.x14, x2.x15, x2.x16, x2.x17, x2.x18, x2.x19, x2.x20,\nx2.x21, x2.x22, x2.x23, x2.x24, x2.x25, x2.x26, x2.x27, x2.x28, x2.x29,\nx2.x30, x2.x31, x2.x32, x2.x33, x2.x34, x35. 
\"provider_id\", x35.\n\"provider_phone_id\", x35. \"provider_id\", x35. \"address_id\", x35.\n\"prod_code\", x35. \"phone_number\", x35. \"phone_type\", x36. \"provider_id\",\nx36. \"provider_id\", x36. \"address_id\", x36. \"language_code\", x36.\n\"language_used_by\" FROM ( SELECT x37.x38 AS x14, x37.x39 AS x6, x37.x40 AS\nx26, x37.x41 AS x9, x37.x42 AS x20, x37.x43 AS x16, x37.x44 AS x8, x37.x45\nAS x19, x37.x46 AS x3, x37.x47 AS x13, x37.x48 AS x12, x37.x49 AS x18,\nx37.x50 AS x17, x37.x51 AS x11, x37.x52 AS x22, x37.x53 AS x21, x37.x54 AS\nx10, x37.x55 AS x5, x37.x56 AS x4, x37.x57 AS x25, x37.x58 AS x7, x37.x59 AS\nx15, x37.x60 AS x24, x37.x61 AS x23, ( CASE WHEN (x62. \"attribute_value\" IS\nNULL) THEN NULL ELSE 1 END) AS x27, x62. \"paid\" AS x28, x62.\n\"attribute_value\" AS x34, x62. \"attribute_id\" AS x33, x62. \"provider_id\" AS\nx29, x62. \"attribute_group_id\" AS x32, x62. \"parent_paid\" AS x31, x62.\n\"address_id\" AS x30 FROM ( SELECT \"provider_id\" AS x46, \"zip\" AS x38,\n\"first_name\" AS x39, \"provider_name_id\" AS x40, \"degree\" AS x41,\n\"preferred_flag\" AS x42, \"county\" AS x43, \"suffix\" AS x44, \"individual_id\"\nAS x45, \"state\" AS x47, \"city\" AS x48, \"latitude\" AS x49, \"longitude\" AS\nx50, \"address\" AS x51, \"exclusion_type_id\" AS x52, \"quality_score\" AS x53,\n\"gender\" AS x54, \"last_name\" AS x55, \"address_id\" AS x56, \"hi_q_hospital_id\"\nAS x57, \"middle_name\" AS x58, \"zip4\" AS x59, \"handicap_accessible\" AS x60,\n\"sour_address\" AS x61 FROM \"provider\" WHERE \"provider_id\" =\n'03563735-3798-441a-aea6-4e561ea347f7') x37 LEFT OUTER JOIN\n\"provider_attribute\" x62 ON (x37.x46 = x62. \"provider_id\") AND (x37.x56 =\nx62. \"address_id\")) x2 LEFT OUTER JOIN \"provider_phone\" x35 ON (x2.x3 = x35.\n\"provider_id\") AND (x2.x4 = x35. \"address_id\") LEFT OUTER JOIN\n\"provider_language\" x36 ON (x2.x3 = x36. 
\"provider_id\") AND (x2.x4 = x36.\n\"address_id\");\r\n \n QUERY PLAN \n \r\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n Nested Loop Left Join (cost=5.14..15.03 rows=2 width=944) (actual\ntime=0.039..0.039 rows=0 loops=1)\r\n Join Filter: (((provider.provider_id)::text = (x36.provider_id)::text)\nAND ((provider.address_id)::text = (x36.address_id)::text))\r\n Buffers: shared hit=4\r\n -> Nested Loop Left Join (cost=4.72..11.56 rows=2 width=777) (actual\ntime=0.039..0.039 rows=0 loops=1)\r\n Join Filter: (((provider.provider_id)::text =\n(x35.provider_id)::text) AND ((provider.address_id)::text =\n(x35.address_id)::text))\r\n Buffers: shared hit=4\r\n -> Hash Right Join (cost=4.17..7.78 rows=2 width=607) (actual\ntime=0.038..0.038 rows=0 loops=1)\r\n Hash Cond: (((x62.provider_id)::text =\n(provider.provider_id)::text) AND ((x62.address_id)::text =\n(provider.address_id)::text))\r\n Buffers: shared hit=4\r\n -> Append (cost=0.55..3.94 rows=22 width=171) (never\nexecuted)\r\n -> Index Scan using\nprovider_attribute_sub_0_provider_id_idx on provider_attribute_sub_0 x62 \n(cost=0.55..3.94 rows=22 width=171) (never executed)\r\n Index Cond: ((provider_id)::text =\n'03563735-3798-441a-aea6-4e561ea347f7'::text)\r\n -> Hash (cost=3.59..3.59 rows=2 width=436) (actual\ntime=0.031..0.031 rows=0 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 8kB\r\n Buffers: shared hit=4\r\n -> Index Scan using provider_provider_id_idx on\nprovider (cost=0.55..3.59 rows=2 width=436) (actual time=0.030..0.030\nrows=0 loops=1)\r\n Index Cond: ((provider_id)::text =\n'03563735-3798-441a-aea6-4e561ea347f7'::text)\r\n Buffers: shared hit=4\r\n -> Materialize (cost=0.56..3.65 rows=4 width=170) (never\nexecuted)\r\n -> Index Scan using provider_phone_provider_id_idx on\nprovider_phone x35 (cost=0.56..3.62 rows=4 width=170) (never executed)\r\n Index Cond: 
((provider_id)::text =\n'03563735-3798-441a-aea6-4e561ea347f7'::text)\r\n -> Materialize (cost=0.42..3.44 rows=1 width=89) (never executed)\r\n -> Index Scan using provider_language_provider_id_idx on\nprovider_language x36 (cost=0.42..3.44 rows=1 width=89) (never executed)\r\n Index Cond: ((provider_id)::text =\n'03563735-3798-441a-aea6-4e561ea347f7'::text)\r\n Planning time: 7195.110 ms\r\n Execution time: 0.143 ms\r\n\r\nSome details around table structure\r\nprovider_attribute is partitioned tables as below while other tables are\nnormal tables\r\ntransformations=> \\d+ provider_attribute\r\n Table \"public.provider_attribute\"\r\n Column | Type | Collation | Nullable | Default |\nStorage | Stats target | Description \r\n--------------------+-------------------+-----------+----------+---------+----------+--------------+-------------\r\n paid | character varying | | | |\nextended | | \r\n provider_id | character varying | | not null | |\nextended | | \r\n address_id | character varying | | not null | |\nextended | | \r\n parent_paid | character varying | | | |\nextended | | \r\n attribute_group_id | character varying | | | |\nextended | | \r\n attribute_id | character varying | | not null | |\nextended | | \r\n attribute_value | character varying | | not null | |\nextended | | \r\nPartition key: RANGE (provider_id)\r\nPartitions: provider_attribute_sub_0 FOR VALUES FROM ('0') TO ('1'),\r\n provider_attribute_sub_1 FOR VALUES FROM ('1') TO ('2'),\r\n provider_attribute_sub_2 FOR VALUES FROM ('2') TO ('3'),\r\n provider_attribute_sub_3 FOR VALUES FROM ('3') TO ('4'),\r\n provider_attribute_sub_4 FOR VALUES FROM ('4') TO ('5'),\r\n provider_attribute_sub_5 FOR VALUES FROM ('5') TO ('6'),\r\n provider_attribute_sub_6 FOR VALUES FROM ('6') TO ('7'),\r\n provider_attribute_sub_7 FOR VALUES FROM ('7') TO ('8'),\r\n provider_attribute_sub_8 FOR VALUES FROM ('8') TO ('9'),\r\n provider_attribute_sub_9 FOR VALUES FROM ('9') TO ('a'),\r\n provider_attribute_sub_a FOR 
VALUES FROM ('a') TO ('b'),\r\n provider_attribute_sub_b FOR VALUES FROM ('b') TO ('c'),\r\n provider_attribute_sub_c FOR VALUES FROM ('c') TO ('d'),\r\n provider_attribute_sub_d FOR VALUES FROM ('d') TO ('e'),\r\n provider_attribute_sub_e FOR VALUES FROM ('e') TO ('f'),\r\n provider_attribute_sub_f FOR VALUES FROM ('f') TO ('g')\r\n\r\ntransformations=> \\d+ provider_attribute_sub_0\r\n Table\n\"public.provider_attribute_sub_0\"\r\n Column | Type | Collation | Nullable | Default |\nStorage | Stats target | Description \r\n--------------------+-------------------+-----------+----------+---------+----------+--------------+-------------\r\n paid | character varying | | | |\nextended | | \r\n provider_id | character varying | | not null | |\nextended | | \r\n address_id | character varying | | not null | |\nextended | | \r\n parent_paid | character varying | | | |\nextended | | \r\n attribute_group_id | character varying | | | |\nextended | | \r\n attribute_id | character varying | | not null | |\nextended | | \r\n attribute_value | character varying | | not null | |\nextended | | \r\nPartition of: provider_attribute FOR VALUES FROM ('0') TO ('1')\r\nPartition constraint: ((provider_id IS NOT NULL) AND ((provider_id)::text >=\n'0'::character varying) AND ((provider_id)::text < '1'::character\nvarying))\r\nIndexes:\r\n \"provider_attribute_sub_0_provider_id_idx\" btree (provider_id) CLUSTER\r\n\r\nI have tried to vacuum analyze the whole database still queries are slow in\n1 of the environment while faster in another environment.\r\n\r\nI have also compared and matched all postgres parameters and still no luck\nin speeding the queries.\r\n\r\nAny help would be greatly appreciated.\r\n\r\nThanks & Regards,\r\nMukesh Chhatani",
"msg_date": "Tue, 12 Nov 2019 20:34:35 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #16109: Postgres planning time is high across version - 10.6 vs\n 10.10"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-12 20:34:35 +0000, PG Bug reporting form wrote:\n> I am experiencing weird issue around planning time for the queries across\n> couple of environments below is the sample of the execution plan\n\n\nJust to confirm, these are the same queries, but executed in different\ndatabases / environments?\n\n\n> Fast Execution Plan\n> transformations=> explain (analyze, buffers)\n> SELECT x2.x3, x2.x4, x2.x5, x2.x6, x2.x7, x2.x8, x2.x9, x2.x10, x2.x11,\n> x2.x12, x2.x13, x2.x14, x2.x15, x2.x16, x2.x17, x2.x18, x2.x19, x2.x20,\n> x2.x21, x2.x22, x2.x23, x2.x24, x2.x25, x2.x26, x2.x27, x2.x28, x2.x29,\n> x2.x30, x2.x31, x2.x32, x2.x33, x2.x34, x35. \"provider_id\", x35.\n> \"provider_phone_id\", x35. \"provider_id\", x35. \"address_id\", x35.\n> \"prod_code\", x35. \"phone_number\", x35. \"phone_type\", x36. \"provider_id\",\n> x36. \"provider_id\", x36. \"address_id\", x36. \"language_code\", x36.\n> \"language_used_by\" FROM ( SELECT x37.x38 AS x14, x37.x39 AS x6, x37.x40 AS\n> x26, x37.x41 AS x9, x37.x42 AS x20, x37.x43 AS x16, x37.x44 AS x8, x37.x45\n> AS x19, x37.x46 AS x3, x37.x47 AS x13, x37.x48 AS x12, x37.x49 AS x18,\n> x37.x50 AS x17, x37.x51 AS x11, x37.x52 AS x22, x37.x53 AS x21, x37.x54 AS\n> x10, x37.x55 AS x5, x37.x56 AS x4, x37.x57 AS x25, x37.x58 AS x7, x37.x59 AS\n> x15, x37.x60 AS x24, x37.x61 AS x23, ( CASE WHEN (x62. \"attribute_value\" IS\n> NULL) THEN NULL ELSE 1 END) AS x27, x62. \"paid\" AS x28, x62.\n> \"attribute_value\" AS x34, x62. \"attribute_id\" AS x33, x62. \"provider_id\" AS\n> x29, x62. \"attribute_group_id\" AS x32, x62. 
\"parent_paid\" AS x31, x62.\n> \"address_id\" AS x30 FROM ( SELECT \"provider_id\" AS x46, \"zip\" AS x38,\n> \"first_name\" AS x39, \"provider_name_id\" AS x40, \"degree\" AS x41,\n> \"preferred_flag\" AS x42, \"county\" AS x43, \"suffix\" AS x44, \"individual_id\"\n> AS x45, \"state\" AS x47, \"city\" AS x48, \"latitude\" AS x49, \"longitude\" AS\n> x50, \"address\" AS x51, \"exclusion_type_id\" AS x52, \"quality_score\" AS x53,\n> \"gender\" AS x54, \"last_name\" AS x55, \"address_id\" AS x56, \"hi_q_hospital_id\"\n> AS x57, \"middle_name\" AS x58, \"zip4\" AS x59, \"handicap_accessible\" AS x60,\n> \"sour_address\" AS x61 FROM \"provider\" WHERE \"provider_id\" =\n> '03563735-3798-441a-aea6-4e561ea347f7') x37 LEFT OUTER JOIN\n> \"provider_attribute\" x62 ON (x37.x46 = x62. \"provider_id\") AND (x37.x56 =\n> x62. \"address_id\")) x2 LEFT OUTER JOIN \"provider_phone\" x35 ON (x2.x3 = x35.\n> \"provider_id\") AND (x2.x4 = x35. \"address_id\") LEFT OUTER JOIN\n> \"provider_language\" x36 ON (x2.x3 = x36. \"provider_id\") AND (x2.x4 = x36.\n> \"address_id\");\n\nThis is really hard to read for a human...\n\nHere's a automatically reformatted version:\n\nSELECT x2.x3,\n x2.x4,\n x2.x5,\n x2.x6,\n x2.x7,\n x2.x8,\n x2.x9,\n x2.x10,\n x2.x11,\n x2.x12,\n x2.x13,\n x2.x14,\n x2.x15,\n x2.x16,\n x2.x17,\n x2.x18,\n x2.x19,\n x2.x20,\n x2.x21,\n x2.x22,\n x2.x23,\n x2.x24,\n x2.x25,\n x2.x26,\n x2.x27,\n x2.x28,\n x2.x29,\n x2.x30,\n x2.x31,\n x2.x32,\n x2.x33,\n x2.x34,\n x35. \"provider_id\",\n x35. \"provider_phone_id\",\n x35. \"provider_id\",\n x35. \"address_id\",\n x35. \"prod_code\",\n x35. \"phone_number\",\n x35. \"phone_type\",\n x36. \"provider_id\",\n x36. \"provider_id\",\n x36. \"address_id\",\n x36. \"language_code\",\n x36. 
\"language_used_by\"\nFROM\n (SELECT x37.x38 AS x14,\n x37.x39 AS x6,\n x37.x40 AS x26,\n x37.x41 AS x9,\n x37.x42 AS x20,\n x37.x43 AS x16,\n x37.x44 AS x8,\n x37.x45 AS x19,\n x37.x46 AS x3,\n x37.x47 AS x13,\n x37.x48 AS x12,\n x37.x49 AS x18,\n x37.x50 AS x17,\n x37.x51 AS x11,\n x37.x52 AS x22,\n x37.x53 AS x21,\n x37.x54 AS x10,\n x37.x55 AS x5,\n x37.x56 AS x4,\n x37.x57 AS x25,\n x37.x58 AS x7,\n x37.x59 AS x15,\n x37.x60 AS x24,\n x37.x61 AS x23,\n (CASE\n WHEN (x62. \"attribute_value\" IS NULL) THEN NULL\n ELSE 1\n END) AS x27,\n x62. \"paid\" AS x28,\n x62. \"attribute_value\" AS x34,\n x62. \"attribute_id\" AS x33,\n x62. \"provider_id\" AS x29,\n x62. \"attribute_group_id\" AS x32,\n x62. \"parent_paid\" AS x31,\n x62. \"address_id\" AS x30\n FROM\n (SELECT \"provider_id\" AS x46,\n \"zip\" AS x38,\n \"first_name\" AS x39,\n \"provider_name_id\" AS x40,\n \"degree\" AS x41,\n \"preferred_flag\" AS x42,\n \"county\" AS x43,\n \"suffix\" AS x44,\n \"individual_id\" AS x45,\n \"state\" AS x47,\n \"city\" AS x48,\n \"latitude\" AS x49,\n \"longitude\" AS x50,\n \"address\" AS x51,\n \"exclusion_type_id\" AS x52,\n \"quality_score\" AS x53,\n \"gender\" AS x54,\n \"last_name\" AS x55,\n \"address_id\" AS x56,\n \"hi_q_hospital_id\" AS x57,\n \"middle_name\" AS x58,\n \"zip4\" AS x59,\n \"handicap_accessible\" AS x60,\n \"sour_address\" AS x61\n FROM \"provider\"\n WHERE \"provider_id\" = '03563735-3798-441a-aea6-4e561ea347f7') x37\n LEFT OUTER JOIN \"provider_attribute\" x62 ON (x37.x46 = x62. \"provider_id\")\n AND (x37.x56 = x62. \"address_id\")) x2\nLEFT OUTER JOIN \"provider_phone\" x35 ON (x2.x3 = x35. \"provider_id\")\nAND (x2.x4 = x35. \"address_id\")\nLEFT OUTER JOIN \"provider_language\" x36 ON (x2.x3 = x36. \"provider_id\")\nAND (x2.x4 = x36. 
\"address_id\");\n\n\n> Slow Execution Plan\n> transformations_uhc_medicaid=> explain (analyze, buffers)\n> SELECT x2.x3, x2.x4, x2.x5, x2.x6, x2.x7, x2.x8, x2.x9, x2.x10, x2.x11,\n> x2.x12, x2.x13, x2.x14, x2.x15, x2.x16, x2.x17, x2.x18, x2.x19, x2.x20,\n> x2.x21, x2.x22, x2.x23, x2.x24, x2.x25, x2.x26, x2.x27, x2.x28, x2.x29,\n> x2.x30, x2.x31, x2.x32, x2.x33, x2.x34, x35. \"provider_id\", x35.\n> \"provider_phone_id\", x35. \"provider_id\", x35. \"address_id\", x35.\n> \"prod_code\", x35. \"phone_number\", x35. \"phone_type\", x36. \"provider_id\",\n> x36. \"provider_id\", x36. \"address_id\", x36. \"language_code\", x36.\n> \"language_used_by\" FROM ( SELECT x37.x38 AS x14, x37.x39 AS x6, x37.x40 AS\n> x26, x37.x41 AS x9, x37.x42 AS x20, x37.x43 AS x16, x37.x44 AS x8, x37.x45\n> AS x19, x37.x46 AS x3, x37.x47 AS x13, x37.x48 AS x12, x37.x49 AS x18,\n> x37.x50 AS x17, x37.x51 AS x11, x37.x52 AS x22, x37.x53 AS x21, x37.x54 AS\n> x10, x37.x55 AS x5, x37.x56 AS x4, x37.x57 AS x25, x37.x58 AS x7, x37.x59 AS\n> x15, x37.x60 AS x24, x37.x61 AS x23, ( CASE WHEN (x62. \"attribute_value\" IS\n> NULL) THEN NULL ELSE 1 END) AS x27, x62. \"paid\" AS x28, x62.\n> \"attribute_value\" AS x34, x62. \"attribute_id\" AS x33, x62. \"provider_id\" AS\n> x29, x62. \"attribute_group_id\" AS x32, x62. 
\"parent_paid\" AS x31, x62.\n> \"address_id\" AS x30 FROM ( SELECT \"provider_id\" AS x46, \"zip\" AS x38,\n> \"first_name\" AS x39, \"provider_name_id\" AS x40, \"degree\" AS x41,\n> \"preferred_flag\" AS x42, \"county\" AS x43, \"suffix\" AS x44, \"individual_id\"\n> AS x45, \"state\" AS x47, \"city\" AS x48, \"latitude\" AS x49, \"longitude\" AS\n> x50, \"address\" AS x51, \"exclusion_type_id\" AS x52, \"quality_score\" AS x53,\n> \"gender\" AS x54, \"last_name\" AS x55, \"address_id\" AS x56, \"hi_q_hospital_id\"\n> AS x57, \"middle_name\" AS x58, \"zip4\" AS x59, \"handicap_accessible\" AS x60,\n> \"sour_address\" AS x61 FROM \"provider\" WHERE \"provider_id\" =\n> '03563735-3798-441a-aea6-4e561ea347f7') x37 LEFT OUTER JOIN\n> \"provider_attribute\" x62 ON (x37.x46 = x62. \"provider_id\") AND (x37.x56 =\n> x62. \"address_id\")) x2 LEFT OUTER JOIN \"provider_phone\" x35 ON (x2.x3 = x35.\n> \"provider_id\") AND (x2.x4 = x35. \"address_id\") LEFT OUTER JOIN\n> \"provider_language\" x36 ON (x2.x3 = x36. \"provider_id\") AND (x2.x4 = x36.\n> \"address_id\");\n\nBased on a quick skim, this is the same query as before.\n\n\n> I have tried to vacuum analyze the whole database still queries are slow in\n> 1 of the environment while faster in another environment.\n\nIs there a chance that one database has longrunning queries, slow\nreplication, or something like that, leading to a very bloated database?\nEven if you VACUUM FULL, if there's still long-running sessions, the\nbloat could not necessarily be removed, because those sessions might\nstill need to see the already superseded data.\n\nDo the table / index sizes differ between the environment? Are the\ndatabases expected to have the same content?\n\n\nThis last point is more oriented towards other PG developers: I wonder\nif we ought to display buffer statistics for plan time, for EXPLAIN\n(BUFFERS). That'd surely make it easier to discern cases where we\ne.g. 
access the index and scan a lot of the index from cases where we\nhit some CPU time issue. We should easily be able to get that data, I\nthink, we already maintain it, we'd just need to compute the diff\nbetween pgBufferUsage before / after planning.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Nov 2019 12:55:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version - 10.6\n vs 10.10"
},
{
"msg_contents": "Thanks for getting back to me so quickly\n\nThe queries are the same, executed in 2 different environments. There are no\nlong running queries on either of the environments since they are idle right\naway for my testing.\n\nI can try full vacuum if that is recommended since I tried vacuum analyze\non the whole database in both environments.\n\nThe databases have the same content, and the size is also the same.\n\nSorry, but I have never seen this before: if the sizes vary or if the content\nvaries, I have seen execution time being slow and not the planning time.\n\nI am working on a dataset which I will share for further investigation and\nanalysis.\n\nRegards,\nMukesh\n\nOn Tue, Nov 12, 2019 at 2:55 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-11-12 20:34:35 +0000, PG Bug reporting form wrote:\n> > I am experiencing weird issue around planning time for the queries across\n> > couple of environments below is the sample of the execution plan\n>\n>\n> Just to confirm, these are the same queries, but executed in different\n> databases / environments?\n>\n>\n> > Fast Execution Plan\n> > transformations=> explain (analyze, buffers)\n> > SELECT x2.x3, x2.x4, x2.x5, x2.x6, x2.x7, x2.x8, x2.x9, x2.x10, x2.x11,\n> > x2.x12, x2.x13, x2.x14, x2.x15, x2.x16, x2.x17, x2.x18, x2.x19, x2.x20,\n> > x2.x21, x2.x22, x2.x23, x2.x24, x2.x25, x2.x26, x2.x27, x2.x28, x2.x29,\n> > x2.x30, x2.x31, x2.x32, x2.x33, x2.x34, x35. \"provider_id\", x35.\n> > \"provider_phone_id\", x35. \"provider_id\", x35. \"address_id\", x35.\n> > \"prod_code\", x35. \"phone_number\", x35. \"phone_type\", x36. \"provider_id\",\n> > x36. \"provider_id\", x36. \"address_id\", x36. 
\"language_code\", x36.\n> > \"language_used_by\" FROM ( SELECT x37.x38 AS x14, x37.x39 AS x6, x37.x40\n> AS\n> > x26, x37.x41 AS x9, x37.x42 AS x20, x37.x43 AS x16, x37.x44 AS x8,\n> x37.x45\n> > AS x19, x37.x46 AS x3, x37.x47 AS x13, x37.x48 AS x12, x37.x49 AS x18,\n> > x37.x50 AS x17, x37.x51 AS x11, x37.x52 AS x22, x37.x53 AS x21, x37.x54\n> AS\n> > x10, x37.x55 AS x5, x37.x56 AS x4, x37.x57 AS x25, x37.x58 AS x7,\n> x37.x59 AS\n> > x15, x37.x60 AS x24, x37.x61 AS x23, ( CASE WHEN (x62. \"attribute_value\"\n> IS\n> > NULL) THEN NULL ELSE 1 END) AS x27, x62. \"paid\" AS x28, x62.\n> > \"attribute_value\" AS x34, x62. \"attribute_id\" AS x33, x62. \"provider_id\"\n> AS\n> > x29, x62. \"attribute_group_id\" AS x32, x62. \"parent_paid\" AS x31, x62.\n> > \"address_id\" AS x30 FROM ( SELECT \"provider_id\" AS x46, \"zip\" AS x38,\n> > \"first_name\" AS x39, \"provider_name_id\" AS x40, \"degree\" AS x41,\n> > \"preferred_flag\" AS x42, \"county\" AS x43, \"suffix\" AS x44,\n> \"individual_id\"\n> > AS x45, \"state\" AS x47, \"city\" AS x48, \"latitude\" AS x49, \"longitude\" AS\n> > x50, \"address\" AS x51, \"exclusion_type_id\" AS x52, \"quality_score\" AS\n> x53,\n> > \"gender\" AS x54, \"last_name\" AS x55, \"address_id\" AS x56,\n> \"hi_q_hospital_id\"\n> > AS x57, \"middle_name\" AS x58, \"zip4\" AS x59, \"handicap_accessible\" AS\n> x60,\n> > \"sour_address\" AS x61 FROM \"provider\" WHERE \"provider_id\" =\n> > '03563735-3798-441a-aea6-4e561ea347f7') x37 LEFT OUTER JOIN\n> > \"provider_attribute\" x62 ON (x37.x46 = x62. \"provider_id\") AND (x37.x56 =\n> > x62. \"address_id\")) x2 LEFT OUTER JOIN \"provider_phone\" x35 ON (x2.x3 =\n> x35.\n> > \"provider_id\") AND (x2.x4 = x35. \"address_id\") LEFT OUTER JOIN\n> > \"provider_language\" x36 ON (x2.x3 = x36. 
\"provider_id\") AND (x2.x4 = x36.\n> > \"address_id\");\n>\n> This is really hard to read for a human...\n>\n> Here's a automatically reformatted version:\n>\n> SELECT x2.x3,\n> x2.x4,\n> x2.x5,\n> x2.x6,\n> x2.x7,\n> x2.x8,\n> x2.x9,\n> x2.x10,\n> x2.x11,\n> x2.x12,\n> x2.x13,\n> x2.x14,\n> x2.x15,\n> x2.x16,\n> x2.x17,\n> x2.x18,\n> x2.x19,\n> x2.x20,\n> x2.x21,\n> x2.x22,\n> x2.x23,\n> x2.x24,\n> x2.x25,\n> x2.x26,\n> x2.x27,\n> x2.x28,\n> x2.x29,\n> x2.x30,\n> x2.x31,\n> x2.x32,\n> x2.x33,\n> x2.x34,\n> x35. \"provider_id\",\n> x35. \"provider_phone_id\",\n> x35. \"provider_id\",\n> x35. \"address_id\",\n> x35. \"prod_code\",\n> x35. \"phone_number\",\n> x35. \"phone_type\",\n> x36. \"provider_id\",\n> x36. \"provider_id\",\n> x36. \"address_id\",\n> x36. \"language_code\",\n> x36. \"language_used_by\"\n> FROM\n> (SELECT x37.x38 AS x14,\n> x37.x39 AS x6,\n> x37.x40 AS x26,\n> x37.x41 AS x9,\n> x37.x42 AS x20,\n> x37.x43 AS x16,\n> x37.x44 AS x8,\n> x37.x45 AS x19,\n> x37.x46 AS x3,\n> x37.x47 AS x13,\n> x37.x48 AS x12,\n> x37.x49 AS x18,\n> x37.x50 AS x17,\n> x37.x51 AS x11,\n> x37.x52 AS x22,\n> x37.x53 AS x21,\n> x37.x54 AS x10,\n> x37.x55 AS x5,\n> x37.x56 AS x4,\n> x37.x57 AS x25,\n> x37.x58 AS x7,\n> x37.x59 AS x15,\n> x37.x60 AS x24,\n> x37.x61 AS x23,\n> (CASE\n> WHEN (x62. \"attribute_value\" IS NULL) THEN NULL\n> ELSE 1\n> END) AS x27,\n> x62. \"paid\" AS x28,\n> x62. \"attribute_value\" AS x34,\n> x62. \"attribute_id\" AS x33,\n> x62. \"provider_id\" AS x29,\n> x62. \"attribute_group_id\" AS x32,\n> x62. \"parent_paid\" AS x31,\n> x62. 
\"address_id\" AS x30\n> FROM\n> (SELECT \"provider_id\" AS x46,\n> \"zip\" AS x38,\n> \"first_name\" AS x39,\n> \"provider_name_id\" AS x40,\n> \"degree\" AS x41,\n> \"preferred_flag\" AS x42,\n> \"county\" AS x43,\n> \"suffix\" AS x44,\n> \"individual_id\" AS x45,\n> \"state\" AS x47,\n> \"city\" AS x48,\n> \"latitude\" AS x49,\n> \"longitude\" AS x50,\n> \"address\" AS x51,\n> \"exclusion_type_id\" AS x52,\n> \"quality_score\" AS x53,\n> \"gender\" AS x54,\n> \"last_name\" AS x55,\n> \"address_id\" AS x56,\n> \"hi_q_hospital_id\" AS x57,\n> \"middle_name\" AS x58,\n> \"zip4\" AS x59,\n> \"handicap_accessible\" AS x60,\n> \"sour_address\" AS x61\n> FROM \"provider\"\n> WHERE \"provider_id\" = '03563735-3798-441a-aea6-4e561ea347f7') x37\n> LEFT OUTER JOIN \"provider_attribute\" x62 ON (x37.x46 = x62.\n> \"provider_id\")\n> AND (x37.x56 = x62. \"address_id\")) x2\n> LEFT OUTER JOIN \"provider_phone\" x35 ON (x2.x3 = x35. \"provider_id\")\n> AND (x2.x4 = x35. \"address_id\")\n> LEFT OUTER JOIN \"provider_language\" x36 ON (x2.x3 = x36. \"provider_id\")\n> AND (x2.x4 = x36. \"address_id\");\n>\n>\n> > Slow Execution Plan\n> > transformations_uhc_medicaid=> explain (analyze, buffers)\n> > SELECT x2.x3, x2.x4, x2.x5, x2.x6, x2.x7, x2.x8, x2.x9, x2.x10, x2.x11,\n> > x2.x12, x2.x13, x2.x14, x2.x15, x2.x16, x2.x17, x2.x18, x2.x19, x2.x20,\n> > x2.x21, x2.x22, x2.x23, x2.x24, x2.x25, x2.x26, x2.x27, x2.x28, x2.x29,\n> > x2.x30, x2.x31, x2.x32, x2.x33, x2.x34, x35. \"provider_id\", x35.\n> > \"provider_phone_id\", x35. \"provider_id\", x35. \"address_id\", x35.\n> > \"prod_code\", x35. \"phone_number\", x35. \"phone_type\", x36. \"provider_id\",\n> > x36. \"provider_id\", x36. \"address_id\", x36. 
\"language_code\", x36.\n> > \"language_used_by\" FROM ( SELECT x37.x38 AS x14, x37.x39 AS x6, x37.x40\n> AS\n> > x26, x37.x41 AS x9, x37.x42 AS x20, x37.x43 AS x16, x37.x44 AS x8,\n> x37.x45\n> > AS x19, x37.x46 AS x3, x37.x47 AS x13, x37.x48 AS x12, x37.x49 AS x18,\n> > x37.x50 AS x17, x37.x51 AS x11, x37.x52 AS x22, x37.x53 AS x21, x37.x54\n> AS\n> > x10, x37.x55 AS x5, x37.x56 AS x4, x37.x57 AS x25, x37.x58 AS x7,\n> x37.x59 AS\n> > x15, x37.x60 AS x24, x37.x61 AS x23, ( CASE WHEN (x62. \"attribute_value\"\n> IS\n> > NULL) THEN NULL ELSE 1 END) AS x27, x62. \"paid\" AS x28, x62.\n> > \"attribute_value\" AS x34, x62. \"attribute_id\" AS x33, x62. \"provider_id\"\n> AS\n> > x29, x62. \"attribute_group_id\" AS x32, x62. \"parent_paid\" AS x31, x62.\n> > \"address_id\" AS x30 FROM ( SELECT \"provider_id\" AS x46, \"zip\" AS x38,\n> > \"first_name\" AS x39, \"provider_name_id\" AS x40, \"degree\" AS x41,\n> > \"preferred_flag\" AS x42, \"county\" AS x43, \"suffix\" AS x44,\n> \"individual_id\"\n> > AS x45, \"state\" AS x47, \"city\" AS x48, \"latitude\" AS x49, \"longitude\" AS\n> > x50, \"address\" AS x51, \"exclusion_type_id\" AS x52, \"quality_score\" AS\n> x53,\n> > \"gender\" AS x54, \"last_name\" AS x55, \"address_id\" AS x56,\n> \"hi_q_hospital_id\"\n> > AS x57, \"middle_name\" AS x58, \"zip4\" AS x59, \"handicap_accessible\" AS\n> x60,\n> > \"sour_address\" AS x61 FROM \"provider\" WHERE \"provider_id\" =\n> > '03563735-3798-441a-aea6-4e561ea347f7') x37 LEFT OUTER JOIN\n> > \"provider_attribute\" x62 ON (x37.x46 = x62. \"provider_id\") AND (x37.x56 =\n> > x62. \"address_id\")) x2 LEFT OUTER JOIN \"provider_phone\" x35 ON (x2.x3 =\n> x35.\n> > \"provider_id\") AND (x2.x4 = x35. \"address_id\") LEFT OUTER JOIN\n> > \"provider_language\" x36 ON (x2.x3 = x36. 
\"provider_id\") AND (x2.x4 = x36.\n> > \"address_id\");\n>\n> Based on a quick skim, this is the same query as before.\n>\n>\n> > I have tried to vacuum analyze the whole database still queries are slow\n> in\n> > 1 of the environment while faster in another environment.\n>\n> Is there a chance that one database has longrunning queries, slow\n> replication, or something like that, leading to a very bloated database?\n> Even if you VACUUM FULL, if there's still long-running sessions, the\n> bloat could not necessarily be removed, because those sessions might\n> still need to see the already superseded data.\n>\n> Do the table / index sizes differ between the environment? Are the\n> databases expected to have the same content?\n>\n>\n> This last point is more oriented towards other PG developers: I wonder\n> if we ought to display buffer statistics for plan time, for EXPLAIN\n> (BUFFERS). That'd surely make it easier to discern cases where we\n> e.g. access the index and scan a lot of the index from cases where we\n> hit some CPU time issue. We should easily be able to get that data, I\n> think, we already maintain it, we'd just need to compute the diff\n> between pgBufferUsage before / after planning.\n>\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Tue, 12 Nov 2019 15:01:28 -0600",
"msg_from": "Mukesh Chhatani <chhatani.mukesh@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version - 10.6\n vs 10.10"
},
{
"msg_contents": "Hello,\n\nthere was a very similar post a few days ago:\nhttps://www.postgresql-archive.org/Slow-planning-fast-execution-for-particular-3-table-query-tt6109879.html#none\n\nthe root cause was a modification of default_statistics_target.\n\nMaybe you are in the same situation?\nOr maybe the tables have different SET STATISTICS values set?\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-bugs-f2117394.html\n\n\n",
"msg_date": "Tue, 12 Nov 2019 14:41:15 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version -\n 10.6 vs 10.10"
},
{
"msg_contents": "\n\n13.11.2019 00:01, Mukesh Chhatani wrote:\n> Thanks for getting back to me so quickly\n> \n> Queries are same and executed in 2 different environments. There are no \n> long running queries on any of the environments since they are idle \n> right away for my testing.\n> \n> I can try full vacuum if that is recommended since I tried vacuum \n> analyze on the whole database in both environments.\n> \n> Datases have the same content and size is also same.\n> \n> Sorry but I am never seen this before , if the sizes vary or if the \n> content varies I have seen execution time being slow and not the \n> planning time.\n> \n> I am working on a dataset which I will share for further investigation \n> and analysis.\nInteresting. I will be waiting for your data set.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 13 Nov 2019 09:58:28 +0300",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version - 10.6\n vs 10.10"
},
{
"msg_contents": "(moved to -hackers)\n\nOn Tue, Nov 12, 2019 at 9:55 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> This last point is more oriented towards other PG developers: I wonder\n> if we ought to display buffer statistics for plan time, for EXPLAIN\n> (BUFFERS). That'd surely make it easier to discern cases where we\n> e.g. access the index and scan a lot of the index from cases where we\n> hit some CPU time issue. We should easily be able to get that data, I\n> think, we already maintain it, we'd just need to compute the diff\n> between pgBufferUsage before / after planning.\n\nThat would be quite interesting to have. I attach as a reference a\nquick POC patch to implement it:\n\n# explain (analyze, buffers) select * from pg_stat_activity;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=2.25..3.80 rows=100 width=440) (actual\ntime=0.259..0.276 rows=6 loops=1)\n Hash Cond: (s.usesysid = u.oid)\n Buffers: shared hit=5\n -> Hash Left Join (cost=1.05..2.32 rows=100 width=376) (actual\ntime=0.226..0.236 rows=6 loops=1)\n Hash Cond: (s.datid = d.oid)\n Buffers: shared hit=4\n -> Function Scan on pg_stat_get_activity s (cost=0.00..1.00\nrows=100 width=312) (actual time=0.148..0.151 rows=6 loop\n -> Hash (cost=1.02..1.02 rows=2 width=68) (actual\ntime=0.034..0.034 rows=5 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on pg_database d (cost=0.00..1.02 rows=2\nwidth=68) (actual time=0.016..0.018 rows=5 loops=1)\n Buffers: shared hit=1\n -> Hash (cost=1.09..1.09 rows=9 width=68) (actual\ntime=0.015..0.015 rows=9 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on pg_authid u (cost=0.00..1.09 rows=9\nwidth=68) (actual time=0.004..0.008 rows=9 loops=1)\n Buffers: shared hit=1\n Planning Time: 1.902 ms\n Buffers: shared hit=37 read=29\n I/O Timings: read=0.506\n 
Execution Time: 0.547 ms\n(21 rows)\n\nNote that there's a related discussion in the \"Planning counters in\npg_stat_statements\" thread, on whether to also compute buffers from\nplanning or not.",
"msg_date": "Wed, 13 Nov 2019 11:39:04 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version - 10.6\n vs 10.10"
},
{
"msg_contents": "st 13. 11. 2019 v 11:39 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> (moved to -hackers)\n>\n> On Tue, Nov 12, 2019 at 9:55 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > This last point is more oriented towards other PG developers: I wonder\n> > if we ought to display buffer statistics for plan time, for EXPLAIN\n> > (BUFFERS). That'd surely make it easier to discern cases where we\n> > e.g. access the index and scan a lot of the index from cases where we\n> > hit some CPU time issue. We should easily be able to get that data, I\n> > think, we already maintain it, we'd just need to compute the diff\n> > between pgBufferUsage before / after planning.\n>\n> That would be quite interesting to have. I attach as a reference a\n> quick POC patch to implement it:\n>\n> # explain (analyze, buffers) select * from pg_stat_activity;\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------\n> Hash Left Join (cost=2.25..3.80 rows=100 width=440) (actual\n> time=0.259..0.276 rows=6 loops=1)\n> Hash Cond: (s.usesysid = u.oid)\n> Buffers: shared hit=5\n> -> Hash Left Join (cost=1.05..2.32 rows=100 width=376) (actual\n> time=0.226..0.236 rows=6 loops=1)\n> Hash Cond: (s.datid = d.oid)\n> Buffers: shared hit=4\n> -> Function Scan on pg_stat_get_activity s (cost=0.00..1.00\n> rows=100 width=312) (actual time=0.148..0.151 rows=6 loop\n> -> Hash (cost=1.02..1.02 rows=2 width=68) (actual\n> time=0.034..0.034 rows=5 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> Buffers: shared hit=1\n> -> Seq Scan on pg_database d (cost=0.00..1.02 rows=2\n> width=68) (actual time=0.016..0.018 rows=5 loops=1)\n> Buffers: shared hit=1\n> -> Hash (cost=1.09..1.09 rows=9 width=68) (actual\n> time=0.015..0.015 rows=9 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> Buffers: shared hit=1\n> -> Seq Scan on pg_authid u (cost=0.00..1.09 rows=9\n> width=68) 
(actual time=0.004..0.008 rows=9 loops=1)\n> Buffers: shared hit=1\n> Planning Time: 1.902 ms\n> Buffers: shared hit=37 read=29\n> I/O Timings: read=0.506\n> Execution Time: 0.547 ms\n> (21 rows)\n>\n> Note that there's a related discussion in the \"Planning counters in\n> pg_stat_statements\" thread, on whether to also compute buffers from\n> planning or not.\n>\n\n+1\n\nPavel",
"msg_date": "Wed, 13 Nov 2019 11:49:58 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version - 10.6\n vs 10.10"
},
{
"msg_contents": "All,\n\nUpdate: I was able to resolve the problem by changing partitioned tables to a\nsingle table and changing the data type of 2 columns used in the joins from\nvarchar to varchar(50).\n\nFYI.. default_statistics_target was set to 10000 but I changed it to 100 and\neven to 1000 and still planning time was high.\n\nStill working on the dataset so that more people can investigate the issues.\n",
"msg_date": "Wed, 13 Nov 2019 11:37:30 -0700 (MST)",
"msg_from": "Mukesh Chhatani <chhatani.mukesh@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version -\n 10.6 vs 10.10"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-13 11:37:30 -0700, Mukesh Chhatani wrote:\n> FYI.. default_statistics_target was set to 10000 but I changed it 100 and\n> even to 1000 and still planning time was high.\n\nNote that you'd need to ANALYZE the involved tables before that change\nwould actually affect planning time.\n\n\n> Update I was able to resolve the problem by changing - partitioned tables to\n> single table and changing data type of 2 columns used in the joins from\n> varchar to varchar(50).\n\nThat's not going to be the fix. There's no efficiency difference between\nthose. It's more likely that the different statistics target would\nhave taken effect after the alter table etc.\n\n- Andres\n\n\n",
"msg_date": "Wed, 13 Nov 2019 10:42:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version - 10.6\n vs 10.10"
},
{
"msg_contents": "Thanks for response.\n\nI analyzed after changing default_statistics_target but no improvement, are\nthere any recommendations around this parameter because as far as I have\nseen increasing this parameter in the past lets optimizer choose better\nplans and has never caused me this problem related to high planning time,\nbut I am open to suggestions since every problem is a new problem.\n\nI know changing partitioned to unpartitioned and then changing columns\ntypes makes no sense in terms of resolving the issue but that is what is\nworking now.\n\nI will go ahead and change the parameter - default_statistics_target to 100\nand analyze whole database and then wait for couple of hours and then run\nmy queries. Let me know if this approach is good.\n\nRegards,\nMukesh\n\nOn Wed, Nov 13, 2019 at 12:42 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-11-13 11:37:30 -0700, Mukesh Chhatani wrote:\n> > FYI.. default_statistics_target was set to 10000 but I changed it 100 and\n> > even to 1000 and still planning time was high.\n>\n> Note that you'd need to ANALYZE the involved tables before that change\n> actually would effect planning time.\n>\n>\n> > Update I was able to resolve the problem by changing - partitioned\n> tables to\n> > single table and changing data type of 2 columns used in the joins from\n> > varchar to varchar(50).\n>\n> That's not going to be the fix. There's no efficiency difference between\n> those. 
It's more likely that, that the different statistics target would\n> have taken effect after the alter table etc.\n>\n> - Andres\n>",
"msg_date": "Wed, 13 Nov 2019 12:52:13 -0600",
"msg_from": "Mukesh Chhatani <chhatani.mukesh@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version - 10.6\n vs 10.10"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 11:39:04AM +0100, Julien Rouhaud wrote:\n> (moved to -hackers)\n> \n> On Tue, Nov 12, 2019 at 9:55 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > This last point is more oriented towards other PG developers: I wonder\n> > if we ought to display buffer statistics for plan time, for EXPLAIN\n> > (BUFFERS). That'd surely make it easier to discern cases where we\n> > e.g. access the index and scan a lot of the index from cases where we\n> > hit some CPU time issue. We should easily be able to get that data, I\n> > think, we already maintain it, we'd just need to compute the diff\n> > between pgBufferUsage before / after planning.\n> \n> That would be quite interesting to have. I attach as a reference a\n> quick POC patch to implement it:\n\n+1\n\n+\tresult.shared_blks_hit = stop->shared_blks_hit - start->shared_blks_hit;\n+\tresult.shared_blks_read = stop->shared_blks_read - start->shared_blks_read;\n+\tresult.shared_blks_dirtied = stop->shared_blks_dirtied -\n+\t\tstart->shared_blks_dirtied;\n[...]\n\nI think it would be more readable and maintainable using a macro:\n\n#define CALC_BUFF_DIFF(x) result.##x = stop->##x - start->##x\nCALC_BUFF_DIFF(shared_blks_hit);\nCALC_BUFF_DIFF(shared_blks_read);\nCALC_BUFF_DIFF(shared_blks_dirtied);\n...\n#undef CALC_BUFF_DIFF\n\n\n\n",
"msg_date": "Thu, 23 Jan 2020 23:55:34 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version\n (Expose buffer usage during planning in EXPLAIN)"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 6:55 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Nov 13, 2019 at 11:39:04AM +0100, Julien Rouhaud wrote:\n> > (moved to -hackers)\n> >\n> > On Tue, Nov 12, 2019 at 9:55 PM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > This last point is more oriented towards other PG developers: I wonder\n> > > if we ought to display buffer statistics for plan time, for EXPLAIN\n> > > (BUFFERS). That'd surely make it easier to discern cases where we\n> > > e.g. access the index and scan a lot of the index from cases where we\n> > > hit some CPU time issue. We should easily be able to get that data, I\n> > > think, we already maintain it, we'd just need to compute the diff\n> > > between pgBufferUsage before / after planning.\n> >\n> > That would be quite interesting to have. I attach as a reference a\n> > quick POC patch to implement it:\n>\n> +1\n>\n> + result.shared_blks_hit = stop->shared_blks_hit - start->shared_blks_hit;\n> + result.shared_blks_read = stop->shared_blks_read - start->shared_blks_read;\n> + result.shared_blks_dirtied = stop->shared_blks_dirtied -\n> + start->shared_blks_dirtied;\n> [...]\n>\n> I think it would be more readable and maintainable using a macro:\n>\n> #define CALC_BUFF_DIFF(x) result.##x = stop->##x - start->##x\n> CALC_BUFF_DIFF(shared_blks_hit);\n> CALC_BUFF_DIFF(shared_blks_read);\n> CALC_BUFF_DIFF(shared_blks_dirtied);\n> ...\n> #undefine CALC_BUFF_DIFF\n\nGood idea. Note that you can't use preprocessor concatenation to\ngenerate something else than a token or a number, so the ## can just\nbe removed here.",
"msg_date": "Fri, 24 Jan 2020 22:06:11 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version (Expose\n buffer usage during planning in EXPLAIN)"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 10:06 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Jan 24, 2020 at 6:55 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Wed, Nov 13, 2019 at 11:39:04AM +0100, Julien Rouhaud wrote:\n> > > (moved to -hackers)\n> > >\n> > > On Tue, Nov 12, 2019 at 9:55 PM Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > This last point is more oriented towards other PG developers: I wonder\n> > > > if we ought to display buffer statistics for plan time, for EXPLAIN\n> > > > (BUFFERS). That'd surely make it easier to discern cases where we\n> > > > e.g. access the index and scan a lot of the index from cases where we\n> > > > hit some CPU time issue. We should easily be able to get that data, I\n> > > > think, we already maintain it, we'd just need to compute the diff\n> > > > between pgBufferUsage before / after planning.\n> > >\n> > > That would be quite interesting to have. I attach as a reference a\n> > > quick POC patch to implement it:\n> >\n> > +1\n> >\n> > + result.shared_blks_hit = stop->shared_blks_hit - start->shared_blks_hit;\n> > + result.shared_blks_read = stop->shared_blks_read - start->shared_blks_read;\n> > + result.shared_blks_dirtied = stop->shared_blks_dirtied -\n> > + start->shared_blks_dirtied;\n> > [...]\n> >\n> > I think it would be more readable and maintainable using a macro:\n> >\n> > #define CALC_BUFF_DIFF(x) result.##x = stop->##x - start->##x\n> > CALC_BUFF_DIFF(shared_blks_hit);\n> > CALC_BUFF_DIFF(shared_blks_read);\n> > CALC_BUFF_DIFF(shared_blks_dirtied);\n> > ...\n> > #undefine CALC_BUFF_DIFF\n>\n> Good idea. Note that you can't use preprocessor concatenation to\n> generate something else than a token or a number, so the ## can just\n> be removed here.\n\nRebase due to conflict with 3ec20c7091e97.",
"msg_date": "Wed, 29 Jan 2020 12:15:59 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version (Expose\n buffer usage during planning in EXPLAIN)"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 12:15:59PM +0100, Julien Rouhaud wrote:\n> Rebase due to conflict with 3ec20c7091e97.\n\nThis is failing to apply probably since 4a539a25ebfc48329fd656a95f3c1eb2cda38af3.\nCould you rebase? (Also, not sure if this can be set as RFC?)\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 30 Mar 2020 20:31:46 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version\n (Expose buffer usage during planning in EXPLAIN)"
},
{
"msg_contents": "On 2020/03/31 10:31, Justin Pryzby wrote:\n> On Wed, Jan 29, 2020 at 12:15:59PM +0100, Julien Rouhaud wrote:\n>> Rebase due to conflict with 3ec20c7091e97.\n> \n> This is failing to apply probably since 4a539a25ebfc48329fd656a95f3c1eb2cda38af3.\n> Could you rebase? (Also, not sure if this can be set as RFC?)\n\nI updated the patch. Attached.\n\n+/* Compute the difference between two BufferUsage */\n+BufferUsage\n+ComputeBufferCounters(BufferUsage *start, BufferUsage *stop)\n\nSince BufferUsageAccumDiff() was exported, ComputeBufferCounters() is\nno longer necessary. In the patched version, BufferUsageAccumDiff() is\nused to calculate the difference of buffer usage.\n\n+\tif (es->summary && (planduration || es->buffers))\n+\t\tExplainOpenGroup(\"Planning\", \"Planning\", true, es);\n\nIsn't it more appropriate to check \"bufusage\" instead of \"es->buffers\" here?\nThe patch changes the code so that \"bufusage\" is checked.\n\n+ \"Planning Time\": N.N, +\n+ \"Shared Hit Blocks\": N, +\n+ \"Shared Read Blocks\": N, +\n+ \"Shared Dirtied Blocks\": N,+\n\nDoesn't this indent look strange? IMO no indent for buffer usage is\nnecessary when the format is json, xml, or yaml. This looks\nbetter at least for me. OTOH, in text format, it seems better to indent\nthe buffer usage for more readability. Thought?\nThe patch changes the code so that \"es->indent\" is\nincremented/decremented only when the format is text.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 2 Apr 2020 02:51:36 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version (Expose\n buffer usage during planning in EXPLAIN)"
},
{
"msg_contents": "On Wed, Apr 1, 2020 at 7:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n> On 2020/03/31 10:31, Justin Pryzby wrote:\n> > On Wed, Jan 29, 2020 at 12:15:59PM +0100, Julien Rouhaud wrote:\n> >> Rebase due to conflict with 3ec20c7091e97.\n> >\n> > This is failing to apply probably since 4a539a25ebfc48329fd656a95f3c1eb2cda38af3.\n> > Could you rebase? (Also, not sure if this can be set as RFC?)\n>\n> I updated the patch. Attached.\n\nThanks a lot! I'm sorry I missed Justin's ping, and I just\nrealized that my cron job that used to warn me about cfbot failure was\nbroken :(\n\n> +/* Compute the difference between two BufferUsage */\n> +BufferUsage\n> +ComputeBufferCounters(BufferUsage *start, BufferUsage *stop)\n>\n> Since BufferUsageAccumDiff() was exported, ComputeBufferCounters() is\n> no longer necessary. In the patched version, BufferUsageAccumDiff() is\n> used to calculate the difference of buffer usage.\n\nIndeed, exposing BufferUsageAccumDiff was definitely a good thing!\n\n> + if (es->summary && (planduration || es->buffers))\n> + ExplainOpenGroup(\"Planning\", \"Planning\", true, es);\n>\n> Isn't it more appropriate to check \"bufusage\" instead of \"es->buffers\" here?\n> The patch changes the code so that \"bufusage\" is checked.\n\nAFAICS not unless ExplainOneQuery is also changed to pass a NULL\npointer if the BUFFER option wasn't provided (and maybe also\noptionally skip the planning buffer computation). 
With this version\nyou now get:\n\n=# explain (analyze, buffers off) update t1 set id = id;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Update on t1 (cost=0.00..22.70 rows=1270 width=42) (actual\ntime=0.170..0.170 rows=0 loops=1)\n -> Seq Scan on t1 (cost=0.00..22.70 rows=1270 width=42) (actual\ntime=0.050..0.054 rows=1 loops=1)\n Planning Time: 1.461 ms\n Buffers: shared hit=25\n Execution Time: 1.071 ms\n(5 rows)\n\nwhich seems wrong to me.\n\nI reused the es->buffers to avoid needing something like:\n\ndiff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c\nindex b1f3fe13c6..9dbff97a32 100644\n--- a/src/backend/commands/explain.c\n+++ b/src/backend/commands/explain.c\n@@ -375,7 +375,9 @@ ExplainOneQuery(Query *query, int cursorOptions,\n BufferUsage bufusage_start,\n bufusage;\n\n- bufusage_start = pgBufferUsage;\n+ if (es->buffers)\n+ bufusage_start = pgBufferUsage;\n+\n INSTR_TIME_SET_CURRENT(planstart);\n\n /* plan the query */\n@@ -384,13 +386,16 @@ ExplainOneQuery(Query *query, int cursorOptions,\n INSTR_TIME_SET_CURRENT(planduration);\n INSTR_TIME_SUBTRACT(planduration, planstart);\n\n- /* calc differences of buffer counters. */\n- memset(&bufusage, 0, sizeof(BufferUsage));\n- BufferUsageAccumDiff(&bufusage, &pgBufferUsage, &bufusage_start);\n+ if (es->buffers)\n+ {\n+ /* calc differences of buffer counters. */\n+ memset(&bufusage, 0, sizeof(BufferUsage));\n+ BufferUsageAccumDiff(&bufusage, &pgBufferUsage, &bufusage_start);\n+ }\n\n /* run it (if needed) and produce output */\n ExplainOnePlan(plan, into, es, queryString, params, queryEnv,\n- &planduration, &bufusage);\n+ &planduration, (es->buffers ? 
&bufusage : NULL));\n }\n\nwhich seemed like a win, but I'm not opposed to do that if you prefer.\n\n>\n> + \"Planning Time\": N.N, +\n> + \"Shared Hit Blocks\": N, +\n> + \"Shared Read Blocks\": N, +\n> + \"Shared Dirtied Blocks\": N,+\n>\n> Doesn't this indent look strange? IMO no indent for buffer usage is\n> necessary when the format is either json, xml, and yaml. This looks\n> better at least for me. OTOH, in text format, it seems better to indent\n> the buffer usage for more readability. Thought?\n> The patch changes the code so that \"es->indent\" is\n> increment/decrement only when the format is text.\n\nIndeed, that's way better!\n\n\n",
"msg_date": "Wed, 1 Apr 2020 20:47:12 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version (Expose\n buffer usage during planning in EXPLAIN)"
},
{
"msg_contents": "On 2020/04/02 3:47, Julien Rouhaud wrote:\n> On Wed, Apr 1, 2020 at 7:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>> On 2020/03/31 10:31, Justin Pryzby wrote:\n>>> On Wed, Jan 29, 2020 at 12:15:59PM +0100, Julien Rouhaud wrote:\n>>>> Rebase due to conflict with 3ec20c7091e97.\n>>>\n>>> This is failing to apply probably since 4a539a25ebfc48329fd656a95f3c1eb2cda38af3.\n>>> Could you rebase? (Also, not sure if this can be set as RFC?)\n>>\n>> I updated the patch. Attached.\n> \n> Thanks a lot! I'm sorry I missed Justin's ping, and it I just\n> realized that my cron job that used to warn me about cfbot failure was\n> broken :(\n> \n>> +/* Compute the difference between two BufferUsage */\n>> +BufferUsage\n>> +ComputeBufferCounters(BufferUsage *start, BufferUsage *stop)\n>>\n>> Since BufferUsageAccumDiff() was exported, ComputeBufferCounters() is\n>> no longer necessary. In the patched version, BufferUsageAccumDiff() is\n>> used to calculate the difference of buffer usage.\n> \n> Indeed, exposing BufferUsageAccumDiff wa definitely a good thing!\n> \n>> + if (es->summary && (planduration || es->buffers))\n>> + ExplainOpenGroup(\"Planning\", \"Planning\", true, es);\n>>\n>> Isn't it more appropriate to check \"bufusage\" instead of \"es->buffers\" here?\n>> The patch changes the code so that \"bufusage\" is checked.\n> \n> AFAICS not unless ExplainOneQuery is also changed to pass a NULL\n> pointer if the BUFFER option wasn't provided (and maybe also\n> optionally skip the planning buffer computation). 
With this version\n> you now get:\n> \n> =# explain (analyze, buffers off) update t1 set id = id;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------\n> Update on t1 (cost=0.00..22.70 rows=1270 width=42) (actual\n> time=0.170..0.170 rows=0 loops=1)\n> -> Seq Scan on t1 (cost=0.00..22.70 rows=1270 width=42) (actual\n> time=0.050..0.054 rows=1 loops=1)\n> Planning Time: 1.461 ms\n> Buffers: shared hit=25\n> Execution Time: 1.071 ms\n> (5 rows)\n> \n> which seems wrong to me.\n> \n> I reused the es->buffers to avoid having needing something like:\n\nYes, you're right! So I updated the patch as you suggested.\nAttached is the updated version of the patch.\nThanks for the review!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 2 Apr 2020 13:05:56 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version (Expose\n buffer usage during planning in EXPLAIN)"
},
{
"msg_contents": "On Thu, Apr 02, 2020 at 01:05:56PM +0900, Fujii Masao wrote:\n> \n> \n> On 2020/04/02 3:47, Julien Rouhaud wrote:\n> > On Wed, Apr 1, 2020 at 7:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > > \n> > > \n> > > On 2020/03/31 10:31, Justin Pryzby wrote:\n> > > > On Wed, Jan 29, 2020 at 12:15:59PM +0100, Julien Rouhaud wrote:\n> > > > > Rebase due to conflict with 3ec20c7091e97.\n> > > > \n> > > > This is failing to apply probably since 4a539a25ebfc48329fd656a95f3c1eb2cda38af3.\n> > > > Could you rebase? (Also, not sure if this can be set as RFC?)\n> > > \n> > > I updated the patch. Attached.\n> > \n> > Thanks a lot! I'm sorry I missed Justin's ping, and it I just\n> > realized that my cron job that used to warn me about cfbot failure was\n> > broken :(\n> > \n> > > +/* Compute the difference between two BufferUsage */\n> > > +BufferUsage\n> > > +ComputeBufferCounters(BufferUsage *start, BufferUsage *stop)\n> > > \n> > > Since BufferUsageAccumDiff() was exported, ComputeBufferCounters() is\n> > > no longer necessary. In the patched version, BufferUsageAccumDiff() is\n> > > used to calculate the difference of buffer usage.\n> > \n> > Indeed, exposing BufferUsageAccumDiff wa definitely a good thing!\n> > \n> > > + if (es->summary && (planduration || es->buffers))\n> > > + ExplainOpenGroup(\"Planning\", \"Planning\", true, es);\n> > > \n> > > Isn't it more appropriate to check \"bufusage\" instead of \"es->buffers\" here?\n> > > The patch changes the code so that \"bufusage\" is checked.\n> > \n> > AFAICS not unless ExplainOneQuery is also changed to pass a NULL\n> > pointer if the BUFFER option wasn't provided (and maybe also\n> > optionally skip the planning buffer computation). 
With this version\n> > you now get:\n> > \n> > =# explain (analyze, buffers off) update t1 set id = id;\n> > QUERY PLAN\n> > -------------------------------------------------------------------------------------------------------\n> > Update on t1 (cost=0.00..22.70 rows=1270 width=42) (actual\n> > time=0.170..0.170 rows=0 loops=1)\n> > -> Seq Scan on t1 (cost=0.00..22.70 rows=1270 width=42) (actual\n> > time=0.050..0.054 rows=1 loops=1)\n> > Planning Time: 1.461 ms\n> > Buffers: shared hit=25\n> > Execution Time: 1.071 ms\n> > (5 rows)\n> > \n> > which seems wrong to me.\n> > \n> > I reused the es->buffers to avoid having needing something like:\n> \n> Yes, you're right! So I updated the patch as you suggested.\n> Attached is the updated version of the patch.\n> Thanks for the review!\n\n\nThanks a lot, it all looks good to me!\n\n\n",
"msg_date": "Thu, 2 Apr 2020 08:01:54 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version\n (Expose buffer usage during planning in EXPLAIN)"
},
{
"msg_contents": "\n\nOn 2020/04/02 15:01, Julien Rouhaud wrote:\n> On Thu, Apr 02, 2020 at 01:05:56PM +0900, Fujii Masao wrote:\n>>\n>>\n>> On 2020/04/02 3:47, Julien Rouhaud wrote:\n>>> On Wed, Apr 1, 2020 at 7:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>> On 2020/03/31 10:31, Justin Pryzby wrote:\n>>>>> On Wed, Jan 29, 2020 at 12:15:59PM +0100, Julien Rouhaud wrote:\n>>>>>> Rebase due to conflict with 3ec20c7091e97.\n>>>>>\n>>>>> This is failing to apply probably since 4a539a25ebfc48329fd656a95f3c1eb2cda38af3.\n>>>>> Could you rebase? (Also, not sure if this can be set as RFC?)\n>>>>\n>>>> I updated the patch. Attached.\n>>>\n>>> Thanks a lot! I'm sorry I missed Justin's ping, and it I just\n>>> realized that my cron job that used to warn me about cfbot failure was\n>>> broken :(\n>>>\n>>>> +/* Compute the difference between two BufferUsage */\n>>>> +BufferUsage\n>>>> +ComputeBufferCounters(BufferUsage *start, BufferUsage *stop)\n>>>>\n>>>> Since BufferUsageAccumDiff() was exported, ComputeBufferCounters() is\n>>>> no longer necessary. In the patched version, BufferUsageAccumDiff() is\n>>>> used to calculate the difference of buffer usage.\n>>>\n>>> Indeed, exposing BufferUsageAccumDiff wa definitely a good thing!\n>>>\n>>>> + if (es->summary && (planduration || es->buffers))\n>>>> + ExplainOpenGroup(\"Planning\", \"Planning\", true, es);\n>>>>\n>>>> Isn't it more appropriate to check \"bufusage\" instead of \"es->buffers\" here?\n>>>> The patch changes the code so that \"bufusage\" is checked.\n>>>\n>>> AFAICS not unless ExplainOneQuery is also changed to pass a NULL\n>>> pointer if the BUFFER option wasn't provided (and maybe also\n>>> optionally skip the planning buffer computation). 
With this version\n>>> you now get:\n>>>\n>>> =# explain (analyze, buffers off) update t1 set id = id;\n>>> QUERY PLAN\n>>> -------------------------------------------------------------------------------------------------------\n>>> Update on t1 (cost=0.00..22.70 rows=1270 width=42) (actual\n>>> time=0.170..0.170 rows=0 loops=1)\n>>> -> Seq Scan on t1 (cost=0.00..22.70 rows=1270 width=42) (actual\n>>> time=0.050..0.054 rows=1 loops=1)\n>>> Planning Time: 1.461 ms\n>>> Buffers: shared hit=25\n>>> Execution Time: 1.071 ms\n>>> (5 rows)\n>>>\n>>> which seems wrong to me.\n>>>\n>>> I reused the es->buffers to avoid having needing something like:\n>>\n>> Yes, you're right! So I updated the patch as you suggested.\n>> Attached is the updated version of the patch.\n>> Thanks for the review!\n> \n> \n> Thanks a lot, it all looks good to me!\n\nThanks! Barring any objection, I will commit the latest version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 2 Apr 2020 15:52:17 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version (Expose\n buffer usage during planning in EXPLAIN)"
},
{
"msg_contents": "\n\nOn 2020/04/02 15:52, Fujii Masao wrote:\n> \n> \n> On 2020/04/02 15:01, Julien Rouhaud wrote:\n>> On Thu, Apr 02, 2020 at 01:05:56PM +0900, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2020/04/02 3:47, Julien Rouhaud wrote:\n>>>> On Wed, Apr 1, 2020 at 7:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>\n>>>>>\n>>>>> On 2020/03/31 10:31, Justin Pryzby wrote:\n>>>>>> On Wed, Jan 29, 2020 at 12:15:59PM +0100, Julien Rouhaud wrote:\n>>>>>>> Rebase due to conflict with 3ec20c7091e97.\n>>>>>>\n>>>>>> This is failing to apply probably since 4a539a25ebfc48329fd656a95f3c1eb2cda38af3.\n>>>>>> Could you rebase? (Also, not sure if this can be set as RFC?)\n>>>>>\n>>>>> I updated the patch. Attached.\n>>>>\n>>>> Thanks a lot! I'm sorry I missed Justin's ping, and it I just\n>>>> realized that my cron job that used to warn me about cfbot failure was\n>>>> broken :(\n>>>>\n>>>>> +/* Compute the difference between two BufferUsage */\n>>>>> +BufferUsage\n>>>>> +ComputeBufferCounters(BufferUsage *start, BufferUsage *stop)\n>>>>>\n>>>>> Since BufferUsageAccumDiff() was exported, ComputeBufferCounters() is\n>>>>> no longer necessary. In the patched version, BufferUsageAccumDiff() is\n>>>>> used to calculate the difference of buffer usage.\n>>>>\n>>>> Indeed, exposing BufferUsageAccumDiff wa definitely a good thing!\n>>>>\n>>>>> + if (es->summary && (planduration || es->buffers))\n>>>>> + ExplainOpenGroup(\"Planning\", \"Planning\", true, es);\n>>>>>\n>>>>> Isn't it more appropriate to check \"bufusage\" instead of \"es->buffers\" here?\n>>>>> The patch changes the code so that \"bufusage\" is checked.\n>>>>\n>>>> AFAICS not unless ExplainOneQuery is also changed to pass a NULL\n>>>> pointer if the BUFFER option wasn't provided (and maybe also\n>>>> optionally skip the planning buffer computation). 
With this version\n>>>> you now get:\n>>>>\n>>>> =# explain (analyze, buffers off) update t1 set id = id;\n>>>> QUERY PLAN\n>>>> -------------------------------------------------------------------------------------------------------\n>>>> Update on t1 (cost=0.00..22.70 rows=1270 width=42) (actual\n>>>> time=0.170..0.170 rows=0 loops=1)\n>>>> -> Seq Scan on t1 (cost=0.00..22.70 rows=1270 width=42) (actual\n>>>> time=0.050..0.054 rows=1 loops=1)\n>>>> Planning Time: 1.461 ms\n>>>> Buffers: shared hit=25\n>>>> Execution Time: 1.071 ms\n>>>> (5 rows)\n>>>>\n>>>> which seems wrong to me.\n>>>>\n>>>> I reused the es->buffers to avoid having needing something like:\n>>>\n>>> Yes, you're right! So I updated the patch as you suggested.\n>>> Attached is the updated version of the patch.\n>>> Thanks for the review!\n>>\n>>\n>> Thanks a lot, it all looks good to me!\n> \n> Thanks! Barring any objection, I will commit the latest version of the patch.\n\nPushed! Thanks a lot!!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 3 Apr 2020 11:31:20 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16109: Postgres planning time is high across version (Expose\n buffer usage during planning in EXPLAIN)"
}
] |
[
{
"msg_contents": "Hi,\n\nAttached is a draft of the press release for the update release going\nout on 2010-11-14. Please provide feedback, particularly on the\ntechnical accuracy of the statements.\n\nThanks!\n\nJonathan",
"msg_date": "Tue, 12 Nov 2019 17:17:37 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "2019-11-14 Press Release Draft"
},
{
"msg_contents": "> * Several fixes for logical replication, including a failure when the publisher\n> & subscriber had different REPLICA IDENTITY columns set.\n\n\"&\" should probably be \"and\" as I don't see it used like that in any\nother release notes.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/\n\n\n",
"msg_date": "Thu, 14 Nov 2019 07:36:02 -0500",
"msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>",
"msg_from_op": false,
"msg_subject": "Re: 2019-11-14 Press Release Draft"
},
{
"msg_contents": "On Tue, 12 Nov 2019 at 22:17, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> Attached is a draft of the press release for the update release going\n> out on 2010-11-14. Please provide feedback, particularly on the\n> technical accuracy of the statements.\n\nText\n\n by the `position()`\n\nshould probably either be\n\n by `position()`\n\nor\n\n by the `position()` function\n\nno?\n\nGeoff\n\n\n",
"msg_date": "Thu, 14 Nov 2019 12:46:40 +0000",
"msg_from": "Geoff Winkless <pgsqladmin@geoff.dj>",
"msg_from_op": false,
"msg_subject": "Re: 2019-11-14 Press Release Draft"
},
{
"msg_contents": "On 11/14/19 7:46 AM, Geoff Winkless wrote:\n> On Tue, 12 Nov 2019 at 22:17, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>> Attached is a draft of the press release for the update release going\n>> out on 2010-11-14. Please provide feedback, particularly on the\n>> technical accuracy of the statements.\n> \n> Text\n> \n> by the `position()`\n> \n> should probably either be\n> \n> by `position()`\n> \n> or\n> \n> by the `position()` function\n\nThanks Geoff & Sehrope for your suggestions / corrections. I have\nincorporated them, as well as a few other things I noticed as well.\n\nThe release is now out!\n\nJonathan",
"msg_date": "Thu, 14 Nov 2019 10:07:03 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: 2019-11-14 Press Release Draft"
}
] |
[
{
"msg_contents": "Hello hackers,\n\n From the advanced bikeshedding department: I'd like my psql\ntranscripts to have the usual alignment, but be easier to copy and\npaste later without having weird prompt stuff in the middle. How\nabout a prompt format directive %w that means \"whitespace of the same\nwidth as %/\"? Then you can make set your PROMPT2 to '%w ' and it\nbecomes invisible:\n\npgdu=# create table foo (\n i int,\n j int\n );\nCREATE TABLE\npgdu=#",
"msg_date": "Wed, 13 Nov 2019 16:14:44 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Invisible PROMPT2"
},
{
"msg_contents": "st 13. 11. 2019 v 4:15 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> Hello hackers,\n>\n> From the advanced bikeshedding department: I'd like my psql\n> transcripts to have the usual alignment, but be easier to copy and\n> paste later without having weird prompt stuff in the middle. How\n> about a prompt format directive %w that means \"whitespace of the same\n> width as %/\"? Then you can make set your PROMPT2 to '%w ' and it\n> becomes invisible:\n>\n> pgdu=# create table foo (\n> i int,\n> j int\n> );\n> CREATE TABLE\n> pgdu=#\n>\n\n+1\n\nPavel\n\nst 13. 11. 2019 v 4:15 odesílatel Thomas Munro <thomas.munro@gmail.com> napsal:Hello hackers,\n\n From the advanced bikeshedding department: I'd like my psql\ntranscripts to have the usual alignment, but be easier to copy and\npaste later without having weird prompt stuff in the middle. How\nabout a prompt format directive %w that means \"whitespace of the same\nwidth as %/\"? Then you can make set your PROMPT2 to '%w ' and it\nbecomes invisible:\n\npgdu=# create table foo (\n i int,\n j int\n );\nCREATE TABLE\npgdu=#+1Pavel",
"msg_date": "Wed, 13 Nov 2019 06:58:11 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n\n> Hello hackers,\n>\n> From the advanced bikeshedding department: I'd like my psql\n> transcripts to have the usual alignment, but be easier to copy and\n> paste later without having weird prompt stuff in the middle. How\n> about a prompt format directive %w that means \"whitespace of the same\n> width as %/\"? Then you can make set your PROMPT2 to '%w ' and it\n> becomes invisible:\n\nThat only lines up nicely if %/ is the only variable-width directive in\nPROMPT1. How about a circumfix directive (like the existing %[ ... %])\nthat replaces everything inside with whitespace, but keeps the width?\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law\n\n\n",
"msg_date": "Wed, 13 Nov 2019 11:27:00 +0000",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> From the advanced bikeshedding department: I'd like my psql\n>> transcripts to have the usual alignment, but be easier to copy and\n>> paste later without having weird prompt stuff in the middle. How\n>> about a prompt format directive %w that means \"whitespace of the same\n>> width as %/\"? Then you can make set your PROMPT2 to '%w ' and it\n>> becomes invisible:\n\n> That only lines up nicely if %/ is the only variable-width directive in\n> PROMPT1.\n\nYeah, that was my first reaction too.\n\n> How about a circumfix directive (like the existing %[ ... %])\n> that replaces everything inside with whitespace, but keeps the width?\n\nOr just define %w as meaning \"whitespace of the same width as\nPROMPT1\". You couldn't use it *in* PROMPT1, then, but I see\nno use-case for that anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Nov 2019 09:47:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 09:47:01AM -0500, Tom Lane wrote:\n> ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> >> From the advanced bikeshedding department: I'd like my psql\n> >> transcripts to have the usual alignment, but be easier to copy and\n> >> paste later without having weird prompt stuff in the middle. How\n> >> about a prompt format directive %w that means \"whitespace of the same\n> >> width as %/\"? Then you can make set your PROMPT2 to '%w ' and it\n> >> becomes invisible:\n> \n> > That only lines up nicely if %/ is the only variable-width directive in\n> > PROMPT1.\n> \n> Yeah, that was my first reaction too.\n> \n> > How about a circumfix directive (like the existing %[ ... %])\n> > that replaces everything inside with whitespace, but keeps the width?\n> \n> Or just define %w as meaning \"whitespace of the same width as\n> PROMPT1\". You couldn't use it *in* PROMPT1, then, but I see\n> no use-case for that anyway.\n\n+1 for doing it this way. Would it make more sense to error out if\nsomebody tried to set that in PROMPT1, or ignore it, or...?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 13 Nov 2019 18:49:20 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "On 11/13/19 12:49 PM, David Fetter wrote:\n>> Or just define %w as meaning \"whitespace of the same width as\n>> PROMPT1\". You couldn't use it *in* PROMPT1, then, but I see\n>> no use-case for that anyway.\n> \n> +1 for doing it this way. Would it make more sense to error out if\n> somebody tried to set that in PROMPT1, or ignore it, or...?\n\nDefine it as \"difference between PROMPT1's width and the total width\nof non-%w elements in this prompt\". Then it has a defined meaning in\nPROMPT1 too (which could be arbitrary if it appears only once, but\nhas to be zero in case it appears more than once).\n\nEaster egg: expand it to backspaces if used in PROMPT2 among other\nstuff that's already wider than PROMPT1. ;)\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 13 Nov 2019 13:03:08 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "On 2019-Nov-13, David Fetter wrote:\n\n> On Wed, Nov 13, 2019 at 09:47:01AM -0500, Tom Lane wrote:\n\n> > > How about a circumfix directive (like the existing %[ ... %])\n> > > that replaces everything inside with whitespace, but keeps the width?\n> > \n> > Or just define %w as meaning \"whitespace of the same width as\n> > PROMPT1\". You couldn't use it *in* PROMPT1, then, but I see\n> > no use-case for that anyway.\n> \n> +1 for doing it this way. Would it make more sense to error out if\n> somebody tried to set that in PROMPT1, or ignore it, or...?\n\nThis seems way too specific to me. I like the \"circumfix\" directive\nbetter, because it allows one to do more things. I don't have any\nimmediate use for it, but it doesn't seem completely far-fetched that\nthere are some.\n\nBTW the psql manual says that %[ and %] were plagiarized from tcsh, but\nthat's a lie: tcsh does not contain such a feature. Bash does, however.\n(I guess not many people read the tcsh manual.)\n\nNeither bash nor tcsh have a feature to return whitespace of anything;\nwe're in a green field here ISTM.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 13 Nov 2019 15:06:08 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 03:06:08PM -0300, Alvaro Herrera wrote:\n> On 2019-Nov-13, David Fetter wrote:\n> \n> > On Wed, Nov 13, 2019 at 09:47:01AM -0500, Tom Lane wrote:\n> \n> > > > How about a circumfix directive (like the existing %[ ... %])\n> > > > that replaces everything inside with whitespace, but keeps the width?\n> > > \n> > > Or just define %w as meaning \"whitespace of the same width as\n> > > PROMPT1\". You couldn't use it *in* PROMPT1, then, but I see\n> > > no use-case for that anyway.\n> > \n> > +1 for doing it this way. Would it make more sense to error out if\n> > somebody tried to set that in PROMPT1, or ignore it, or...?\n> \n> This seems way too specific to me. I like the \"circumfix\" directive\n> better, because it allows one to do more things. I don't have any\n> immediate use for it, but it doesn't seem completely far-fetched that\n> there are some.\n> \n> BTW the psql manual says that %[ and %] were plagiarized from tcsh, but\n> that's a lie: tcsh does not contain such a feature. Bash does, however.\n> (I guess not many people read the tcsh manual.)\n> \n> Neither bash nor tcsh have a feature to return whitespace of anything;\n> we're in a green field here ISTM.\n\nSo something like %w[...%w] where people could put things like PROMPT1\ninside?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 13 Nov 2019 19:12:16 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "On 2019-Nov-13, David Fetter wrote:\n\n> On Wed, Nov 13, 2019 at 03:06:08PM -0300, Alvaro Herrera wrote:\n> > On 2019-Nov-13, David Fetter wrote:\n> > \n> > > On Wed, Nov 13, 2019 at 09:47:01AM -0500, Tom Lane wrote:\n> > \n> > > > > How about a circumfix directive (like the existing %[ ... %])\n> > > > > that replaces everything inside with whitespace, but keeps the width?\n\n> > This seems way too specific to me. I like the \"circumfix\" directive\n> > better, because it allows one to do more things. I don't have any\n> > immediate use for it, but it doesn't seem completely far-fetched that\n> > there are some.\n\n> So something like %w[...%w] where people could put things like PROMPT1\n> inside?\n\nHmm, (I'm not sure your proposed syntax works, but let's assume that\nit does.) I'm saying you'd define\n\\set PROMPT1 '%a%b%c '\n\\set PROMPT2 '%w[%a%b%c %w]'\n\nand you'd end up with matching indentation on multiline queries.\n\nI'm not sure that we'd need to make something like this work:\n PROMPT1=\"%w[$PROMPT1%w]\"\nwhich I think is what you're saying.\n\n\nWe already have \"%:PROMPT1:\" but that expands to the literal value of\nprompt1, not to the value that prompt1 would expand to:\n\n55432 13devel 11214=# \\set PROMPT2 'hello %:PROMPT1: bye'\n55432 13devel 11214=# select<Enter>\nhello %[%033[35m%]%> %:VERSION_NAME: %p%[%033[0m%]%R%# bye\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 13 Nov 2019 15:58:38 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 03:58:38PM -0300, Alvaro Herrera wrote:\n> On 2019-Nov-13, David Fetter wrote:\n> \n> > On Wed, Nov 13, 2019 at 03:06:08PM -0300, Alvaro Herrera wrote:\n> > > On 2019-Nov-13, David Fetter wrote:\n> > > \n> > > > On Wed, Nov 13, 2019 at 09:47:01AM -0500, Tom Lane wrote:\n> > > \n> > > > > > How about a circumfix directive (like the existing %[ ... %])\n> > > > > > that replaces everything inside with whitespace, but keeps the width?\n> \n> > > This seems way too specific to me. I like the \"circumfix\" directive\n> > > better, because it allows one to do more things. I don't have any\n> > > immediate use for it, but it doesn't seem completely far-fetched that\n> > > there are some.\n> \n> > So something like %w[...%w] where people could put things like PROMPT1\n> > inside?\n> \n> Hmm, (I'm not sure your proposed syntax works, but let's assume that\n> it does.) I'm saying you'd define\n> \\set PROMPT1 '%a%b%c '\n> \\set PROMPT2 '%w[%a%b%c %w]'\n> \n> and you'd end up with matching indentation on multiline queries.\n> \n> I'm not sure that we'd need to make something like this work:\n> PROMPT1=\"%w[$PROMPT1%w]\"\n> which I think is what you're saying.\n\nPROMPT2=\"%w[$PROMPT1%w]\", and basically yes.\n\n> We already have \"%:PROMPT1:\" but that expands to the literal value of\n> prompt1, not to the value that prompt1 would expand to:\n\nYeah, that's not so great for this usage. I guess \"expand variables\"\ncould be a separate useful feature (and patch) all by itself...\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 13 Nov 2019 20:57:04 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "At Wed, 13 Nov 2019 20:57:04 +0100, David Fetter <david@fetter.org> wrote in \n> On Wed, Nov 13, 2019 at 03:58:38PM -0300, Alvaro Herrera wrote:\n> > On 2019-Nov-13, David Fetter wrote:\n> > \n> > > On Wed, Nov 13, 2019 at 03:06:08PM -0300, Alvaro Herrera wrote:\n> > > > On 2019-Nov-13, David Fetter wrote:\n> > > > \n> > > > > On Wed, Nov 13, 2019 at 09:47:01AM -0500, Tom Lane wrote:\n> > > > \n> > > > > > > How about a circumfix directive (like the existing %[ ... %])\n> > > > > > > that replaces everything inside with whitespace, but keeps the width?\n> > \n> > > > This seems way too specific to me. I like the \"circumfix\" directive\n> > > > better, because it allows one to do more things. I don't have any\n> > > > immediate use for it, but it doesn't seem completely far-fetched that\n> > > > there are some.\n> > \n> > > So something like %w[...%w] where people could put things like PROMPT1\n> > > inside?\n> > \n> > Hmm, (I'm not sure your proposed syntax works, but let's assume that\n> > it does.) I'm saying you'd define\n> > \\set PROMPT1 '%a%b%c '\n> > \\set PROMPT2 '%w[%a%b%c %w]'\n> > \n> > and you'd end up with matching indentation on multiline queries.\n\nThis seems assuming %x are a kind of stable (until semicolon)\nfunction. But at least %`..` can be volatile. So, I think the %w\nthing in PROMPT2 should be able to refer the actual prompt string\nresulted from PROMPT1.\n\n> > I'm not sure that we'd need to make something like this work:\n> > PROMPT1=\"%w[$PROMPT1%w]\"\n> > which I think is what you're saying.\n> \n> PROMPT2=\"%w[$PROMPT1%w]\", and basically yes.\n\nLike this. 
Or maybe a bit too much, and I haven't come up with a\nrealistic use-case, but I think of the following syntax.\n\n\\set PROMPT1 '%w[%a%b%c%w] '\n\\set PROMPT2 '%w '\n\nwhere %w in PROMPT2 is replaced by whitespace of the same length\nas the output of the %w[..%w] part in PROMPT1.\n\n> > We already have \"%:PROMPT1:\" but that expands to the literal value of\n> > prompt1, not to the value that prompt1 would expand to:\n> \n> Yeah, that's not so great for this usage. I guess \"expand variables\"\n> could be a separate useful feature (and patch) all by itself...\n\n+1.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 14 Nov 2019 15:37:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> This seems assuming %x are a kind of stable (until semicolon)\n> function. But at least %`..` can be volatile. So, I think the %w\n> thing in PROMPT2 should be able to refer the actual prompt string\n> resulted from PROMPT1.\n\nOh, that's a good point. But it actually leads to a much simpler\ndefinition and implementation than the other ideas we've kicked\naround: define %w as \"whitespace equal to the length of the\nlast-generated PROMPT1 string (initially empty)\", and we just\nhave to save PROMPT1 each time we generate it.\n\nExcept ... I'm not sure how to deal with hidden escape sequences.\nWe should probably assume that anything inside %[...%] has width\nzero, but how would we remember that?\n\nMaybe count the width of non-escape characters whenever we\ngenerate PROMPT1, and just save that number not the string?\nIt'd add overhead that's useless when there's no %w, but\nprobably not enough to care about.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Nov 2019 09:58:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "On Fri, Nov 15, 2019 at 3:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > This seems assuming %x are a kind of stable (until semicolon)\n> > function. But at least %`..` can be volatile. So, I think the %w\n> > thing in PROMPT2 should be able to refer the actual prompt string\n> > resulted from PROMPT1.\n>\n> Oh, that's a good point. But it actually leads to a much simpler\n> definition and implementation than the other ideas we've kicked\n> around: define %w as \"whitespace equal to the length of the\n> last-generated PROMPT1 string (initially empty)\", and we just\n> have to save PROMPT1 each time we generate it.\n>\n> Except ... I'm not sure how to deal with hidden escape sequences.\n> We should probably assume that anything inside %[...%] has width\n> zero, but how would we remember that?\n>\n> Maybe count the width of non-escape characters whenever we\n> generate PROMPT1, and just save that number not the string?\n> It'd add overhead that's useless when there's no %w, but\n> probably not enough to care about.\n\nNice idea. Here's one like that, that just does the counting at the\nend and looks out for readline control codes. It's pretty naive about\nwhat \"width\" means though: you'll get two spaces for UTF-8 encoded é,\nand I suppose a complete implementation would know about the half\nwidth/full width thing for Chinese and Japanese etc.",
"msg_date": "Mon, 18 Nov 2019 10:11:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "On 2019-Nov-18, Thomas Munro wrote:\n\n> Nice idea. Here's one like that, that just does the counting at the\n> end and looks out for readline control codes. It's pretty naive about\n> what \"width\" means though: you'll get two spaces for UTF-8 encoded �,\n> and I suppose a complete implementation would know about the half\n> width/full width thing for Chinese and Japanese etc.\n\nHmm ... is this related to what Juan Jos� posted at\nhttps://postgr.es/m/CAC+AXB28ADgwdNRA=aAoWDYPqO1DZR+5NTO8iXGSsFrXyVpqYQ@mail.gmail.com\n? That's backend code of course, though.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 17 Nov 2019 21:49:24 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "On Mon, Nov 18, 2019 at 1:49 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Nov-18, Thomas Munro wrote:\n> > Nice idea. Here's one like that, that just does the counting at the\n> > end and looks out for readline control codes. It's pretty naive about\n> > what \"width\" means though: you'll get two spaces for UTF-8 encoded é,\n> > and I suppose a complete implementation would know about the half\n> > width/full width thing for Chinese and Japanese etc.\n>\n> Hmm ... is this related to what Juan José posted at\n> https://postgr.es/m/CAC+AXB28ADgwdNRA=aAoWDYPqO1DZR+5NTO8iXGSsFrXyVpqYQ@mail.gmail.com\n> ? That's backend code of course, though.\n\nYeah. Maybe pg_wcswidth() would be OK though, and it's available in\npsql, though I guess you'd have to make a copy with the escaped bits\nstripped out.\n\n\n",
"msg_date": "Mon, 18 Nov 2019 14:40:50 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Nov 18, 2019 at 1:49 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> On 2019-Nov-18, Thomas Munro wrote:\n>>> Nice idea. Here's one like that, that just does the counting at the\n>>> end and looks out for readline control codes. It's pretty naive about\n>>> what \"width\" means though: you'll get two spaces for UTF-8 encoded é,\n>>> and I suppose a complete implementation would know about the half\n>>> width/full width thing for Chinese and Japanese etc.\n\n> Yeah. Maybe pg_wcswidth() would be OK though, and it's available in\n> psql, though I guess you'd have to make a copy with the escaped bits\n> stripped out.\n\nRight, you should use pg_wcswidth() or the underlying PQdsplen() function\nto compute display width. The latter might be more convenient since\nyou could apply it character by character rather than making a copy\nof the string.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Nov 2019 12:21:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 6:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Yeah. Maybe pg_wcswidth() would be OK though, and it's available in\n> > psql, though I guess you'd have to make a copy with the escaped bits\n> > stripped out.\n>\n> Right, you should use pg_wcswidth() or the underlying PQdsplen() function\n> to compute display width. The latter might be more convenient since\n> you could apply it character by character rather than making a copy\n> of the string.\n\nRight, a PQdsplen()/PQmblen() loop works nicely, as attached.\n\nI spotted a potential problem: I suppose I could write a PROMPT1 that\nincludes an invalid multibyte sequence at the end of the buffer and\ntrick PQmblen() or PQdsplen() into reading a few bytes past the end.\nTwo defences against that would be (1) use pg_encoding_verifymb()\ninstead of PQmblen() and (2) use pg_encoding_max_length() to make sure\nyou can't get close enough to the end of the buffer, but neither of\nthose functions are available to psql.",
"msg_date": "Tue, 19 Nov 2019 10:07:33 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Right, a PQdsplen()/PQmblen() loop works nicely, as attached.\n\n> I spotted a potential problem: I suppose I could write a PROMPT1 that\n> includes an invalid multibyte sequence at the end of the buffer and\n> trick PQmblen() or PQdsplen() into reading a few bytes past the end.\n> Two defences against that would be (1) use pg_encoding_verifymb()\n> instead of PQmblen() and (2) use pg_encoding_max_length() to make sure\n> you can't get close enough to the end of the buffer, but neither of\n> those functions are available to psql.\n\nYou should follow the logic in pg_wcswidth: compute PQmblen() first,\nand bail out if it's more than the remaining string length, otherwise\nit's ok to apply PQdsplen().\n\nIt might be a good idea to explicitly initialize last_prompt1_width to\nzero, for clarity.\n\nShould the user docs explicitly say \"of the same width as the most recent\noutput of PROMPT1\", as you have in the comments? That seems a more\nprecise specification, and it will eliminate some questions people will\notherwise ask.\n\nLGTM otherwise.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Nov 2019 18:09:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 12:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> You should follow the logic in pg_wcswidth: compute PQmblen() first,\n> and bail out if it's more than the remaining string length, otherwise\n> it's ok to apply PQdsplen().\n\nGot it. I was worried that it wasn't safe to call even PQmblen(),\nbecause I didn't know a fact about all encodings: as described in the\ncomment of pg_gb18030_mblen(), all implementations read only the first\nbyte to determine the length, except for GB18030 which reads the\nsecond byte too, and that's OK because there's always a null\nterminator.\n\n> It might be a good idea to explicitly initialize last_prompt1_width to\n> zero, for clarity.\n>\n> Should the user docs explicitly say \"of the same width as the most recent\n> output of PROMPT1\", as you have in the comments? That seems a more\n> precise specification, and it will eliminate some questions people will\n> otherwise ask.\n>\n> LGTM otherwise.\n\nDone, and pushed. I also skipped negative results from PQdsplen like\npg_wcswidth() does (that oversight explained why a non-readline build\nshowed the correct alignment for PROMPT1 '%[%033[1m%]%M\n%n@%/%R%[%033[0m%]%# ' by strange concindence).\n\nThanks all for the feedback. I think the new bikeshed colour looks good.\n\n\n",
"msg_date": "Tue, 19 Nov 2019 16:02:48 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 04:02:48PM +1300, Thomas Munro wrote:\n> On Tue, Nov 19, 2019 at 12:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > You should follow the logic in pg_wcswidth: compute PQmblen() first,\n> > and bail out if it's more than the remaining string length, otherwise\n> > it's ok to apply PQdsplen().\n> \n> Got it. I was worried that it wasn't safe to call even PQmblen(),\n> because I didn't know a fact about all encodings: as described in the\n> comment of pg_gb18030_mblen(), all implementations read only the first\n> byte to determine the length, except for GB18030 which reads the\n> second byte too, and that's OK because there's always a null\n> terminator.\n> \n> > It might be a good idea to explicitly initialize last_prompt1_width to\n> > zero, for clarity.\n> >\n> > Should the user docs explicitly say \"of the same width as the most recent\n> > output of PROMPT1\", as you have in the comments? That seems a more\n> > precise specification, and it will eliminate some questions people will\n> > otherwise ask.\n> >\n> > LGTM otherwise.\n> \n> Done, and pushed. I also skipped negative results from PQdsplen like\n> pg_wcswidth() does (that oversight explained why a non-readline build\n> showed the correct alignment for PROMPT1 '%[%033[1m%]%M\n> %n@%/%R%[%033[0m%]%# ' by strange concindence).\n> \n> Thanks all for the feedback. I think the new bikeshed colour looks good.\n\nPlease find attached some polka dots for the bike shed :)\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Tue, 19 Nov 2019 22:37:26 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "Hi,\n\nI noticed that this patch does not work when PROMPT1 contains a new line,\nsince the whole length of PROMPT1 is taken into account for the length of\n%w.\nAttached screenshot shows the issue on my psql, with the following PROMPT\nvariables (colors edited out for readability):\n\n\\set PROMPT1 '\\n[pid:%p] %n :: %`hostname`:%> ‹%/› \\n› '\n\\set PROMPT2 '%w'\n\nNotice in the screenshot that just after inputting a newline, my cursor is\nfar to the right.\nThe length of %w should probably be computed starting from the last newline\nin PROMPT1.\n\nI could technically get rid of my newline, but since my prompt can get\npretty long, i like the comfort of having my first line of sql start right\nat the left of my terminal.\n\nAlso attached is a trivial patch to fix this issue, which I have not\nextensively tested (works for me at least), and might not be the right way\nto do it, but it's a start.\nOtherwise, nice feature, I like it!\n\nRegards,\nMaxence",
"msg_date": "Wed, 27 Nov 2019 16:30:12 +0100",
"msg_from": "Maxence Ahlouche <maxence.ahlouche@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "Maxence Ahlouche <maxence.ahlouche@gmail.com> writes:\n> The length of %w should probably be computed starting from the last newline\n> in PROMPT1.\n\nGood idea, but I think you need to account for \"visible\" (ie, if the\nnewline is inside RL_PROMPT_START_IGNORE, it shouldn't change the width).\nIt might be best to add logic inside the existing \"if (visible)\" instead\nof making a new top-level case.\n\nAnother special case that somebody's likely to whine about is \\t, though\nto handle that we'd have to make assumptions about the tab stop distance.\nMaybe assuming that it's 8 is good enough.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Nov 2019 11:09:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "On Wed, 27 Nov 2019 at 17:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Good idea, but I think you need to account for \"visible\" (ie, if the\n> newline is inside RL_PROMPT_START_IGNORE, it shouldn't change the width).\n> It might be best to add logic inside the existing \"if (visible)\" instead\n> of making a new top-level case.\n>\n\nRight, I assumed that it was safe given that only terminal control\ncharacters were invisible.\nSince the title of the terminal window can be changed as well via control\ncharacters, it's probably better not to make that assumption.\n\nI updated the patch accordingly.\n\n\n> Another special case that somebody's likely to whine about is \\t, though\n> to handle that we'd have to make assumptions about the tab stop distance.\n> Maybe assuming that it's 8 is good enough.\n>\n\nThe problem with tabs is that any user can set their tabstops to whatever\nthey want, and a tab doesn't have a fixed width, it just goes up to the\nnext tab stop.\nOne way to do it would be to add tabs wherever necessary in prompt2 to make\nsure they have the same size as in prompt1 (a list of numbers of spaces,\nwhich we would concatenate with a tab?), but I'm not sure it's worth the\neffort.",
"msg_date": "Sun, 22 Dec 2019 17:43:26 +0100",
"msg_from": "Maxence Ahlouche <maxence.ahlouche@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Invisible PROMPT2"
},
{
"msg_contents": "On Mon, Dec 23, 2019 at 5:43 AM Maxence Ahlouche\n<maxence.ahlouche@gmail.com> wrote:\n> On Wed, 27 Nov 2019 at 17:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Good idea, but I think you need to account for \"visible\" (ie, if the\n>> newline is inside RL_PROMPT_START_IGNORE, it shouldn't change the width).\n>> It might be best to add logic inside the existing \"if (visible)\" instead\n>> of making a new top-level case.\n>\n> Right, I assumed that it was safe given that only terminal control characters were invisible.\n> Since the title of the terminal window can be changed as well via control characters, it's probably better not to make that assumption.\n>\n> I updated the patch accordingly.\n\nPushed. Thanks!\n\n\n",
"msg_date": "Mon, 10 Feb 2020 13:30:39 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Invisible PROMPT2"
}
] |
[
{
"msg_contents": "Dear Hackers\r\n\r\nI've been using SPI to execute some queries and this time I've tried to \r\nissue CREATE TABLE commands through SPI. I've been getting the message \r\n\"ERROR: CREATE TABLE AS is not allowed in a non-volatile function\".\r\n\r\nI'm a bit confused because my functions are set as volatile when I got \r\nthat result. I was sure I'd be able to issue CREATE TABLE through SPI \r\nbecause a possible return value of SPI_execute is SPI_OK_UTILITY if a \r\nutility command such as CREATE TABLE was executed.\r\n\r\nMaybe the caveat is the following. I am actually invoking my function \r\nthrough CREATE LANGUAGE's inline handler using an anonymous do block. So \r\nmaybe I need to take some additional considerations for this reason?\r\n\r\nThe following is what my functions look like:\r\n\r\nCREATE FUNCTION myLanguage.myLanguage_function_call_handler()\r\nRETURNS language_handler\r\nAS 'MODULE_PATHNAME','myLanguage_function_call_handler'\r\nLANGUAGE C VOLATILE;\r\n\r\nCREATE FUNCTION myLanguage.myLanguage_inline_function_handler(internal)\r\nRETURNS void\r\nAS 'MODULE_PATHNAME','myLanguage_inline_function_handler'\r\nLANGUAGE C VOLATILE;\r\n\r\nCREATE LANGUAGE myLanguage\r\nHANDLER myLanguage.myLanguage_function_call_handler\r\nINLINE myLanguage.myLanguage_inline_function_handler;\r\nCOMMENT ON LANGUAGE myLanguage IS 'My Language';\r\n\r\n\r\nHave I correctly approached the issue? Maybe there is a workaround?\r\n\r\nBest regards\r\nTom\r\n",
"msg_date": "Wed, 13 Nov 2019 05:09:31 +0000",
"msg_from": "Tom Mercha <mercha_t@hotmail.com>",
"msg_from_op": true,
"msg_subject": "SPI error with non-volatile functions"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-13 05:09:31 +0000, Tom Mercha wrote:\n> I've been using SPI to execute some queries and this time I've tried to \n> issue CREATE TABLE commands through SPI. I've been getting the message \n> \"ERROR: CREATE TABLE AS is not allowed in a non-volatile function\".\n\nAny chance you're specifying read_only = true to\nSPI_execute()/execute_plan()/...?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Nov 2019 21:13:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SPI error with non-volatile functions"
},
{
"msg_contents": "On 13/11/2019 06:13, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2019-11-13 05:09:31 +0000, Tom Mercha wrote:\r\n>> I've been using SPI to execute some queries and this time I've tried to\r\n>> issue CREATE TABLE commands through SPI. I've been getting the message\r\n>> \"ERROR: CREATE TABLE AS is not allowed in a non-volatile function\".\r\n> \r\n> Any chance you're specifying read_only = true to\r\n> SPI_execute()/execute_plan()/...?\r\n> \r\n> Greetings,\r\n> \r\n> Andres Freund\r\n> \r\n\r\nDear Andres\r\n\r\nThat's exactly what's up! Everything is working as intended now. So \r\nsorry this was a bit silly of me, I didn't understand the message as a \r\nreference to that configuration.\r\n\r\nThanks so much.\r\n\r\nBest regards\r\nTom\r\n",
"msg_date": "Wed, 13 Nov 2019 05:23:27 +0000",
"msg_from": "Tom Mercha <mercha_t@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SPI error with non-volatile functions"
}
] |
[
{
"msg_contents": "Is there anyone who can help me with what the architecture of an extension should look like in the PostgreSQL database?\n\n\nRegards,\n____________________________________\nYonathan Misgan\nAssistant Lecturer, @ Debre Tabor University\nFaculty of Technology\nDepartment of Computer Science\nStudying MSc in Computer Science (in Data and Web Engineering)\n@ Addis Ababa University\nE-mail: yonamis@dtu.edu.et<mailto:yonamis@dtu.edu.et>\n yonathanmisgan.4@gmail.com<mailto:yonathanmisgan.4@gmail.com>\nTel: (+251)-911180185 (mob)",
"msg_date": "Wed, 13 Nov 2019 08:03:57 +0000",
"msg_from": "Yonatan Misgan <yonamis@dtu.edu.et>",
"msg_from_op": true,
"msg_subject": "Extension development"
}
] |
[
{
"msg_contents": "Hi,\nTrivial patch:\n- remove a gcc warning (since commit 7a0574b5)\nexpression which evaluates to zero treated as a null pointer constant of\n type 'HeapTuple' (aka 'struct HeapTupleData *')\n\n- always use \"if (newtuple == NULL)\" rather than mixing !newtuple and\nnewtuple == NULL\n\nRegards\nDidier",
"msg_date": "Wed, 13 Nov 2019 11:29:26 +0100",
"msg_from": "didier <did447@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] gcc warning 'expression which evaluates to zero treated as a\n null pointer'"
},
{
"msg_contents": "didier <did447@gmail.com> writes:\n> Trivial patch:\n> - remove a gcc warning (since commit 7a0574b5)\n> expression which evaluates to zero treated as a null pointer constant of\n> type 'HeapTuple' (aka 'struct HeapTupleData *')\n\nHmm, the initializations \"HeapTuple newtuple = false\" are certainly\nbogus-looking and not per project style; I wonder who's to blame for\nthose? (I do not see what 7a0574b5 would have had to do with it;\nthat didn't affect any backend code.)\n\n> - always use \"if (newtuple == NULL)\" rather than mixing !newtuple and\n> newtuple == NULL\n\nDon't particularly agree with these changes though. \"if (!ptr)\" is\na very common C idiom, and no programmer would tolerate a compiler\nthat warned about it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Nov 2019 14:52:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] gcc warning 'expression which evaluates to zero treated\n as a null pointer'"
},
{
"msg_contents": "Hi,\nOn Wed, Nov 13, 2019 at 8:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> didier <did447@gmail.com> writes:\n> > Trivial patch:\n> > - remove a gcc warning (since commit 7a0574b5)\n> > expression which evaluates to zero treated as a null pointer constant of\n> > type 'HeapTuple' (aka 'struct HeapTupleData *')\n>\n> Hmm, the initializations \"HeapTuple newtuple = false\" are certainly\n> bogus-looking and not per project style; I wonder who's to blame for\n> those? (I do not see what 7a0574b5 would have had to do with it;\n> that didn't affect any backend code.)\n\nMy mistake, it's not gcc but clang for JIT, maybe because it could\nchange the definition of false?\nclang version: 6.0.0-1ubuntu2\nclang -E output before 7a0574b5\nHeapTuple newtuple = 0;\nwith 7a0574b5\nHeapTuple newtuple = ((bool) 0);\n\n>\n> > - always use \"if (newtuple == NULL)\" rather than mixing !newtuple and\n> > newtuple == NULL\n>\n> Don't particularly agree with these changes though. \"if (!ptr)\" is\n> a very common C idiom, and no programmer would tolerate a compiler\n> that warned about it.\nThere's no warning, it's stylistic. In the same function there are both\nforms a couple of lines apart: \"if (!ptr)\" followed by \"if (ptr ==\nNULL)\"; using only one form is smoother on the brain, at least mine.\n\nRegards\nDidier\n\n\n",
"msg_date": "Wed, 13 Nov 2019 21:38:12 +0100",
"msg_from": "didier <did447@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] gcc warning 'expression which evaluates to zero treated\n as a null pointer'"
},
{
"msg_contents": "didier <did447@gmail.com> writes:\n> clang -E output before 7a0574b5\n> HeapTuple newtuple = 0;\n> with 7a0574b5\n> HeapTuple newtuple = ((bool) 0);\n\nHm, did you re-run configure after 7a0574b5? If you didn't, it would\nhave gone through the not-stdbool.h path in c.h, which might account\nfor this. It's a good catch though, even if by accident :-)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Nov 2019 16:01:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] gcc warning 'expression which evaluates to zero treated\n as a null pointer'"
},
{
"msg_contents": "Hi,\n\nOn Wed, Nov 13, 2019 at 10:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> didier <did447@gmail.com> writes:\n> > clang -E output before 7a0574b5\n> > HeapTuple newtuple = 0;\n> > with 7a0574b5\n> > HeapTuple newtuple = ((bool) 0);\n>\n> Hm, did you re-run configure after 7a0574b5? If you didn't, it would\n> have gone through the not-stdbool.h path in c.h, which might account\n> for this. It's a good catch though, even if by accident :-)\nYes, that's it. I should have known better, it's not the first time I\nmade this mistake,\nthanks.\n\nDidier\n\n\n",
"msg_date": "Thu, 14 Nov 2019 13:07:54 +0100",
"msg_from": "didier <did447@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] gcc warning 'expression which evaluates to zero treated\n as a null pointer'"
}
] |
[
{
"msg_contents": "Hi,\nSurely that \"s->nChildXids > 0\", protects s->childXids to be NULL!\nBut, when we exchange the test (s->nChildXids > 0) by (s->childXids != NULL), I believe we have the same protection, because, if \"s->childXids\" is not NULL, \"s->nChildXids\" is > 0, naturally.\n\nThat way we can improve the function and avoid calling and setting unnecessarily!\n\nBonus: silence compiler warning about potential null pointer dereferencing.\n\nBest regards,\nRanier Vilela\n\n--- \\dll\\postgresql-12.0\\a\\backend\\access\\transam\\xact.c\tMon Sep 30 17:06:55 2019\n+++ xact.c\tWed Nov 13 13:03:28 2019\n@@ -1580,20 +1580,20 @@\n \t */\n \ts->parent->childXids[s->parent->nChildXids] = XidFromFullTransactionId(s->fullTransactionId);\n \n-\tif (s->nChildXids > 0)\n+\tif (s->childXids != NULL) {\n \t\tmemcpy(&s->parent->childXids[s->parent->nChildXids + 1],\n \t\t\t s->childXids,\n \t\t\t s->nChildXids * sizeof(TransactionId));\n+\t /* Release child's array to avoid leakage */\n+ pfree(s->childXids);\n \n+\t /* We must reset these to avoid double-free if fail later in commit */\n+\t s->childXids = NULL;\n+\t s->nChildXids = 0;\n+\t s->maxChildXids = 0;\n+ }\n+\tAssert(s->nChildXids == 0 && s->maxChildXids == 0);\n \ts->parent->nChildXids = new_nChildXids;\n-\n-\t/* Release child's array to avoid leakage */\n-\tif (s->childXids != NULL)\n-\t\tpfree(s->childXids);\n-\t/* We must reset these to avoid double-free if fail later in commit */\n-\ts->childXids = NULL;\n-\ts->nChildXids = 0;\n-\ts->maxChildXids = 0;\n }\n \n /* ----------------------------------------------------------------",
"msg_date": "Wed, 13 Nov 2019 16:18:46 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Improve AtSubCommit_childXids"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-13 16:18:46 +0000, Ranier Vilela wrote:\n> Surely that \"s->nChildXids > 0\", protects s->childXids to be NULL!\n> But, when we exchange the test (s->nChildXids > 0) by (s->childXids != NULL), I believe we have the same protection, because, if \"s->childXids\" is not NULL, \"s->nChildXids\" is > 0, naturally.\n> \n> That way we can improve the function and avoid calling and setting unnecessarily!\n\nWhy is this an improvement? And what setting are we removing? You mean\nthat we reset nChildXids, even if it's already 0? Hard to see how that\nmatters.\n\n\n> Bonus: silent compiler warning potential null pointer derenferencing.\n\nWhich compiler issues a warning here?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 13 Nov 2019 09:10:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve AtSubCommit_childXids"
},
{
"msg_contents": "\"Why is this an improvement? And what setting are we removing? You mean\nthat we reset nChildXids, even if it's already 0? Hard to see how that\nmatters.\"\n\nThe original function always sets childXids, nChildXids and maxChildXids.\nSee lines 1594, 1595 and 1596: they are set even if already 0!\n\nThe test (nChildXids > 0) possibly works, but it may confuse, because we use the\nmemcpy function soon after and access a pointer that, below, is checked for NULL.\nHow hard is this to see?\n\nOriginal file:\n\tif (s->nChildXids > 0) \n\t\tmemcpy(&s->parent->childXids[s->parent->nChildXids + 1],\n\t\t\t s->childXids, // s->childXids null pointer potential dereferencing\n\t\t\t s->nChildXids * sizeof(TransactionId));\n\n\ts->parent->nChildXids = new_nChildXids;\n\n\t/* Release child's array to avoid leakage */\n\tif (s->childXids != NULL) // Check null pointer!\n\t\tpfree(s->childXids);\n\t/* We must reset these to avoid double-free if fail later in commit */\n\ts->childXids = NULL; // always set to NULL\n\ts->nChildXids = 0; // always set to 0\n\ts->maxChildXids = 0; // always set to 0\n\nbest regards,\nRanier Vilela\n________________________________________\nFrom: Andres Freund <andres@anarazel.de>\nSent: Wednesday, November 13, 2019 17:10\nTo: Ranier Vilela\nCc: PostgreSQL Hackers\nSubject: Re: [PATCH] Improve AtSubCommit_childXids\n\nHi,\n\nOn 2019-11-13 16:18:46 +0000, Ranier Vilela wrote:\n> Surely that \"s->nChildXids > 0\", protects s->childXids to be NULL!\n> But, when we exchange the test (s->nChildXids > 0) by (s->childXids != NULL), I believe we have the same protection, because, if \"s->childXids\" is not NULL, \"s->nChildXids\" is > 0, naturally.\n>\n> That way we can improve the function and avoid calling and setting unnecessarily!\n\nWhy is this an improvement? And what setting are we removing? You mean\nthat we reset nChildXids, even if it's already 0? 
Hard to see how that\nmatters.\n\n\n> Bonus: silent compiler warning potential null pointer derenferencing.\n\nWhich compiler issues a warning here?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 13 Nov 2019 17:40:27 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Improve AtSubCommit_childXids"
},
{
"msg_contents": "Hi,\n\nOn this list we quote inline, and trim quoted messages to the relevant\nparts...\n\nOn 2019-11-13 17:40:27 +0000, Ranier Vilela wrote:\n> \"Why is this an improvement? And what setting are we removing? You mean\n> that we reset nChildXids, even if it's already 0? Hard to see how that\n> matters.\"\n> \n> The orginal function, ever set ChildXidsm, nChildXidsa and maxChildXids.\n> See at lines 1594, 1595, 1596, even if it's already 0!\n\nSo? It's easier to reason about that way anyway, and it's just about\nfree, because the cacheline is already touched.\n\n\n> The test (nChildXids > 0), possibly works, but, may confuse when do use\n> memcpy function soon after, and access one pointer that below, is checked by NULL.\n> How hard to see this?\n\nBut they don't necessarily have to mean the same. One is about the array\nbeing allocated, and one is about the number of actual xids in\nthere. The memcpy cares about the number of xids in it. The free cares\nabout whether memory is allocated.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 13 Nov 2019 10:02:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve AtSubCommit_childXids"
}
] |
[
{
"msg_contents": "I realized only today that if role A is a member of role B,\nA can ALTER and DROP objects owned by B.\n\nI don't have a problem with that, but the documentation seems to\nsuggest otherwise. For example, for DROP TABLE:\n\n Only the table owner, the schema owner, and superuser can drop a table.\n\nShould I compose a doc patch, or is that too much of a corner case\nto mention? I wanted to ask before I do the repetitive work.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 13 Nov 2019 22:36:11 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Role membership and DROP"
},
{
"msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> I realized only today that if role A is a member of role B,\n> A can ALTER and DROP objects owned by B.\n> I don't have a problem with that, but the documentation seems to\n> suggest otherwise. For example, for DROP TABLE:\n\n> Only the table owner, the schema owner, and superuser can drop a table.\n\nGenerally, if you are a member of a role, that means you are the role for\nprivilege-test purposes. I'm not on board with adding \"(or a member of\nthat role)\" to every place it could conceivably be added; I think that\nwould be more annoying than helpful.\n\nIt might be worth clarifying this point in section 5.7,\n\nhttps://www.postgresql.org/docs/devel/ddl-priv.html\n\nbut let's not duplicate that in every ref/ page.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Nov 2019 17:17:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Role membership and DROP"
},
{
"msg_contents": "On Wed, 2019-11-13 at 17:17 -0500, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > I realized only today that if role A is a member of role B,\n> > A can ALTER and DROP objects owned by B.\n> > I don't have a problem with that, but the documentation seems to\n> > suggest otherwise. For example, for DROP TABLE:\n> > Only the table owner, the schema owner, and superuser can drop a table.\n> \n> Generally, if you are a member of a role, that means you are the role for\n> privilege-test purposes. I'm not on board with adding \"(or a member of\n> that role)\" to every place it could conceivably be added; I think that\n> would be more annoying than helpful.\n> \n> It might be worth clarifying this point in section 5.7,\n> \n> https://www.postgresql.org/docs/devel/ddl-priv.html\n> \n> but let's not duplicate that in every ref/ page.\n\nThat's much better.\n\nI have attached a proposed patch.\n\nYours,\nLaurenz Albe",
"msg_date": "Fri, 15 Nov 2019 10:32:11 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Role membership and DROP"
},
{
"msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Wed, 2019-11-13 at 17:17 -0500, Tom Lane wrote:\n>> It might be worth clarifying this point in section 5.7,\n>> https://www.postgresql.org/docs/devel/ddl-priv.html\n>> but let's not duplicate that in every ref/ page.\n\n> I have attached a proposed patch.\n\n <para>\n The right to modify or destroy an object is always the privilege of\n- the owner only.\n+ the owner. Like all privileges, that right can be inherited by members of\n+ the owning role.\n </para>\n\nHm. This is more or less contradicting the original meaning of the\nexisting sentence, so maybe we need to rewrite a bit more. What do\nyou think of\n\n The right to modify or destroy an object is inherent in being the\n object's owner. Like all privileges, that right can be inherited by\n members of the owning role; but there is no way to grant or revoke\n it more selectively.\n\nA larger problem (pre-existing, since there's a reference to being a\nmember of the owning role just a bit further down) is that I don't think\nwe've defined role membership at this point, so the reader is quite\nentitled to come away more confused than they were before. It might not\nbe advisable to try to cover role membership here, but we should at\nleast add a cross-reference to where it's explained.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Nov 2019 13:41:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Role membership and DROP"
},
{
"msg_contents": "On Fri, 2019-11-15 at 13:41 -0500, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Wed, 2019-11-13 at 17:17 -0500, Tom Lane wrote:\n> > > It might be worth clarifying this point in section 5.7,\n> > > https://www.postgresql.org/docs/devel/ddl-priv.html\n> > > but let's not duplicate that in every ref/ page.\n> > I have attached a proposed patch.\n> \n> <para>\n> The right to modify or destroy an object is always the privilege of\n> - the owner only.\n> + the owner. Like all privileges, that right can be inherited by members of\n> + the owning role.\n> </para>\n> \n> Hm. This is more or less contradicting the original meaning of the\n> existing sentence, so maybe we need to rewrite a bit more. What do\n> you think of\n> \n> The right to modify or destroy an object is inherent in being the\n> object's owner. Like all privileges, that right can be inherited by\n> members of the owning role; but there is no way to grant or revoke\n> it more selectively.\n> \n> A larger problem (pre-existing, since there's a reference to being a\n> member of the owning role just a bit further down) is that I don't think\n> we've defined role membership at this point, so the reader is quite\n> entitled to come away more confused than they were before. It might not\n> be advisable to try to cover role membership here, but we should at\n> least add a cross-reference to where it's explained.\n\nI think you are right about the potential confusion; I have added a\ncross-reference. That cross-reference is hopefully still in short-term\nmemory when the reader proceeds to the second reference to role membership\na few sentences later.\n\nI like your second sentence, but I think that \"the right ... is inherent\nin being the ... owner\" is unnecessarily complicated.\nRemoving the \"always\" and \"only\" makes the apparent contradiction between\nthe sentences less jarring to me.\n\nI won't fight about words though. 
Attached is my second attempt.\n\nYours,\nLaurenz Albe",
"msg_date": "Mon, 18 Nov 2019 15:40:51 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Role membership and DROP"
},
{
"msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Fri, 2019-11-15 at 13:41 -0500, Tom Lane wrote:\n>> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n>>> On Wed, 2019-11-13 at 17:17 -0500, Tom Lane wrote:\n>>>> It might be worth clarifying this point in section 5.7,\n>>>> https://www.postgresql.org/docs/devel/ddl-priv.html\n\n> I like your second sentence, but I think that \"the right ... is inherent\n> in being the ... owner\" is unnecessarily complicated.\n> Removing the \"always\" and \"only\" makes the apparent contradiction between\n> the sentences less jarring to me.\n\nI think it's important to emphasize that this is implicit in object\nownership.\n\nLooking at the page again, I notice that there's a para a little further\ndown that overlaps quite a bit with what we're discussing here, but it's\nabout implicit grant options rather than the right to DROP. In the\nattached, I reworded that too, and moved it because it's not fully\nintelligible until we've explained grant options. Thoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 19 Nov 2019 13:21:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Role membership and DROP"
},
{
"msg_contents": "On Tue, 2019-11-19 at 13:21 -0500, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Fri, 2019-11-15 at 13:41 -0500, Tom Lane wrote:\n> > > Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > > > On Wed, 2019-11-13 at 17:17 -0500, Tom Lane wrote:\n> > > > > It might be worth clarifying this point in section 5.7,\n> > > > > https://www.postgresql.org/docs/devel/ddl-priv.html\n> > I like your second sentence, but I think that \"the right ... is inherent\n> > in being the ... owner\" is unnecessarily complicated.\n> > Removing the \"always\" and \"only\" makes the apparent contradiction between\n> > the sentences less jarring to me.\n> \n> I think it's important to emphasize that this is implicit in object\n> ownership.\n> \n> Looking at the page again, I notice that there's a para a little further\n> down that overlaps quite a bit with what we're discussing here, but it's\n> about implicit grant options rather than the right to DROP. In the\n> attached, I reworded that too, and moved it because it's not fully\n> intelligible until we've explained grant options. Thoughts?\n\nI am fine with that.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 20 Nov 2019 00:05:11 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Role membership and DROP"
},
{
"msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Tue, 2019-11-19 at 13:21 -0500, Tom Lane wrote:\n>> Looking at the page again, I notice that there's a para a little further\n>> down that overlaps quite a bit with what we're discussing here, but it's\n>> about implicit grant options rather than the right to DROP. In the\n>> attached, I reworded that too, and moved it because it's not fully\n>> intelligible until we've explained grant options. Thoughts?\n\n> I am fine with that.\n\nOK, pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Nov 2019 12:27:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Role membership and DROP"
}
] |
[
{
"msg_contents": "Hi Hackers,\r\n\r\nI'm sending an updated patch:\r\n1. add GUC enable_not_in_transform to guard the optimization/transformation, the guc is on by default.\r\n2. fix a bug: bail out NOT IN transformation early in convert_ANY_sublink_to_join so that parse->rtable doesn't get appended conditions are not met for the transformation.\r\n3. add a CTE not in test case.\r\n\r\nHere are the conditions for the transformation:\r\n/*\r\n *Allow transformation from NOT IN query to ANTI JOIN if ALL of the\r\n * following conditions are true: \r\n * 1. The GUC apg_not_in_transform_enabled is set to true.\r\n * 2. the NOT IN subquery is not hashable, in which case an expensive\r\n * subplan will be generated if we don't transform.\r\n * 3. the subquery does not define any CTE.\r\n */\r\n\r\nRegards,\r\n-----------\r\nZheng Li\r\nAWS, Amazon Aurora PostgreSQL",
"msg_date": "Wed, 13 Nov 2019 22:25:56 +0000",
"msg_from": "\"Li, Zheng\" <zhelli@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: NOT IN subquery optimization"
}
] |
[
{
"msg_contents": "I have seen the error\n\n could not stat promote trigger file \"...\": Permission denied\n\nbecause of a misconfiguration (for example, setting promote_trigger_file \nto point into a directory to which you don't have appropriate read or \nexecute access).\n\nThe problem is that because this happens in the startup process, the \nERROR is turned into a FATAL and the whole instance shuts down. That \nseems like a harsh penalty. Would it be better to turn this ERROR into \na WARNING?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 14 Nov 2019 14:58:32 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "could not stat promote trigger file leads to shutdown"
},
{
"msg_contents": "On Thu, Nov 14, 2019 at 10:58 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> I have seen the error\n>\n> could not stat promote trigger file \"...\": Permission denied\n>\n> because of a misconfiguration (for example, setting promote_trigger_file\n> to point into a directory to which you don't have appropriate read or\n> execute access).\n>\n> The problem is that because this happens in the startup process, the\n> ERROR is turned into a FATAL and the whole instance shuts down. That\n> seems like a harsh penalty. Would it be better to turn this ERROR into\n> a WARNING?\n\n+1\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 14 Nov 2019 23:22:01 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: could not stat promote trigger file leads to shutdown"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I have seen the error\n> could not stat promote trigger file \"...\": Permission denied\n> because of a misconfiguration (for example, setting promote_trigger_file \n> to point into a directory to which you don't have appropriate read or \n> execute access).\n\n> The problem is that because this happens in the startup process, the \n> ERROR is turned into a FATAL and the whole instance shuts down. That \n> seems like a harsh penalty. Would it be better to turn this ERROR into \n> a WARNING?\n\nIt is harsh, but I suspect if we just logged the complaint, we'd get\nbug reports about \"Postgres isn't reacting to my trigger file\",\nbecause people don't read the postmaster log unless forced to.\nIs there some more-visible way to report the problem, short of\nshutting down?\n\n(BTW, from this perspective, WARNING is especially bad because it\nmight not get logged at all. Better to use LOG.)\n\nOne thought is to try to detect the misconfiguration at postmaster\nstart --- better to fail at startup than sometime later. But I'm\nnot sure how reliably we could do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Nov 2019 10:38:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: could not stat promote trigger file leads to shutdown"
},
{
"msg_contents": "On Thu, Nov 14, 2019 at 10:38:30AM -0500, Tom Lane wrote:\n> It is harsh, but I suspect if we just logged the complaint, we'd get\n> bug reports about \"Postgres isn't reacting to my trigger file\",\n> because people don't read the postmaster log unless forced to.\n> Is there some more-visible way to report the problem, short of\n> shutting down?\n> \n> (BTW, from this perspective, WARNING is especially bad because it\n> might not get logged at all. Better to use LOG.)\n\nNeither am I comfortable with that.\n\n> One thought is to try to detect the misconfiguration at postmaster\n> start --- better to fail at startup than sometime later. But I'm\n> not sure how reliably we could do that.\n\nI think that we could do something close to the area where\nRemovePromoteSignalFiles() gets called. Why not simply checking the\npath defined by PromoteTriggerFile() at startup time then? I take it\nthat the only thing we should not complain about is stat() returning\nENOENT when looking at the promote file defined.\n--\nMichael",
"msg_date": "Fri, 15 Nov 2019 10:49:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: could not stat promote trigger file leads to shutdown"
},
{
"msg_contents": "On Fri, Nov 15, 2019 at 10:49 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Nov 14, 2019 at 10:38:30AM -0500, Tom Lane wrote:\n> > It is harsh, but I suspect if we just logged the complaint, we'd get\n> > bug reports about \"Postgres isn't reacting to my trigger file\",\n> > because people don't read the postmaster log unless forced to.\n> > Is there some more-visible way to report the problem, short of\n> > shutting down?\n> >\n> > (BTW, from this perspective, WARNING is especially bad because it\n> > might not get logged at all. Better to use LOG.)\n>\n> Neither am I comfortable with that.\n\nI always wonder why WARNING was defined that way.\nI think that users usually pay attention to the word \"WARNING\"\nrather than \"LOG\".\n\n> > One thought is to try to detect the misconfiguration at postmaster\n> > start --- better to fail at startup than sometime later. But I'm\n> > not sure how reliably we could do that.\n>\n> I think that we could do something close to the area where\n> RemovePromoteSignalFiles() gets called. Why not simply checking the\n> path defined by PromoteTriggerFile() at startup time then? I take it\n> that the only thing we should not complain about is stat() returning\n> ENOENT when looking at the promote file defined.\n\npromote_trigger_file is declared as PGC_SIGHUP,\nso such check would be necessary even while the standby is running.\nWhich can cause the server to fail after the startup.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 15 Nov 2019 11:14:00 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: could not stat promote trigger file leads to shutdown"
},
{
"msg_contents": "On 2019-11-15 03:14, Fujii Masao wrote:\n>>> One thought is to try to detect the misconfiguration at postmaster\n>>> start --- better to fail at startup than sometime later. But I'm\n>>> not sure how reliably we could do that.\n>> I think that we could do something close to the area where\n>> RemovePromoteSignalFiles() gets called. Why not simply checking the\n>> path defined by PromoteTriggerFile() at startup time then? I take it\n>> that the only thing we should not complain about is stat() returning\n>> ENOENT when looking at the promote file defined.\n> promote_trigger_file is declared as PGC_SIGHUP,\n> so such check would be necessary even while the standby is running.\n> Which can cause the server to fail after the startup.\n\nLet me illustrate a scenario in a more lively way:\n\nSay you want to set up promote_trigger_file to point to a file outside \nof the data directory, maybe because you want to integrate it with some \nexternal tooling. So you go into your configuration and set\n\n promote_trigger_file = '/srv/foobar/trigger'\n\nand reload the server. Everything is happy. The fact that the \ndirectory /srv/foobar/ does not exist at this point is completely ignored.\n\nNow you become root and run\n\n mkdir /srv/foobar\n\nand, depending circumstances such as root's umask or the permissions of \n/srv, your PostgreSQL server crashes immediately. That can't be good.\n\nAlso imagine the above steps being run by a configuration management system.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 15 Nov 2019 10:43:07 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: could not stat promote trigger file leads to shutdown"
},
{
"msg_contents": "Fujii Masao <masao.fujii@gmail.com> writes:\n> On Fri, Nov 15, 2019 at 10:49 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Thu, Nov 14, 2019 at 10:38:30AM -0500, Tom Lane wrote:\n>>> (BTW, from this perspective, WARNING is especially bad because it\n>>>> might not get logged at all. Better to use LOG.)\n\n>> Neither am I comfortable with that.\n\n> I always wonder why WARNING was defined that way.\n> I think that users usually pay attention to the word \"WARNING\"\n> rather than \"LOG\".\n\nThe issue really is \"what are we warning about\". The way things\nare set up basically assumes that WARNING is for complaining about\nuser-issued commands that might not be doing what the user wants.\nWhich is a legitimate use-case, but it doesn't necessarily mean\nsomething that's very important to put in the postmaster log.\n\nWhat we're seeing, in these repeated proposals to use WARNING in\nsome background process that doesn't run user commands, is that\nthere is also a use-case for \"more-significant-than-usual log\nmessages\". Maybe we need a new elevel category for that.\nSYSTEM_WARNING or LOG_WARNING, perhaps?\n\n>> I think that we could do something close to the area where\n>> RemovePromoteSignalFiles() gets called. Why not simply checking the\n>> path defined by PromoteTriggerFile() at startup time then? I take it\n>> that the only thing we should not complain about is stat() returning\n>> ENOENT when looking at the promote file defined.\n\n> promote_trigger_file is declared as PGC_SIGHUP,\n> so such check would be necessary even while the standby is running.\n> Which can cause the server to fail after the startup.\n\nNo, it'd be just the same as any other GUC: if we make such a test\nin the check hook, then we'd fail for a bad value at startup, but\nat SIGHUP we'd just reject the new setting. I think this might be\na workable answer to Peter's concern.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Nov 2019 11:31:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: could not stat promote trigger file leads to shutdown"
},
{
"msg_contents": "Hello\n\n> Maybe we need a new elevel category for that.\n> SYSTEM_WARNING or LOG_WARNING, perhaps?\n\nI think a separate levels for user warnings and system warnings (and errors) would be great for log analytics. Error due to user typo in query is not the same as cache lookup error (for example).\n\nregards, Sergei\n\n\n",
"msg_date": "Fri, 15 Nov 2019 19:49:20 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: could not stat promote trigger file leads to shutdown"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Say you want to set up promote_trigger_file to point to a file outside \n> of the data directory, maybe because you want to integrate it with some \n> external tooling. So you go into your configuration and set\n> promote_trigger_file = '/srv/foobar/trigger'\n> and reload the server. Everything is happy. The fact that the \n> directory /srv/foobar/ does not exist at this point is completely ignored.\n> Now you become root and run\n> mkdir /srv/foobar\n> and, depending circumstances such as root's umask or the permissions of \n> /srv, your PostgreSQL server crashes immediately. That can't be good.\n\nNo, it's not good, but the proposed fix of s/ERROR/LOG/ simply delays\nthe problem till later, ie when you try to promote the server nothing\nhappens. That's not good either. (To be clear: I'm not necessarily\nagainst that change, I just don't think it's a sufficient response.)\n\nIf we add a GUC-check-hook test, then the problem of misconfiguration\nis reduced to the previously unsolved problem that we have crappy\nfeedback for erroneous on-the-fly configuration changes. So it's\nstill unsolved, but at least we've got one unsolved problem not two.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Nov 2019 13:23:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: could not stat promote trigger file leads to shutdown"
},
{
"msg_contents": "On 2019-Nov-15, Tom Lane wrote:\n\n> If we add a GUC-check-hook test, then the problem of misconfiguration\n> is reduced to the previously unsolved problem that we have crappy\n> feedback for erroneous on-the-fly configuration changes. So it's\n> still unsolved, but at least we've got one unsolved problem not two.\n\nI am now against this kind of behavior, because nowadays it is common\nto have external orchestrating systems stopping and starting postmaster\non their own volition.\n\nIf this kind of misconfiguration causes postmaster refuse to start, it\ncan effectively become a service-shutdown scenario which requires the\nDBA to go temporarily mad.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 16 Nov 2019 22:59:01 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: could not stat promote trigger file leads to shutdown"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Nov-15, Tom Lane wrote:\n>> If we add a GUC-check-hook test, then the problem of misconfiguration\n>> is reduced to the previously unsolved problem that we have crappy\n>> feedback for erroneous on-the-fly configuration changes. So it's\n>> still unsolved, but at least we've got one unsolved problem not two.\n\n> I am now against this kind of behavior, because nowadays it is common\n> to have external orchestrating systems stopping and starting postmaster\n> on their own volition.\n\n> If this kind of misconfiguration causes postmaster refuse to start, it\n> can effectively become a service-shutdown scenario which requires the\n> DBA to go temporarily mad.\n\nBy that argument, postgresql.conf could contain complete garbage\nand the postmaster should still start. I'm not willing to say\nthat an \"external orchestrating system\" doesn't need to take\nresponsibility for putting valid info into the config file.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 17 Nov 2019 15:05:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: could not stat promote trigger file leads to shutdown"
},
{
"msg_contents": "On 2019-11-15 19:23, Tom Lane wrote:\n>> AFAICT, a GUC check hook wouldn't actually be able to address the\n>> specific scenario I described. At the time the GUC is set, the\n>> containing the directory of the trigger file does not exist yet. This\n>> is currently not an error. The problem only happens if after the GUC is\n>> set the containing directory appears and is not readable.\n> True, if the hook just consists of trying fopen() and checking the\n> errno. Would it be feasible to insist that the containing directory\n> exist and be readable? We have enough infrastructure that that\n> should only take a few lines of code, so the question is whether\n> or not that's a nicer behavior than we have now.\n\nIs it possible to do this in a mostly bullet-proof way? Just because \nthe directory exists and looks pretty good otherwise, doesn't mean we \ncan read a file created in it later in a way that doesn't fall afoul of \nthe existing error checks. There could be something like SELinux \nlurking, for example.\n\nMaybe some initial checking would be useful, but I think we still need \nto downgrade the error check at use time a bit to not crash in the cases \nthat we miss.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 20 Nov 2019 09:12:05 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: could not stat promote trigger file leads to shutdown"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-11-15 19:23, Tom Lane wrote:\n>> If we add a GUC-check-hook test, then the problem of misconfiguration\n>> is reduced to the previously unsolved problem that we have crappy\n>> feedback for erroneous on-the-fly configuration changes. So it's\n>> still unsolved, but at least we've got one unsolved problem not two.\n\n> AFAICT, a GUC check hook wouldn't actually be able to address the \n> specific scenario I described. At the time the GUC is set, the \n> containing the directory of the trigger file does not exist yet. This \n> is currently not an error. The problem only happens if after the GUC is \n> set the containing directory appears and is not readable.\n\nTrue, if the hook just consists of trying fopen() and checking the\nerrno. Would it be feasible to insist that the containing directory\nexist and be readable? We have enough infrastructure that that\nshould only take a few lines of code, so the question is whether\nor not that's a nicer behavior than we have now.\n\nIf we had this to do over, I'd argue that we misdesigned trigger\nfiles: they should be required to exist always, and triggering\ndepends on file contents (eg empty vs. not) not existence. That\nwould make it far easier to check for configuration mistakes\nat startup. But I suppose it's too late now.\n\n> We don't have any GUC check hooks on other file system location string \n> settings that ensure accessibility or presence of the file.\n\nRight, but I'm suggesting we should add that where appropriate.\nBasically the complaint here is that the system lacks checks\nthat the given configuration settings are workable, and we ought\nto add such.\n\n> Although I \n> do notice that we use check_canonical_path() in some places and not \n> others for mysterious and undocumented reasons.\n\nProbably only that some patch authors didn't know about it :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Nov 2019 10:21:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: could not stat promote trigger file leads to shutdown"
},
{
"msg_contents": "On 2019-11-20 16:21, Tom Lane wrote:\n>> AFAICT, a GUC check hook wouldn't actually be able to address the\n>> specific scenario I described. At the time the GUC is set, the\n>> containing the directory of the trigger file does not exist yet. This\n>> is currently not an error. The problem only happens if after the GUC is\n>> set the containing directory appears and is not readable.\n> True, if the hook just consists of trying fopen() and checking the\n> errno. Would it be feasible to insist that the containing directory\n> exist and be readable? We have enough infrastructure that that\n> should only take a few lines of code, so the question is whether\n> or not that's a nicer behavior than we have now.\n\nIs it possible to do this in a mostly bullet-proof way? Just because \nthe directory exists and looks pretty good otherwise, doesn't mean we \ncan read a file created in it later in a way that doesn't fall afoul of \nthe existing error checks. There could be something like SELinux \nlurking, for example.\n\nMaybe some initial checking would be useful, but I think we still need \nto downgrade the error check at use time a bit to not crash in the cases \nthat we miss.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Dec 2019 11:52:33 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: could not stat promote trigger file leads to shutdown"
},
{
"msg_contents": "At Wed, 4 Dec 2019 11:52:33 +0100, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in \n> On 2019-11-20 16:21, Tom Lane wrote:\n> >> AFAICT, a GUC check hook wouldn't actually be able to address the\n> >> specific scenario I described. At the time the GUC is set, the\n> >> containing the directory of the trigger file does not exist yet. This\n> >> is currently not an error. The problem only happens if after the GUC\n> >> is\n> >> set the containing directory appears and is not readable.\n> > True, if the hook just consists of trying fopen() and checking the\n> > errno. Would it be feasible to insist that the containing directory\n> > exist and be readable? We have enough infrastructure that that\n> > should only take a few lines of code, so the question is whether\n> > or not that's a nicer behavior than we have now.\n> \n> Is it possible to do this in a mostly bullet-proof way? Just because\n> the directory exists and looks pretty good otherwise, doesn't mean we\n> can read a file created in it later in a way that doesn't fall afoul\n> of the existing error checks. There could be something like SELinux\n> lurking, for example.\n> \n> Maybe some initial checking would be useful, but I think we still need\n> to downgrade the error check at use time a bit to not crash in the\n> cases that we miss.\n\n+1. Any GUC variables that points to outer, or externally-modifiable\nresources, including directories, files, commands can face that kind\nof problem. For example a bogus value for archive_command doesn't\npreveint server from starting. I understand that the reason is that we\ndon't have a reliable means to check-up the command before we actually\nexecute it, but server can (or should) continue running even if it\nfails. I think this issue falls into that category.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 05 Dec 2019 10:28:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: could not stat promote trigger file leads to shutdown"
},
{
"msg_contents": "On Wed, Dec 04, 2019 at 11:52:33AM +0100, Peter Eisentraut wrote:\n> Is it possible to do this in a mostly bullet-proof way? Just because the\n> directory exists and looks pretty good otherwise, doesn't mean we can read a\n> file created in it later in a way that doesn't fall afoul of the existing\n> error checks. There could be something like SELinux lurking, for example.\n> \n> Maybe some initial checking would be useful, but I think we still need to\n> downgrade the error check at use time a bit to not crash in the cases that\n> we miss.\n\nI got that thread in my backlog for some time, and was not able to\ncome back to it. Reading it again the thread, it seems to me that\nusing a LOG would make the promote file handling more consistent with\nwhat we do for the SSL context reload. Still, one downside I can see\nhere is that this causes the backend to create a new LOG entry each\ntime the promote file is checked, aka each time we check if WAL is\navailable. Couldn't that bloat be a problem? During the SSL reload,\nwe only generate LOG entries for each backend on SIGHUP.\n--\nMichael",
"msg_date": "Tue, 19 May 2020 15:13:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: could not stat promote trigger file leads to shutdown"
}
] |
[
{
"msg_contents": "Hello friends,\n \nI am not sure if I am right here bcz its my forst post so..\n \nI am using the Provider=MSDASQL.1 through psqlODBC as Data Source(created user DSN).\nPostgresversion is 12.\nProgramming language is C++.\nOS = Windows 10.\n \nSo my problem is when I call a sql function which returns a refcursor for dynamic sql purposes it doesnt fill the recordset.\n \nSo my question is does the ODBC driver supports refcursor??\n \nIs there any example/codesnippet or any sugestions? I did search alot abt this topic without success.\n \n \nThx in advance..\n",
"msg_date": "Fri, 15 Nov 2019 11:06:32 +0100",
"msg_from": "\"Kubilay Kaan\" <heraklea@gmx.de>",
"msg_from_op": true,
"msg_subject": "Getting Recordset through returning refcursor"
},
{
"msg_contents": "Hello friends,\n \nI am not sure if I am right here bcz its my forst post so..\n \nI am using the Provider=MSDASQL.1 through psqlODBC as Data Source(created user DSN).\nPostgresversion is 12.\nProgramming language is C++.\nOS = Windows 10.\n \nSo my problem is when I call a sql function which returns a refcursor for dynamic sql purposes it doesnt fill the recordset.\n \nSo my question is does the ODBC driver supports refcursor??\n \nIs there any example/codesnippet or any sugestions? I did search alot abt this topic without success.\n \n \nThx in advance..\n\n\n",
"msg_date": "Fri, 15 Nov 2019 11:10:59 +0100",
"msg_from": "\"Kubilay Kaan\" <heraklea@gmx.de>",
"msg_from_op": true,
"msg_subject": "Getting Recordset through returning refcursor - second try(first\n has wrong format sorry)"
},
{
"msg_contents": "On Fri, Nov 15, 2019 at 3:36 PM Kubilay Kaan <heraklea@gmx.de> wrote:\n>\n> So my problem is when I call a sql function which returns a refcursor for dynamic sql purposes it doesnt fill the recordset.\n>\n> So my question is does the ODBC driver supports refcursor??\n>\n\nI think the chances of getting an answer on ODBC related queries will\nbe more if you post on pgsql-interfaces.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 15 Nov 2019 15:55:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting Recordset through returning refcursor"
},
{
"msg_contents": "On Fri, Nov 15, 2019 at 03:55:28PM +0530, Amit Kapila wrote:\n> I think the chances of getting an answer on ODBC related queries will\n> be more if you post on pgsql-interfaces.\n\nThere is also a mailing list dedicated to Postgres ODBC:\nhttps://www.postgresql.org/list/pgsql-odbc/\n--\nMichael",
"msg_date": "Mon, 18 Nov 2019 21:52:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Getting Recordset through returning refcursor"
}
] |
[
{
"msg_contents": "Hi,\nLast time, I promise.\n\nIt's probably not happening, but it can happen, I think.\n\nBest regards.\nRanier Vilela\n\n--- \\dll\\postgresql-12.0\\a\\backend\\access\\brin\\brin_validate.c\tMon Sep 30 17:06:55 2019\n+++ brin_validate.c\tFri Nov 15 08:14:58 2019\n@@ -57,8 +57,10 @@\n \n \t/* Fetch opclass information */\n \tclasstup = SearchSysCache1(CLAOID, ObjectIdGetDatum(opclassoid));\n-\tif (!HeapTupleIsValid(classtup))\n+\tif (!HeapTupleIsValid(classtup)) {\n \t\telog(ERROR, \"cache lookup failed for operator class %u\", opclassoid);\n+ return false;\n+ }\n \tclassform = (Form_pg_opclass) GETSTRUCT(classtup);\n \n \topfamilyoid = classform->opcfamily;\n@@ -67,8 +69,11 @@\n \n \t/* Fetch opfamily information */\n \tfamilytup = SearchSysCache1(OPFAMILYOID, ObjectIdGetDatum(opfamilyoid));\n-\tif (!HeapTupleIsValid(familytup))\n+\tif (!HeapTupleIsValid(familytup)) {\n \t\telog(ERROR, \"cache lookup failed for operator family %u\", opfamilyoid);\n+\t ReleaseSysCache(classtup);\n+ return false;\n+ }\n \tfamilyform = (Form_pg_opfamily) GETSTRUCT(familytup);\n \n \topfamilyname = NameStr(familyform->opfname);",
"msg_date": "Fri, 15 Nov 2019 11:25:07 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH][BUG FIX] Unsafe access pointers."
},
{
"msg_contents": "> On 15 Nov 2019, at 12:25, Ranier Vilela <ranier_gyn@hotmail.com> wrote:\n\n> It's probably not happening, but it can happen, I think.\n\nI don't think it can, given how elog() works.\n\n> -\tif (!HeapTupleIsValid(classtup))\n> +\tif (!HeapTupleIsValid(classtup)) {\n> \t\telog(ERROR, \"cache lookup failed for operator class %u\", opclassoid);\n> + return false;\n\nelog or ereport with a severity of ERROR or higher will never return.\n\n> -\tif (!HeapTupleIsValid(familytup))\n> +\tif (!HeapTupleIsValid(familytup)) {\n> \t\telog(ERROR, \"cache lookup failed for operator family %u\", opfamilyoid);\n> +\t ReleaseSysCache(classtup);\n> + return false;\n> + }\n\nNot only will elog(ERROR ..) not return to run this, the errorhandling\nmachinery will automatically release resources and clean up.\n\ncheers ./daniel\n\n\n",
"msg_date": "Fri, 15 Nov 2019 12:58:36 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH][BUG FIX] Unsafe access pointers."
},
{
"msg_contents": "Hi,\nThank you for the explanation.\n\nBest regards.\nRanier Vilela\n________________________________________\nDe: Daniel Gustafsson <daniel@yesql.se>\nEnviado: sexta-feira, 15 de novembro de 2019 11:58\nPara: Ranier Vilela\nCc: pgsql-hackers@lists.postgresql.org\nAssunto: Re: [PATCH][BUG FIX] Unsafe access pointers.\n\n> On 15 Nov 2019, at 12:25, Ranier Vilela <ranier_gyn@hotmail.com> wrote:\n\n> It's probably not happening, but it can happen, I think.\n\nI don't think it can, given how elog() works.\n\n> - if (!HeapTupleIsValid(classtup))\n> + if (!HeapTupleIsValid(classtup)) {\n> elog(ERROR, \"cache lookup failed for operator class %u\", opclassoid);\n> + return false;\n\nelog or ereport with a severity of ERROR or higher will never return.\n\n> - if (!HeapTupleIsValid(familytup))\n> + if (!HeapTupleIsValid(familytup)) {\n> elog(ERROR, \"cache lookup failed for operator family %u\", opfamilyoid);\n> + ReleaseSysCache(classtup);\n> + return false;\n> + }\n\nNot only will elog(ERROR ..) not return to run this, the errorhandling\nmachinery will automatically release resources and clean up.\n\ncheers ./daniel\n\n\n",
"msg_date": "Fri, 15 Nov 2019 12:24:04 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH][BUG FIX] Unsafe access pointers."
},
{
"msg_contents": "On 2019-Nov-15, Ranier Vilela wrote:\n\n> Hi,\n> Last time, I promise.\n> \n> It's probably not happening, but it can happen, I think.\n\nThis patch assumes that anything can happen after elog(ERROR). That's\nwrong -- under ERROR or higher, elog() (as well as ereport) never\nreturns to the caller. If this was possible, there would be thousands\nof places that would need to be patched, all over the server code. But\nit's not.\n\n> \t/* Fetch opclass information */\n> \tclasstup = SearchSysCache1(CLAOID, ObjectIdGetDatum(opclassoid));\n> -\tif (!HeapTupleIsValid(classtup))\n> +\tif (!HeapTupleIsValid(classtup)) {\n> \t\telog(ERROR, \"cache lookup failed for operator class %u\", opclassoid);\n> + return false;\n> + }\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 18 Nov 2019 14:42:04 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH][BUG FIX] Unsafe access pointers."
}
] |
[
{
"msg_contents": "Hello hackers,\n\nI don't like this:\n\n -> Parallel Hash (... rows=416667 ...) (... rows=333333 ...)\n\nI think the logic in get_parallel_divisor() only makes sense for\nqueries like this (output with nearby patch to show leader\ncontribution):\n\npostgres=# set parallel_tuple_cost = 0;\nSET\npostgres=# set parallel_setup_cost = 0;\nSET\npostgres=# explain (analyze, verbose) select * from t;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=0.00..8591.67 rows=1000000 width=4) (actual\ntime=0.460..411.944 rows=1000000 loops=1)\n Output: i\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Seq Scan on public.t (cost=0.00..8591.67 rows=416667\nwidth=4) (actual time=0.066..136.777 rows=333333 loops=3)\n Output: i\n Leader: actual time=0.022..8.061 rows=30058 loops=1\n<--- poor contribution\n Worker 0: actual time=0.150..201.007 rows=502574 loops=1\n Worker 1: actual time=0.027..201.263 rows=467368 loops=1\n Planning Time: 0.071 ms\n Execution Time: 495.700 ms\n(11 rows)\n\nFor anything that consumes its entire input before emitting a tuple,\nfor example Partial Aggregate, Sort and Hash, it doesn't make sense to\nmangle the child paths' row estimates. For example:\n\n -> Parallel Hash (cost=8591.67..8591.67 rows=416667 width=4)\n(actual time=263.402..263.403 rows=333333 loops=3)\n Output: t2.i\n Buckets: 131072 Batches: 16 Memory Usage: 3520kB\n Leader: actual time=266.861..266.861 rows=341888 loops=1\n<--- equal contribution\n Worker 0: actual time=261.521..261.522 rows=322276 loops=1\n Worker 1: actual time=261.824..261.825 rows=335836 loops=1\n\nget_parallel_divisor() effectively assumes that every tuple emitted by\nthe path will cause a tuple to reach the Gather node, at which point\nthe gather distraction quotient needs to be estimated, so it does that\nup front. Whether that really happens depends on where the path\nfinishes up being used.\n\nI wonder if it would be better to get rid of that logic completely,\nand instead tweak the gather path's run cost to account for the\nprocessing asymmetry. In cases where the leader contributes very\nlittle, you could argue that it's not OK to ignore leader distraction\nin child paths (and therefore use avg(cardinality), not\nmax(cardinality) over all processes in, say, a nestloop), but on the\nother hand, for cases that use if for something important like\nestimating memory consumption (hash, sort, agg) that's exactly what\nyou want anyway because they're greedy.\n\nWith this approach, I suspect gather merge doesn't need to do anything\nat all, because there we expect the leader to be forced to contribute\nequally by the ordering requirement.\n\nHere's an experimental patch to show what I mean. Not tested much,\njust trying out the idea.",
"msg_date": "Sat, 16 Nov 2019 14:02:01 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Partial path row estimates"
}
] |
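The leader-contribution arithmetic being questioned in this thread can be sketched in a few lines. Below is an illustrative Python model of the planner's get_parallel_divisor() — the 0.3-per-worker leader fudge factor matches PostgreSQL's costsize.c, but the Python itself is not project code, just a sketch:

```python
def get_parallel_divisor(parallel_workers, leader_participates=True):
    """Simplified model of costsize.c's get_parallel_divisor().

    The planner divides a path's row count by this value.  The leader
    is assumed to contribute less and less as workers are added (the
    0.3 fudge factor), contributing nothing at 4 or more workers.
    """
    divisor = float(parallel_workers)
    if leader_participates:
        leader_contribution = 1.0 - 0.3 * parallel_workers
        if leader_contribution > 0:
            divisor += leader_contribution
    return divisor

# Reproduces the estimate shown in the plan above: 1,000,000 rows over
# 2 workers plus a partially-participating leader gives a divisor of
# 2.4, hence rows=416667; the measured rows=333333 is an even 3-way split.
print(round(1_000_000 / get_parallel_divisor(2)))
```

With 2 workers the divisor is 2.4, which is exactly where the planner's rows=416667 comes from (1,000,000 / 2.4), while the executor's actual rows=333333 reflects the even three-way split the thread is contrasting it with.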
[
{
"msg_contents": "I am researching whether an postgres installation can be done on Unix system services(USS) in z/OS. USS is a POSIX compliant OS on z/OS and i wonder if you have any experience with installing it there that you can share with me. I would be highly appreciative of your comments and thoughts.\nThanks\nParveen",
"msg_date": "Sat, 16 Nov 2019 11:31:35 +0000 (UTC)",
"msg_from": "parveen mehta <sim_mehta@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Postgres on IBM z/OS 2.2.0 and 2.3.0"
},
{
"msg_contents": "parveen mehta <sim_mehta@yahoo.com> writes:\n> I am researching whether an postgres installation can be done on Unix system services(USS) in z/OS. USS is a POSIX compliant OS on z/OS and i wonder if you have any experience with installing it there that you can share with me. I would be highly appreciative of your comments and thoughts.\n\nThe last discussion around this [1] concluded that you'd probably crash\nand burn due to z/OS wanting to use EBCDIC encoding. There's a lot of\nASCII-related assumptions in our code, and nobody is interested in\ntrying to get rid of them.\n\nIt's possible that you could run the server in ASCII and treat EBCDIC\nas a client-only encoding, which would limit the parts of the system\nthat would have to be cleansed of ASCII-isms to libpq and src/bin/.\nBut that's already a nontrivial headache I suspect.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/BLU437-SMTP4B3FF36035D8A3C3816D49C160%40phx.gbl\n\n\n",
"msg_date": "Sat, 16 Nov 2019 10:33:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres on IBM z/OS 2.2.0 and 2.3.0"
},
{
"msg_contents": "Tom,\nThanks for providing valuable inputs into my concerns. In the last part you mentioned and i am quoting it here \"would limit the parts of the system\nthat would have to be cleansed of ASCII-isms to libpq and src/bin/.\nBut that's already a nontrivial headache I suspect.\" I am not clear on the ASCII-isms to libpq and src/bin/. Can you share some knowledge on those items. Are those standard directory locations ? Sorry if i am being ignorant.\nRegards\n\n On Saturday, November 16, 2019, 10:33:28 AM EST, Tom Lane <tgl@sss.pgh.pa.us> wrote: \n \n parveen mehta <sim_mehta@yahoo.com> writes:\n> I am researching whether an postgres installation can be done on Unix system services(USS) in z/OS. USS is a POSIX compliant OS on z/OS and i wonder if you have any experience with installing it there that you can share with me. I would be highly appreciative of your comments and thoughts.\n\nThe last discussion around this [1] concluded that you'd probably crash\nand burn due to z/OS wanting to use EBCDIC encoding. There's a lot of\nASCII-related assumptions in our code, and nobody is interested in\ntrying to get rid of them.\n\nIt's possible that you could run the server in ASCII and treat EBCDIC\nas a client-only encoding, which would limit the parts of the system\nthat would have to be cleansed of ASCII-isms to libpq and src/bin/.\nBut that's already a nontrivial headache I suspect.\n\n regards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/BLU437-SMTP4B3FF36035D8A3C3816D49C160%40phx.gbl",
"msg_date": "Mon, 18 Nov 2019 18:24:11 +0000 (UTC)",
"msg_from": "parveen mehta <sim_mehta@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Postgres on IBM z/OS 2.2.0 and 2.3.0"
},
{
"msg_contents": "parveen mehta <sim_mehta@yahoo.com> writes:\n> Thanks for providing valuable inputs into my concerns. In the last part you mentioned and i am quoting it here \"would limit the parts of the system\n> that would have to be cleansed of ASCII-isms to libpq and src/bin/.\n> But that's already a nontrivial headache I suspect.\" I am not clear on the ASCII-isms to libpq and src/bin/. Can you share some knowledge on those items. Are those standard directory locations ? Sorry if i am being ignorant.\n\nI'm just speaking of those subtrees of PG's source code:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=tree\n\nSome of the concrete problems are likely to be the same ones mentioned\non wikipedia's EBCDIC page:\n\nhttps://en.wikipedia.org/wiki/EBCDIC\n\nnotably that in ASCII the upper-case English letters have consecutive\ncodes, as do the lower-case ones, but that's not the case in EBCDIC.\nThis'd break pg_toupper and pg_tolower, and perhaps other places.\n(Or perhaps that's the only place, but there's a lot of code to be\naudited to find out.)\n\nThe lack of well-defined equivalents for a lot of common ASCII\npunctuation is likely to be a problem as well. psql's internal\nparsing of SQL commands, for example, is entirely unprepared for\nthe idea that characters it needs to recognize might be encoding\ndependent. But unless you want to restrict yourself to just one\nEBCDIC code page, something would have to be done about that.\n\nAnother issue, which is something that might be unique to\nPostgres, is that we expect that bytes with the high bit set\nare elements of multibyte characters, while bytes without are\nplain ASCII. I think you could set things up so that mblen()\nreturns 1 despite the high bit being set, but at the very least\nthis would result in an efficiency hit due to taking the slow\n\"multibyte\" code paths even for plain English letters. 
Perhaps\nit wouldn't matter too much on the client side; hard to tell\nwithout investing a lot of work to try it.\n\nThe server-side code has a lot more of these assumptions buried\nin it than the client side, which is why I doubt it's feasible\nto get the server to run in EBCDIC. But maybe you could create\nencoding translation functions and treat EBCDIC code pages as\nclient-only encodings, which is a concept we already have.\n\nOn the whole though, it seems like a lot of work for a dubious\ngoal, which is probably why nobody's tackled it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Nov 2019 18:58:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres on IBM z/OS 2.2.0 and 2.3.0"
}
] |
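The non-contiguous EBCDIC letter codes Tom describes are easy to see with Python's built-in cp037 codec (cp037 is one common EBCDIC code page, chosen here purely for illustration):

```python
# EBCDIC (code page 037) places A-I, J-R and S-Z in three separate
# byte ranges, so the ASCII-era assumption that 'A'..'Z' occupy
# consecutive codes (as a naive pg_toupper/pg_tolower relies on)
# does not hold.
import string

ascii_codes = [ord(c) for c in string.ascii_uppercase]
ebcdic_codes = list(string.ascii_uppercase.encode("cp037"))

# ASCII: one contiguous run of 26 codes.
assert ascii_codes == list(range(ord("A"), ord("Z") + 1))

# EBCDIC: gaps after 'I' (0xC9 -> 0xD1) and after 'R' (0xD9 -> 0xE2),
# and the bytes in the gaps are not letters at all.
gaps = [(a, b) for a, b in zip(ebcdic_codes, ebcdic_codes[1:]) if b != a + 1]
print([(hex(a), hex(b)) for a, b in gaps])  # -> [('0xc9', '0xd1'), ('0xd9', '0xe2')]
```

This also shows why a range check like `'A' <= c <= 'Z'` misclassifies non-letter bytes under EBCDIC, one concrete instance of the ASCII-isms an audit would have to hunt down.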
[
{
"msg_contents": "Hi,\n\nI noticed that some of the source files does not include the copyright\ninformation. Most of the files have included it, but few files have\nnot included it. I felt it should be included. The attached patch\ncontains the fix for including the copyright information in the source\nfiles. Let me know your thoughts on the same.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 16 Nov 2019 23:06:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Copyright information in source files"
},
{
"msg_contents": "On Sun, Nov 17, 2019 at 6:36 AM vignesh C <vignesh21@gmail.com> wrote:\n> I noticed that some of the source files does not include the copyright\n> information. Most of the files have included it, but few files have\n> not included it. I felt it should be included. The attached patch\n> contains the fix for including the copyright information in the source\n> files. Let me know your thoughts on the same.\n\nI'd like to get rid of those IDENTIFICATION lines completely (they are\nleft over from the time when the project used CVS, and that section\nhad a $Header$ \"ident\" tag, but in the git era, those ident tags are\nno longer in fashion).\n\nThere are other inconsistencies in the copyright messages, like\nwhether we say \"Portions\" or not for PGDU, and whether we use 1996- or\nthe year the file was created, and whether the Berkeley copyright is\nthere or not (different people seem to have different ideas about\nwhether that's needed for a post-Berkeley file).\n\n\n",
"msg_date": "Fri, 22 Nov 2019 09:28:44 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Copyright information in source files"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I'd like to get rid of those IDENTIFICATION lines completely (they are\n> left over from the time when the project used CVS, and that section\n> had a $Header$ \"ident\" tag, but in the git era, those ident tags are\n> no longer in fashion).\n\nI'm not for that. Arguments about CVS vs git are irrelevant: the\nusefulness of those lines comes up when you've got a file that's\nnot in your source tree but somewhere else. It's particularly\nuseful for the Makefiles, which are otherwise often same-y and\nhard to identify.\n\n> There are other inconsistencies in the copyright messages, like\n> whether we say \"Portions\" or not for PGDU, and whether we use 1996- or\n> the year the file was created, and whether the Berkeley copyright is\n> there or not (different people seem to have different ideas about\n> whether that's needed for a post-Berkeley file).\n\nYeah, it'd be nice to have some greater consistency there. My own\nthought about it is that it's rare to have a file that's *completely*\nde novo code, and can be guaranteed to stay that way --- more usually\nthere is some amount of copying&pasting, and then you have to wonder\nhow much of that material could be traced back to Berkeley. So I\nprefer to err on the side of including their copyright. That line of\nargument basically leads to the conclusion that all the copyright tags\nshould be identical, which doesn't seem like an unreasonable rule.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Nov 2019 15:42:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Copyright information in source files"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 2:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I'd like to get rid of those IDENTIFICATION lines completely (they are\n> > left over from the time when the project used CVS, and that section\n> > had a $Header$ \"ident\" tag, but in the git era, those ident tags are\n> > no longer in fashion).\n>\n> I'm not for that. Arguments about CVS vs git are irrelevant: the\n> usefulness of those lines comes up when you've got a file that's\n> not in your source tree but somewhere else. It's particularly\n> useful for the Makefiles, which are otherwise often same-y and\n> hard to identify.\n>\n> > There are other inconsistencies in the copyright messages, like\n> > whether we say \"Portions\" or not for PGDU, and whether we use 1996- or\n> > the year the file was created, and whether the Berkeley copyright is\n> > there or not (different people seem to have different ideas about\n> > whether that's needed for a post-Berkeley file).\n>\n> Yeah, it'd be nice to have some greater consistency there. My own\n> thought about it is that it's rare to have a file that's *completely*\n> de novo code, and can be guaranteed to stay that way --- more usually\n> there is some amount of copying&pasting, and then you have to wonder\n> how much of that material could be traced back to Berkeley. So I\n> prefer to err on the side of including their copyright. 
That line of\n> argument basically leads to the conclusion that all the copyright tags\n> should be identical, which doesn't seem like an unreasonable rule.\n>\n\nI had seen that most files use the below format:\n/*-------------------------------------------------------------------------\n * relation.c\n * PostgreSQL logical replication\n *\n * Copyright (c) 2016-2019, PostgreSQL Global Development Group\n *\n * IDENTIFICATION\n * src/backend/replication/logical/relation.c\n *\n * NOTES\n * This file contains helper functions for logical replication relation\n * mapping cache.\n *\n *-------------------------------------------------------------------------\n */\n\nCan we use the above format as a standard format?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 23 Nov 2019 22:08:43 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Copyright information in source files"
},
{
"msg_contents": "On Sat, Nov 23, 2019 at 11:39 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> * Copyright (c) 2016-2019, PostgreSQL Global Development Group\n\nWhile we're talking about copyrights, I noticed while researching\nsomething else that the PHP project recently got rid of all the\ncopyright years from their files, which is one less thing to update\nand one less cause of noise in the change log for rarely-changed\nfiles. Is there actually a good reason to update the year?\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 24 Nov 2019 08:54:40 +0700",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Copyright information in source files"
},
{
"msg_contents": "On Sun, Nov 24, 2019 at 7:24 AM John Naylor <john.naylor@2ndquadrant.com> wrote:\n>\n> On Sat, Nov 23, 2019 at 11:39 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> > * Copyright (c) 2016-2019, PostgreSQL Global Development Group\n>\n> While we're talking about copyrights, I noticed while researching\n> something else that the PHP project recently got rid of all the\n> copyright years from their files, which is one less thing to update\n> and one less cause of noise in the change log for rarely-changed\n> files. Is there actually a good reason to update the year?\n>\n\nThat idea sounds good to me. That way there is no need to update the\nyear every year; alternatively, we could use current to indicate the\nlatest year, something like:\n* Copyright (c) 2016-current, PostgreSQL Global Development Group\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 24 Nov 2019 15:21:03 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Copyright information in source files"
},
{
"msg_contents": "On Thu, Nov 21, 2019 at 03:42:26PM -0500, Tom Lane wrote:\n> Yeah, it'd be nice to have some greater consistency there. My own\n> thought about it is that it's rare to have a file that's *completely*\n> de novo code, and can be guaranteed to stay that way --- more usually\n> there is some amount of copying&pasting, and then you have to wonder\n> how much of that material could be traced back to Berkeley. So I\n> prefer to err on the side of including their copyright. That line of\n> argument basically leads to the conclusion that all the copyright tags\n> should be identical, which doesn't seem like an unreasonable rule.\n\nAgreed. Doing that is also a no-brainer when adding new files into\nthe tree or for your own, separate, modules and that's FWIW the way of\ndoing things I tend to follow.\n--\nMichael",
"msg_date": "Sun, 24 Nov 2019 21:48:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Copyright information in source files"
},
{
"msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> On Sat, Nov 23, 2019 at 11:39 PM vignesh C <vignesh21@gmail.com> wrote:\n>> * Copyright (c) 2016-2019, PostgreSQL Global Development Group\n\n> While we're talking about copyrights, I noticed while researching\n> something else that the PHP project recently got rid of all the\n> copyright years from their files, which is one less thing to update\n> and one less cause of noise in the change log for rarely-changed\n> files. Is there actually a good reason to update the year?\n\nGood question.\n\nI was wondering about something even simpler: is there a reason to\nhave per-file copyright notices at all? Why isn't it good enough\nto have one copyright notice at the top of the tree?\n\nActual legal advice might be a good thing to have here ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Nov 2019 10:14:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Copyright information in source files"
},
{
"msg_contents": "Hello Tom,\n\n>> While we're talking about copyrights, I noticed while researching \n>> something else that the PHP project recently got rid of all the \n>> copyright years from their files, which is one less thing to update and \n>> one less cause of noise in the change log for rarely-changed files. Is \n>> there actually a good reason to update the year?\n>\n> Good question.\n>\n> I was wondering about something even simpler: is there a reason to have \n> per-file copyright notices at all? Why isn't it good enough to have one \n> copyright notice at the top of the tree?\n>\n> Actual legal advice might be a good thing to have here ...\n\nI have no legal skills, but I (well Google really:-) found this:\n\nhttps://softwarefreedom.org/resources/2012/ManagingCopyrightInformation.html\n\n\"Contrary to popular belief, copyright notices aren't required to secure \ncopyright.\"\n\nThere is a section about \"Comparing two systems: file-scope and \ncentralized notices\" which is probably what you are looking for.\n\nThe \"file-scope\" approach suggests that each dev should add its own notice \non each significant change. This is not what pg does and does not look too \npractical. It looks like the copyright notice is interpreted as a VCS.\n\nThen there is some stuff about distributed VCS, but pg really uses git as \na centralized VCS: when a patch is submitted, it is really applied by \nsomeone but not merged into the code from an external source. The good \nnews is that git comments include the contributor identification, to some \nextent.\n\nThen there is the centralized approach, which seems just to require \na per-file \"pointer\" to the license. Maybe pg should do that, which would \nstrip a large part of repeated copyright headers.\n\n-- \nFabien.",
"msg_date": "Sun, 24 Nov 2019 16:57:29 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Copyright information in source files"
},
{
"msg_contents": "On Sun, Nov 24, 2019 at 8:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@2ndquadrant.com> writes:\n> > On Sat, Nov 23, 2019 at 11:39 PM vignesh C <vignesh21@gmail.com> wrote:\n> >> * Copyright (c) 2016-2019, PostgreSQL Global Development Group\n>\n> > While we're talking about copyrights, I noticed while researching\n> > something else that the PHP project recently got rid of all the\n> > copyright years from their files, which is one less thing to update\n> > and one less cause of noise in the change log for rarely-changed\n> > files. Is there actually a good reason to update the year?\n>\n> Good question.\n>\n> I was wondering about something even simpler: is there a reason to\n> have per-file copyright notices at all? Why isn't it good enough\n> to have one copyright notice at the top of the tree?\n>\n> Actual legal advice might be a good thing to have here ...\n\n+1 for having single copyright notice at the top of the tree.\nWhat about file header, should we have anything at all?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Nov 2019 22:01:41 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Copyright information in source files"
}
] |
[
{
"msg_contents": "Folks,\n\nPlease find attached a patch for $Subject.\n\nMotivation:\n\nWhen people are doing keyset pagination, the simple cases redound to\nadding a WHERE that looks like\n\n (a, b, c) > (most_recent_a, most_recent_b, most_recent_c)\n\nwhich corresponds to an ORDER BY clause that looks like\n\n ORDER BY a, b, c\n\nThe fun starts when there are mixes of ASC and DESC in the ORDER BY\nclause. Reverse collations make this simpler by inverting the meaning\nof > (or similar), which makes the rowtypes still sortable in a new\ndictionary order, so the pagination would look like:\n\n\n (a, b, c) > (most_recent_a, most_recent_b COLLATE \"C_backwards\", most_recent_c)\n\nwith an ORDER BY like:\n\n ORDER BY a, b DESC, c\n\nWhat say?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sun, 17 Nov 2019 19:24:08 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Reverse collations (initially for making keyset pagination cover\n more cases)"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> Please find attached a patch for $Subject.\n\nI think there's a reason why this hasn't been proposed before.\n\nBack before we had full support of ASC/DESC index sort order, there was\ninterest in having reverse-sort operator classes, and there are bits and\npieces still in the code that tried to cater to that. But we never got\nit to the point where such things would really be pleasant to use.\nNow that we have ASC/DESC indexes, there's no value in a reverse-sort\noperator class, so the idea's pretty much been consigned to the dustbin.\n\nThis looks to me like it's trying to go down that same path at the\ncollation level, and it seems like just as bad of an idea here.\n\nThe fundamental problem with what you propose is that it'd require\na bunch of infrastructure (which you haven't even attempted) to teach\nthe planner about the relationships between forward- and reverse-sort\ncollation pairs, so that it could figure out that scanning some index\nbackwards would satisfy a request for the reverse-sort collation,\nor vice versa. Without such infrastructure, the feature is really\njust a gotcha, because queries won't get optimized the way users\nwould expect them to.\n\nAnd no, I don't think we should accept the feature and then go write\nthat infrastructure. If we couldn't make it work well at the opclass\nlevel, I don't think things will go better at the collation level.\n\nLastly, your proposed use-case has some attraction, but this proposal\nonly supports it if the column you need to be differently sorted is\ntextual. What if the sort columns are all numerics and timestamps?\nThinking about that, it seems like what we'd want is some sort of\nmore-general notion of row comparison, to express \"bounded below in\nan arbitrary ORDER BY ordering\". Not quite sure what it ought to\nlook like.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 17 Nov 2019 14:30:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reverse collations (initially for making keyset pagination cover\n more cases)"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> Lastly, your proposed use-case has some attraction, but this\n Tom> proposal only supports it if the column you need to be differently\n Tom> sorted is textual. What if the sort columns are all numerics and\n Tom> timestamps?\n\nThere are already trivial ways to reverse the orders of those, viz.\n(-number) and (-extract(epoch from timestampcol)). The lack of any\nequivalent method for text is what prompted this idea.\n\n Tom> Thinking about that, it seems like what we'd want is some sort of\n Tom> more-general notion of row comparison, to express \"bounded below\n Tom> in an arbitrary ORDER BY ordering\". Not quite sure what it ought\n Tom> to look like.\n\nWell, one obvious completely general method is to teach the planner\n(somehow) to spot conditions of the form\n\n (a > $1 OR (a = $1 AND b > $2) OR (a = $1 AND b = $2 AND c > $3) ...)\n \netc. and make them indexable if the sense of the > or < operator at\neach step matched an ASC or DESC column in the index.\n\nThis would be a substantial win, because this kind of condition is one\noften (incorrectly, for current PG) shown as an example of how to do\nkeyset pagination on multiple columns. But it would require some amount\nof new logic in both the planner and, afaik, in the btree AM; I haven't\nlooked at how much.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Sun, 17 Nov 2019 19:56:16 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Reverse collations (initially for making keyset pagination cover\n more cases)"
},
{
"msg_contents": "On Sun, Nov 17, 2019 at 02:30:35PM -0500, Tom Lane wrote:\n> David Fetter <david@fetter.org> writes:\n> > Please find attached a patch for $Subject.\n> \n> I think there's a reason why this hasn't been proposed before.\n> \n> Back before we had full support of ASC/DESC index sort order, there was\n> interest in having reverse-sort operator classes, and there are bits and\n> pieces still in the code that tried to cater to that. But we never got\n> it to the point where such things would really be pleasant to use.\n> Now that we have ASC/DESC indexes, there's no value in a reverse-sort\n> operator class, so the idea's pretty much been consigned to the dustbin.\n> \n> This looks to me like it's trying to go down that same path at the\n> collation level, and it seems like just as bad of an idea here.\n> \n> The fundamental problem with what you propose is that it'd require\n> a bunch of infrastructure (which you haven't even attempted) to teach\n> the planner about the relationships between forward- and reverse-sort\n> collation pairs, so that it could figure out that scanning some index\n> backwards would satisfy a request for the reverse-sort collation,\n> or vice versa. Without such infrastructure, the feature is really\n> just a gotcha, because queries won't get optimized the way users\n> would expect them to.\n> \n> And no, I don't think we should accept the feature and then go write\n> that infrastructure. If we couldn't make it work well at the opclass\n> level, I don't think things will go better at the collation level.\n> \n> Lastly, your proposed use-case has some attraction, but this proposal\n> only supports it if the column you need to be differently sorted is\n> textual. 
What if the sort columns are all numerics and timestamps?\n\nThose are pretty straightforward to generate: -column, and\n-extract('epoch' FROM column), respectively.\n\n> Thinking about that, it seems like what we'd want is some sort of\n> more-general notion of row comparison, to express \"bounded below in\n> an arbitrary ORDER BY ordering\". Not quite sure what it ought to\n> look like.\n\nI'm not, either, but the one I'm proposing seems like a lot less\nredundant code (and hence a lot less room for errors) than what people\ngenerally see proposed for this use case, to wit:\n\n(a, b, c) < ($1, $2 COLLATE \"C_backwards\", $3)\n...\nORDER BY a, b DESC, c\n\nas opposed to the \"standard\" way to do it\n\n(a > $1) OR\n(a = $1 AND b < $2) OR\n(a = $1 AND b = $2 AND c > $3)\n...\nORDER BY a, b DESC, c\n\nwhich may not even get optimized correctly.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 17 Nov 2019 22:54:26 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Reverse collations (initially for making keyset pagination cover\n more cases)"
},
{
"msg_contents": ">>>>> \"David\" == David Fetter <david@fetter.org> writes:\n\nFirst, in testing the patch I found there were indeed some missing\ncases: the sortsupport version of the comparator needs to be fixed too.\nI attach a draft addition to your patch, you should probably look at\nadding test cases that need this to work.\n\n David> (a, b, c) < ($1, $2 COLLATE \"C_backwards\", $3)\n David> ...\n David> ORDER BY a, b DESC, c\n\nThat would have to be:\n\n WHERE (a, b COLLATE \"C_backwards\", c) < ($1, $2, $3)\n ...\n ORDER BY a, b COLLATE \"C_backwards\", c\n\nAdding the below patch to yours, I can get this on the regression test\ndb (note that this is a -O0 asserts build, timings may be slow relative\nto a production build):\n\ncreate collation \"C_rev\" ( LOCALE = \"C\", REVERSE = true );\ncreate index on tenk1 (hundred, (stringu1::text collate \"C_rev\"), string4);\n\nexplain analyze\n select hundred, stringu1::text, string4\n from tenk1\n where (hundred, stringu1::text COLLATE \"C_rev\", string4)\n > (10, 'WKAAAA', 'VVVVxx')\n order by hundred, (stringu1::text collate \"C_rev\"), string4\n limit 5;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.29..1.28 rows=5 width=132) (actual time=0.029..0.038 rows=5 loops=1)\n -> Index Scan using tenk1_hundred_stringu1_string4_idx on tenk1 (cost=0.29..1768.49 rows=8900 width=132) (actual time=0.028..0.036 rows=5 loops=1)\n Index Cond: (ROW(hundred, ((stringu1)::text)::text, string4) > ROW(10, 'WKAAAA'::text, 'VVVVxx'::name))\n Planning Time: 0.225 ms\n Execution Time: 0.072 ms\n(5 rows)\n\nand I checked the results, and they look correct now.\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Sun, 17 Nov 2019 23:23:23 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Reverse collations (initially for making keyset pagination cover\n more cases)"
},
{
"msg_contents": "On Sun, Nov 17, 2019 at 11:23:23PM +0000, Andrew Gierth wrote:\n> >>>>> \"David\" == David Fetter <david@fetter.org> writes:\n> \n> First, in testing the patch I found there were indeed some missing\n> cases: the sortsupport version of the comparator needs to be fixed too.\n> I attach a draft addition to your patch, you should probably look at\n> adding test cases that need this to work.\n> \n> David> (a, b, c) < ($1, $2 COLLATE \"C_backwards\", $3)\n> David> ...\n> David> ORDER BY a, b DESC, c\n> \n> That would have to be:\n> \n> WHERE (a, b COLLATE \"C_backwards\", c) < ($1, $2, $3)\n> ...\n> ORDER BY a, b COLLATE \"C_backwards\", c\n> \n> Adding the below patch to yours, I can get this on the regression test\n> db (note that this is a -O0 asserts build, timings may be slow relative\n> to a production build):\n> \n> create collation \"C_rev\" ( LOCALE = \"C\", REVERSE = true );\n> create index on tenk1 (hundred, (stringu1::text collate \"C_rev\"), string4);\n> \n> explain analyze\n> select hundred, stringu1::text, string4\n> from tenk1\n> where (hundred, stringu1::text COLLATE \"C_rev\", string4)\n> > (10, 'WKAAAA', 'VVVVxx')\n> order by hundred, (stringu1::text collate \"C_rev\"), string4\n> limit 5;\n> QUERY PLAN \n> --------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.29..1.28 rows=5 width=132) (actual time=0.029..0.038 rows=5 loops=1)\n> -> Index Scan using tenk1_hundred_stringu1_string4_idx on tenk1 (cost=0.29..1768.49 rows=8900 width=132) (actual time=0.028..0.036 rows=5 loops=1)\n> Index Cond: (ROW(hundred, ((stringu1)::text)::text, string4) > ROW(10, 'WKAAAA'::text, 'VVVVxx'::name))\n> Planning Time: 0.225 ms\n> Execution Time: 0.072 ms\n> (5 rows)\n> \n> and I checked the results, and they look correct now.\n\nHere's that patch with your correction rolled in.\n\nThis will need more tests, and possibly more 
documentation.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Mon, 18 Nov 2019 03:29:39 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Reverse collations (initially for making keyset pagination cover\n more cases)"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> Lastly, your proposed use-case has some attraction, but this\n> Tom> proposal only supports it if the column you need to be differently\n> Tom> sorted is textual. What if the sort columns are all numerics and\n> Tom> timestamps?\n\n> There are already trivial ways to reverse the orders of those, viz.\n> (-number) and (-extract(epoch from timestampcol)). The lack of any\n> equivalent method for text is what prompted this idea.\n\nThose \"trivial ways\" have failure cases, eg with INT_MIN. I don't buy\nthat this argument justifies introducing a kluge into text comparison.\n\n> Tom> Thinking about that, it seems like what we'd want is some sort of\n> Tom> more-general notion of row comparison, to express \"bounded below\n> Tom> in an arbitrary ORDER BY ordering\". Not quite sure what it ought\n> Tom> to look like.\n\n> Well, one obvious completely general method is to teach the planner\n> (somehow) to spot conditions of the form\n> (a > $1 OR (a = $1 AND b > $2) OR (a = $1 AND b = $2 AND c > $3) ...)\n> etc. and make them indexable if the sense of the > or < operator at\n> each step matched an ASC or DESC column in the index.\n\nI think really the only attraction of that is that it could be argued\nto be standard --- but I rather doubt that it's common for DBMSes to\nrecognize such things. 
It'd certainly be a royal pain in the rear\nboth to implement and to use, at least for more than about two sort\ncolumns.\n\nBack at\nhttps://www.postgresql.org/message-id/10492.1531515255%40sss.pgh.pa.us\nI proposed that we might consider allowing row comparisons to specify\nan explicit list of operators:\n\n>> One idea for resolving that is to extend the OPERATOR syntax to allow\n>> multiple operator names for row comparisons, along the lines of\n>>\tROW(a,b) OPERATOR(pg_catalog.<, public.<) ROW(c,d)\n\nI wonder whether it'd be feasible to solve this problem by doing that\nand then allowing the operators to be of different comparison types,\nthat is \"ROW(a,b) OPERATOR(<, >) ROW(c,d)\". The semantics would be\nthat the first not-equal column pair determines the result according\nto the relevant operator. But I'm not quite sure what to do if the\nrows are in fact equal --- if some of the operators are like \"<\" and\nsome are like \"<=\", what should the result be? Maybe let the last\ncolumn's operator decide that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Nov 2019 12:49:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reverse collations (initially for making keyset pagination cover\n more cases)"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> Well, one obvious completely general method is to teach the planner\n >> (somehow) to spot conditions of the form\n >> (a > $1 OR (a = $1 AND b > $2) OR (a = $1 AND b = $2 AND c > $3) ...)\n >> etc. and make them indexable if the sense of the > or < operator at\n >> each step matched an ASC or DESC column in the index.\n\n Tom> I think really the only attraction of that is that it could be\n Tom> argued to be standard --- but I rather doubt that it's common for\n Tom> DBMSes to recognize such things.\n\nAt least MSSQL can recognize that a query with\n\n WHERE (a > @a OR (a = @a AND b > @b)) ORDER BY a,b\n\ncan be satisfied with an ordered index scan on an (a,b) index and no\nsort, which is good enough for pagination queries. Haven't confirmed\nyes/no for any other databases yet.\n\n(As an aside, if you try and do that in PG using UNION ALL in place of\nthe OR, to try and get a mergeappend of two index scans, it doesn't work\nwell because of how we discard redundant pathkeys; you end up with Sort\nnodes in the plan.)\n\n Tom> It'd certainly be a royal pain in the rear both to implement and\n Tom> to use, at least for more than about two sort columns.\n\nFor pagination one or two columns seems most likely, but in any event\nthe query can be generated mechanically if need be.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Mon, 18 Nov 2019 21:48:03 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Reverse collations (initially for making keyset pagination cover\n more cases)"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nThe planner will use big table as inner table in hash join if small table\nhave fewer unique values.\nBut this plan is much slower than using small table as inner table. This\nproblem occurs on master\nbranch without parallel scan.\n\nFor example\n\ncreate table t_small(a int);\ncreate table t_big(b int);\ninsert into t_small select i%100 from generate_series(0, 3000);\ninsert into t_big select i%100000 from generate_series(1, 100000000)i ;\nanalyze t_small;\nanalyze t_big;\nset max_parallel_workers_per_gather = 0;\n\nand the plan made by planner is\ndemo2=# explain select * from t_small, t_big where a = b;\n QUERY PLAN\n-------------------------------------------------------------------------------\n Hash Join (cost=3083104.72..3508073.65 rows=3045990 width=8)\n Hash Cond: (t_small.a = t_big.b)\n -> Seq Scan on t_small (cost=0.00..44.01 rows=3001 width=4)\n -> Hash (cost=1442478.32..1442478.32 rows=100000032 width=4)\n -> Seq Scan on t_big (cost=0.00..1442478.32 rows=100000032\nwidth=4)\n\nand it runs nearly 58s\ndemo2=# select * from t_small, t_big where a = b;\nTime: 58544.525 ms (00:58.545)\n\nBut if we do some hack and use the small table as inner. 
It runs 19s.\ndemo2=# explain select * from t_small, t_big where a = b;\n QUERY PLAN\n-------------------------------------------------------------------------\n Hash Join (cost=81.52..1723019.82 rows=3045990 width=8)\n Hash Cond: (t_big.b = t_small.a)\n -> Seq Scan on t_big (cost=0.00..1442478.32 rows=100000032 width=4)\n -> Hash (cost=44.01..44.01 rows=3001 width=4)\n -> Seq Scan on t_small (cost=0.00..44.01 rows=3001 width=4)\n\ndemo2=# select * from t_small, t_big where a = b;\nTime: 18751.588 ms (00:18.752)\n\n\nRCA:\n\nThe cost of the inner table mainly comes from creating a hash table.\nstartup_cost += (cpu_operator_cost * num_hashclauses + cpu_tuple_cost)\n* inner_path_rows;\n\nThe cost of the outer table mainly comes from searching the hash table.\nCalculating the hash value:\nrun_cost += cpu_operator_cost * num_hashclauses * outer_path_rows;\n\nTraversing the linked list in the bucket and comparing:\nrun_cost += hash_qual_cost.per_tuple * outer_path_rows *\nclamp_row_est(inner_path_rows * innerbucketsize) * 0.5;\n\nIn general, the cost of creating a hash table is higher than the cost of\nquerying a hash table.\nSo we tend to use small tables as inner tables. But if the average chain\nlength of the bucket\nis large, the situation is just the opposite.\n\nIn the test case above, the small table has 3000 tuples and 100 distinct\nvalues on column ‘a’.\nIf we use the small table as the inner table, the chain length of the bucket\nis 30, and we need to\nsearch the whole chain on probing the hash table. So the cost of probing is\nbigger than building the\nhash table, and we need to use the big table as inner.\n\nBut in fact this is not true. We initialized 620,000 buckets in the hashtable,\nbut only 100 buckets\nhave chains with length 30. The other buckets are empty; only hash values need\nto be compared, and\ntheir costs are very small. We have 100,000 distinct keys and 100,000,000\ntuples in the outer table.\nOnly (100/100000)* tuple_num tuples will search the whole chain. 
The other\ntuples\n(number = (98900/100000)*tuple_num*) in outer\ntable just compare with the hash value. So the actual cost is much smaller\nthan the planner\ncalculated. This is the reason why using a small table as inner is faster.",
"msg_date": "Mon, 18 Nov 2019 14:48:17 +0800",
"msg_from": "Jinbao Chen <jinchen@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Planner chose a much slower plan in hashjoin, using a large table as\n the inner table."
},
{
"msg_contents": "On Mon, Nov 18, 2019 at 7:48 PM Jinbao Chen <jinchen@pivotal.io> wrote:\n> In the test case above, the small table has 3000 tuples and 100 distinct values on column ‘a’.\n> If we use small table as inner table. The chan length of the bucket is 30. And we need to\n> search the whole chain on probing the hash table. So the cost of probing is bigger than build\n> hash table, and we need to use big table as inner.\n>\n> But in fact this is not true. We initialized 620,000 buckets in hashtable. But only 100 buckets\n> has chains with length 30. Other buckets are empty. Only hash values need to be compared.\n> Its costs are very small. We have 100,000 distinct key and 100,000,000 tuple on outer table.\n> Only (100/100000)* tuple_num tuples will search the whole chain. The other tuples\n> (number = (98900/100000)*tuple_num*) in outer\n> table just compare with the hash value. So the actual cost is much smaller than the planner\n> calculated. This is the reason why using a small table as inner is faster.\n\nSo basically we think that if t_big is on the outer side, we'll do\n100,000,000 probes and each one is going to scan a t_small bucket with\nchain length 30, so that looks really expensive. Actually only a\nsmall percentage of its probes find tuples with the right hash value,\nbut final_cost_hash_join() doesn't know that. 
So we hash t_big\ninstead, which we estimated pretty well and it finishes up with\nbuckets of length 1,000 (which is actually fine in this case, they're\nnot unwanted hash collisions, they're duplicate keys that we need to\nemit) and we probe them 3,000 times (which is also fine in this case),\nbut we had to do a bunch of memory allocation and/or batch file IO and\nthat turns out to be slower.\n\nI am not at all sure about this but I wonder if it would be better to\nuse something like:\n\n run_cost += outer_path_rows * some_small_probe_cost;\n run_cost += hash_qual_cost.per_tuple * approximate_tuple_count();\n\nIf we can estimate how many tuples will actually match accurately,\nthat should also be the number of times we have to run the quals,\nsince we don't usually expect hash collisions (bucket collisions, yes,\nbut hash collisions where the key doesn't turn out to be equal, no*).\n\n* ... but also yes as you approach various limits, so you could also\nfactor in bucket chain length that is due to being prevented from\nexpanding the number of buckets by arbitrary constraints, and perhaps\nalso birthday_problem(hash size, key space) to factor in unwanted hash\ncollisions that start to matter once you get to billions of keys and\nexpect collisions with short hashes.\n\n\n",
"msg_date": "Tue, 19 Nov 2019 20:46:21 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planner chose a much slower plan in hashjoin, using a large table\n as the inner table."
},
{
"msg_contents": "I think we have the same understanding of this issue.\n\nSometimes use smaller costs on scanning the chain in bucket like below would\nbe better.\nrun_cost += outer_path_rows * some_small_probe_cost;\nrun_cost += hash_qual_cost.per_tuple * approximate_tuple_count();\nIn some version of GreenPlum(a database based on postgres), we just disabled\nthe cost on scanning the bucket chain. In most cases, this can get a better\nquery\nplan. But I am worried that it will be worse in some cases.\n\nNow only the small table's distinct value is much smaller than the bucket\nnumber,\nand much smaller than the distinct value of the large table, the planner\nwill get the\nwrong plan.\n\nFor example, if inner table has 100 distinct values, and 3000 rows. Hash\ntable\nhas 1000 buckets. Outer table has 10000 distinct values.\nWe can assume that all the 100 distinct values of the inner table are\nincluded in the\n10000 distinct values of the outer table. So (100/10000)*outer_rows tuples\nwill\nprobe the buckets has chain. And (9900/10000)*outer_rows tuples will probe\nall the 1000 buckets randomly. So (9900/10000)*outer_rows*(900/1000) tuples\nwill\nprobe empty buckets. So the costs on scanning bucket chain is\n\nhash_qual_cost.per_tuple*innerbucketsize*outer_rows*\n(1 - ((outer_distinct - inner_distinct)/outer_distinct)*((buckets_num -\ninner_disttinct)/buckets_num))\n\nDo you think this assumption is reasonable?\n\n\nOn Tue, Nov 19, 2019 at 3:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Mon, Nov 18, 2019 at 7:48 PM Jinbao Chen <jinchen@pivotal.io> wrote:\n> > In the test case above, the small table has 3000 tuples and 100 distinct\n> values on column ‘a’.\n> > If we use small table as inner table. The chan length of the bucket is\n> 30. And we need to\n> > search the whole chain on probing the hash table. So the cost of probing\n> is bigger than build\n> > hash table, and we need to use big table as inner.\n> >\n> > But in fact this is not true. 
We initialized 620,000 buckets in\n> hashtable. But only 100 buckets\n> > has chains with length 30. Other buckets are empty. Only hash values\n> need to be compared.\n> > Its costs are very small. We have 100,000 distinct key and 100,000,000\n> tuple on outer table.\n> > Only (100/100000)* tuple_num tuples will search the whole chain. The\n> other tuples\n> > (number = (98900/100000)*tuple_num*) in outer\n> > table just compare with the hash value. So the actual cost is much\n> smaller than the planner\n> > calculated. This is the reason why using a small table as inner is\n> faster.\n>\n> So basically we think that if t_big is on the outer side, we'll do\n> 100,000,000 probes and each one is going to scan a t_small bucket with\n> chain length 30, so that looks really expensive. Actually only a\n> small percentage of its probes find tuples with the right hash value,\n> but final_cost_hash_join() doesn't know that. So we hash t_big\n> instead, which we estimated pretty well and it finishes up with\n> buckets of length 1,000 (which is actually fine in this case, they're\n> not unwanted hash collisions, they're duplicate keys that we need to\n> emit) and we probe them 3,000 times (which is also fine in this case),\n> but we had to do a bunch of memory allocation and/or batch file IO and\n> that turns out to be slower.\n>\n> I am not at all sure about this but I wonder if it would be better to\n> use something like:\n>\n> run_cost += outer_path_rows * some_small_probe_cost;\n> run_cost += hash_qual_cost.per_tuple * approximate_tuple_count();\n>\n> If we can estimate how many tuples will actually match accurately,\n> that should also be the number of times we have to run the quals,\n> since we don't usually expect hash collisions (bucket collisions, yes,\n> but hash collisions where the key doesn't turn out to be equal, no*).\n>\n> * ... 
but also yes as you approach various limits, so you could also\n> factor in bucket chain length that is due to being prevented from\n> expanding the number of buckets by arbitrary constraints, and perhaps\n> also birthday_problem(hash size, key space) to factor in unwanted hash\n> collisions that start to matter once you get to billions of keys and\n> expect collisions with short hashes.\n>",
"msg_date": "Tue, 19 Nov 2019 17:56:06 +0800",
"msg_from": "Jinbao Chen <jinchen@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Planner chose a much slower plan in hashjoin, using a large table\n as the inner table."
},
{
"msg_contents": "Hi hackers,\n\nI have made a patch to fix the problem.\n\nAdded the selection rate of the inner table non-empty bucket\n\nThe planner will use big table as inner table in hash join\nif small table have fewer unique values. But this plan is\nmuch slower than using small table as inner table.\n\nIn general, the cost of creating a hash table is higher\nthan the cost of querying a hash table. So we tend to use\nsmall tables as internal tables. But if the average chain\nlength of the bucket is large, the situation is just the\nopposite.\n\nIf virtualbuckets is much larger than innerndistinct, and\nouterndistinct is much larger than innerndistinct. Then most\ntuples of the outer table will match the empty bucket. So when\nwe calculate the cost of traversing the bucket, we need to\nignore the tuple matching empty bucket.\n\nSo we add the selection rate of the inner table non-empty\nbucket. The formula is:\n(1 - ((outerndistinct - innerndistinct)/outerndistinct)*\n((virtualbuckets - innerndistinct)/virtualbuckets))\n\n\nOn Tue, Nov 19, 2019 at 5:56 PM Jinbao Chen <jinchen@pivotal.io> wrote:\n\n> I think we have the same understanding of this issue.\n>\n> Sometimes use smaller costs on scanning the chain in bucket like below\n> would\n> be better.\n> run_cost += outer_path_rows * some_small_probe_cost;\n> run_cost += hash_qual_cost.per_tuple * approximate_tuple_count();\n> In some version of GreenPlum(a database based on postgres), we just\n> disabled\n> the cost on scanning the bucket chain. In most cases, this can get a\n> better query\n> plan. But I am worried that it will be worse in some cases.\n>\n> Now only the small table's distinct value is much smaller than the bucket\n> number,\n> and much smaller than the distinct value of the large table, the planner\n> will get the\n> wrong plan.\n>\n> For example, if inner table has 100 distinct values, and 3000 rows. Hash\n> table\n> has 1000 buckets. 
Outer table has 10000 distinct values.\n> We can assume that all the 100 distinct values of the inner table are\n> included in the\n> 10000 distinct values of the outer table. So (100/10000)*outer_rows tuples\n> will\n> probe the buckets has chain. And (9900/10000)*outer_rows tuples will probe\n> all the 1000 buckets randomly. So (9900/10000)*outer_rows*(900/1000)\n> tuples will\n> probe empty buckets. So the costs on scanning bucket chain is\n>\n> hash_qual_cost.per_tuple*innerbucketsize*outer_rows*\n> (1 - ((outer_distinct - inner_distinct)/outer_distinct)*((buckets_num -\n> inner_disttinct)/buckets_num))\n>\n> Do you think this assumption is reasonable?\n>\n>\n> On Tue, Nov 19, 2019 at 3:46 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n>\n>> On Mon, Nov 18, 2019 at 7:48 PM Jinbao Chen <jinchen@pivotal.io> wrote:\n>> > In the test case above, the small table has 3000 tuples and 100\n>> distinct values on column ‘a’.\n>> > If we use small table as inner table. The chan length of the bucket is\n>> 30. And we need to\n>> > search the whole chain on probing the hash table. So the cost of\n>> probing is bigger than build\n>> > hash table, and we need to use big table as inner.\n>> >\n>> > But in fact this is not true. We initialized 620,000 buckets in\n>> hashtable. But only 100 buckets\n>> > has chains with length 30. Other buckets are empty. Only hash values\n>> need to be compared.\n>> > Its costs are very small. We have 100,000 distinct key and 100,000,000\n>> tuple on outer table.\n>> > Only (100/100000)* tuple_num tuples will search the whole chain. The\n>> other tuples\n>> > (number = (98900/100000)*tuple_num*) in outer\n>> > table just compare with the hash value. So the actual cost is much\n>> smaller than the planner\n>> > calculated. 
This is the reason why using a small table as inner is\n>> faster.\n>>\n>> So basically we think that if t_big is on the outer side, we'll do\n>> 100,000,000 probes and each one is going to scan a t_small bucket with\n>> chain length 30, so that looks really expensive. Actually only a\n>> small percentage of its probes find tuples with the right hash value,\n>> but final_cost_hash_join() doesn't know that. So we hash t_big\n>> instead, which we estimated pretty well and it finishes up with\n>> buckets of length 1,000 (which is actually fine in this case, they're\n>> not unwanted hash collisions, they're duplicate keys that we need to\n>> emit) and we probe them 3,000 times (which is also fine in this case),\n>> but we had to do a bunch of memory allocation and/or batch file IO and\n>> that turns out to be slower.\n>>\n>> I am not at all sure about this but I wonder if it would be better to\n>> use something like:\n>>\n>> run_cost += outer_path_rows * some_small_probe_cost;\n>> run_cost += hash_qual_cost.per_tuple * approximate_tuple_count();\n>>\n>> If we can estimate how many tuples will actually match accurately,\n>> that should also be the number of times we have to run the quals,\n>> since we don't usually expect hash collisions (bucket collisions, yes,\n>> but hash collisions where the key doesn't turn out to be equal, no*).\n>>\n>> * ... but also yes as you approach various limits, so you could also\n>> factor in bucket chain length that is due to being prevented from\n>> expanding the number of buckets by arbitrary constraints, and perhaps\n>> also birthday_problem(hash size, key space) to factor in unwanted hash\n>> collisions that start to matter once you get to billions of keys and\n>> expect collisions with short hashes.\n>>\n>",
"msg_date": "Fri, 22 Nov 2019 18:50:43 +0800",
"msg_from": "Jinbao Chen <jinchen@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Planner chose a much slower plan in hashjoin, using a large table\n as the inner table."
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 6:51 PM Jinbao Chen <jinchen@pivotal.io> wrote:\n\n> Hi hackers,\n>\n> I have made a patch to fix the problem.\n>\n> Added the selection rate of the inner table non-empty bucket\n>\n> The planner will use big table as inner table in hash join\n> if small table have fewer unique values. But this plan is\n> much slower than using small table as inner table.\n>\n> In general, the cost of creating a hash table is higher\n> than the cost of querying a hash table. So we tend to use\n> small tables as internal tables. But if the average chain\n> length of the bucket is large, the situation is just the\n> opposite.\n>\n> If virtualbuckets is much larger than innerndistinct, and\n> outerndistinct is much larger than innerndistinct. Then most\n> tuples of the outer table will match the empty bucket. So when\n> we calculate the cost of traversing the bucket, we need to\n> ignore the tuple matching empty bucket.\n>\n> So we add the selection rate of the inner table non-empty\n> bucket. The formula is:\n> (1 - ((outerndistinct - innerndistinct)/outerndistinct)*\n> ((virtualbuckets - innerndistinct)/virtualbuckets))\n>\n>\n> On Tue, Nov 19, 2019 at 5:56 PM Jinbao Chen <jinchen@pivotal.io> wrote:\n>\n>> I think we have the same understanding of this issue.\n>>\n>> Sometimes use smaller costs on scanning the chain in bucket like below\n>> would\n>> be better.\n>> run_cost += outer_path_rows * some_small_probe_cost;\n>> run_cost += hash_qual_cost.per_tuple * approximate_tuple_count();\n>> In some version of GreenPlum(a database based on postgres), we just\n>> disabled\n>> the cost on scanning the bucket chain. In most cases, this can get a\n>> better query\n>> plan. 
But I am worried that it will be worse in some cases.\n>>\n>> Now only the small table's distinct value is much smaller than the bucket\n>> number,\n>> and much smaller than the distinct value of the large table, the planner\n>> will get the\n>> wrong plan.\n>>\n>> For example, if inner table has 100 distinct values, and 3000 rows. Hash\n>> table\n>> has 1000 buckets. Outer table has 10000 distinct values.\n>> We can assume that all the 100 distinct values of the inner table are\n>> included in the\n>> 10000 distinct values of the outer table. So (100/10000)*outer_rows\n>> tuples will\n>> probe the buckets has chain. And (9900/10000)*outer_rows tuples will probe\n>> all the 1000 buckets randomly. So (9900/10000)*outer_rows*(900/1000)\n>> tuples will\n>> probe empty buckets. So the costs on scanning bucket chain is\n>>\n>> hash_qual_cost.per_tuple*innerbucketsize*outer_rows*\n>> (1 - ((outer_distinct - inner_distinct)/outer_distinct)*((buckets_num -\n>> inner_disttinct)/buckets_num))\n>>\n>> Do you think this assumption is reasonable?\n>>\n>>\n>> On Tue, Nov 19, 2019 at 3:46 PM Thomas Munro <thomas.munro@gmail.com>\n>> wrote:\n>>\n>>> On Mon, Nov 18, 2019 at 7:48 PM Jinbao Chen <jinchen@pivotal.io> wrote:\n>>> > In the test case above, the small table has 3000 tuples and 100\n>>> distinct values on column ‘a’.\n>>> > If we use small table as inner table. The chan length of the bucket\n>>> is 30. And we need to\n>>> > search the whole chain on probing the hash table. So the cost of\n>>> probing is bigger than build\n>>> > hash table, and we need to use big table as inner.\n>>> >\n>>> > But in fact this is not true. We initialized 620,000 buckets in\n>>> hashtable. But only 100 buckets\n>>> > has chains with length 30. Other buckets are empty. Only hash values\n>>> need to be compared.\n>>> > Its costs are very small. We have 100,000 distinct key and 100,000,000\n>>> tuple on outer table.\n>>> > Only (100/100000)* tuple_num tuples will search the whole chain. 
The\n>>> other tuples\n>>> > (number = (98900/100000)*tuple_num*) in outer\n>>> > table just compare with the hash value. So the actual cost is much\n>>> smaller than the planner\n>>> > calculated. This is the reason why using a small table as inner is\n>>> faster.\n>>>\n>>> So basically we think that if t_big is on the outer side, we'll do\n>>> 100,000,000 probes and each one is going to scan a t_small bucket with\n>>> chain length 30, so that looks really expensive. Actually only a\n>>> small percentage of its probes find tuples with the right hash value,\n>>> but final_cost_hash_join() doesn't know that. So we hash t_big\n>>> instead, which we estimated pretty well and it finishes up with\n>>> buckets of length 1,000 (which is actually fine in this case, they're\n>>> not unwanted hash collisions, they're duplicate keys that we need to\n>>> emit) and we probe them 3,000 times (which is also fine in this case),\n>>> but we had to do a bunch of memory allocation and/or batch file IO and\n>>> that turns out to be slower.\n>>>\n>>> I am not at all sure about this but I wonder if it would be better to\n>>> use something like:\n>>>\n>>> run_cost += outer_path_rows * some_small_probe_cost;\n>>> run_cost += hash_qual_cost.per_tuple * approximate_tuple_count();\n>>>\n>>> If we can estimate how many tuples will actually match accurately,\n>>> that should also be the number of times we have to run the quals,\n>>> since we don't usually expect hash collisions (bucket collisions, yes,\n>>> but hash collisions where the key doesn't turn out to be equal, no*).\n>>>\n>>> * ... 
but also yes as you approach various limits, so you could also\n>>> factor in bucket chain length that is due to being prevented from\n>>> expanding the number of buckets by arbitrary constraints, and perhaps\n>>> also birthday_problem(hash size, key space) to factor in unwanted hash\n>>> collisions that start to matter once you get to billions of keys and\n>>> expect collisions with short hashes.\n>>>\n>>\nFYI: I tried this on 12.1, and found it uses small_table as the inner table\nalready. I didn't look into the details so far.\n\npostgres=# explain (costs off) select * from join_hash_t_small,\njoin_hash_t_big where a = b;\n QUERY PLAN\n--------------------------------------------------------\n Hash Join\n Hash Cond: (join_hash_t_big.b = join_hash_t_small.a)\n -> Seq Scan on join_hash_t_big\n -> Hash\n -> Seq Scan on join_hash_t_small\n(5 rows)\n\npostgres=# select version();\n version\n-----------------------------------------------------------------------------------------------------------------\n PostgreSQL 12.1 on x86_64-apple-darwin18.7.0, compiled by Apple LLVM\nversion 10.0.1 (clang-1001.0.46.4), 64-bit\n(1 row)",
"msg_date": "Thu, 28 Nov 2019 17:45:58 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planner chose a much slower plan in hashjoin, using a large table\n as the inner table."
},
{
"msg_contents": "Hi Andy,\n\nI just tested the query on 12.1, but pg uses big_table as the inner table.\n\ndemo=# explain (costs off) select * from t_small, t_big where a = b;\n QUERY PLAN\n------------------------------------\n Hash Join\n Hash Cond: (t_small.a = t_big.b)\n -> Seq Scan on t_small\n -> Hash\n -> Seq Scan on t_big\n\nDid you insert data and set max_parallel_workers_per_gather to 0 like above?\n\ncreate table t_small(a int);\ncreate table t_big(b int);\ninsert into t_small select i%100 from generate_series(0, 3000)i;\ninsert into t_big select i%100000 from generate_series(1, 100000000)i ;\nanalyze t_small;\nanalyze t_big;\nset max_parallel_workers_per_gather = 0;\n\nOn Thu, Nov 28, 2019 at 5:46 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Fri, Nov 22, 2019 at 6:51 PM Jinbao Chen <jinchen@pivotal.io> wrote:\n>\n>> Hi hackers,\n>>\n>> I have made a patch to fix the problem.\n>>\n>> Added the selection rate of the inner table non-empty bucket\n>>\n>> The planner will use big table as inner table in hash join\n>> if small table have fewer unique values. But this plan is\n>> much slower than using small table as inner table.\n>>\n>> In general, the cost of creating a hash table is higher\n>> than the cost of querying a hash table. So we tend to use\n>> small tables as internal tables. But if the average chain\n>> length of the bucket is large, the situation is just the\n>> opposite.\n>>\n>> If virtualbuckets is much larger than innerndistinct, and\n>> outerndistinct is much larger than innerndistinct. Then most\n>> tuples of the outer table will match the empty bucket. So when\n>> we calculate the cost of traversing the bucket, we need to\n>> ignore the tuple matching empty bucket.\n>>\n>> So we add the selection rate of the inner table non-empty\n>> bucket. 
The formula is:\n>> (1 - ((outerndistinct - innerndistinct)/outerndistinct)*\n>> ((virtualbuckets - innerndistinct)/virtualbuckets))\n>>\n>>\n>> On Tue, Nov 19, 2019 at 5:56 PM Jinbao Chen <jinchen@pivotal.io> wrote:\n>>\n>>> I think we have the same understanding of this issue.\n>>>\n>>> Sometimes use smaller costs on scanning the chain in bucket like below\n>>> would\n>>> be better.\n>>> run_cost += outer_path_rows * some_small_probe_cost;\n>>> run_cost += hash_qual_cost.per_tuple * approximate_tuple_count();\n>>> In some version of GreenPlum(a database based on postgres), we just\n>>> disabled\n>>> the cost on scanning the bucket chain. In most cases, this can get a\n>>> better query\n>>> plan. But I am worried that it will be worse in some cases.\n>>>\n>>> Now only the small table's distinct value is much smaller than the\n>>> bucket number,\n>>> and much smaller than the distinct value of the large table, the planner\n>>> will get the\n>>> wrong plan.\n>>>\n>>> For example, if inner table has 100 distinct values, and 3000 rows. Hash\n>>> table\n>>> has 1000 buckets. Outer table has 10000 distinct values.\n>>> We can assume that all the 100 distinct values of the inner table are\n>>> included in the\n>>> 10000 distinct values of the outer table. So (100/10000)*outer_rows\n>>> tuples will\n>>> probe the buckets has chain. And (9900/10000)*outer_rows tuples will\n>>> probe\n>>> all the 1000 buckets randomly. So (9900/10000)*outer_rows*(900/1000)\n>>> tuples will\n>>> probe empty buckets. 
So the costs on scanning bucket chain is\n>>>\n>>> hash_qual_cost.per_tuple*innerbucketsize*outer_rows*\n>>> (1 - ((outer_distinct - inner_distinct)/outer_distinct)*((buckets_num -\n>>> inner_disttinct)/buckets_num))\n>>>\n>>> Do you think this assumption is reasonable?\n>>>\n>>>\n>>> On Tue, Nov 19, 2019 at 3:46 PM Thomas Munro <thomas.munro@gmail.com>\n>>> wrote:\n>>>\n>>>> On Mon, Nov 18, 2019 at 7:48 PM Jinbao Chen <jinchen@pivotal.io> wrote:\n>>>> > In the test case above, the small table has 3000 tuples and 100\n>>>> distinct values on column ‘a’.\n>>>> > If we use small table as inner table. The chan length of the bucket\n>>>> is 30. And we need to\n>>>> > search the whole chain on probing the hash table. So the cost of\n>>>> probing is bigger than build\n>>>> > hash table, and we need to use big table as inner.\n>>>> >\n>>>> > But in fact this is not true. We initialized 620,000 buckets in\n>>>> hashtable. But only 100 buckets\n>>>> > has chains with length 30. Other buckets are empty. Only hash values\n>>>> need to be compared.\n>>>> > Its costs are very small. We have 100,000 distinct key and\n>>>> 100,000,000 tuple on outer table.\n>>>> > Only (100/100000)* tuple_num tuples will search the whole chain. The\n>>>> other tuples\n>>>> > (number = (98900/100000)*tuple_num*) in outer\n>>>> > table just compare with the hash value. So the actual cost is much\n>>>> smaller than the planner\n>>>> > calculated. This is the reason why using a small table as inner is\n>>>> faster.\n>>>>\n>>>> So basically we think that if t_big is on the outer side, we'll do\n>>>> 100,000,000 probes and each one is going to scan a t_small bucket with\n>>>> chain length 30, so that looks really expensive. Actually only a\n>>>> small percentage of its probes find tuples with the right hash value,\n>>>> but final_cost_hash_join() doesn't know that. 
So we hash t_big\n>>>> instead, which we estimated pretty well and it finishes up with\n>>>> buckets of length 1,000 (which is actually fine in this case, they're\n>>>> not unwanted hash collisions, they're duplicate keys that we need to\n>>>> emit) and we probe them 3,000 times (which is also fine in this case),\n>>>> but we had to do a bunch of memory allocation and/or batch file IO and\n>>>> that turns out to be slower.\n>>>>\n>>>> I am not at all sure about this but I wonder if it would be better to\n>>>> use something like:\n>>>>\n>>>> run_cost += outer_path_rows * some_small_probe_cost;\n>>>> run_cost += hash_qual_cost.per_tuple * approximate_tuple_count();\n>>>>\n>>>> If we can estimate how many tuples will actually match accurately,\n>>>> that should also be the number of times we have to run the quals,\n>>>> since we don't usually expect hash collisions (bucket collisions, yes,\n>>>> but hash collisions where the key doesn't turn out to be equal, no*).\n>>>>\n>>>> * ... but also yes as you approach various limits, so you could also\n>>>> factor in bucket chain length that is due to being prevented from\n>>>> expanding the number of buckets by arbitrary constraints, and perhaps\n>>>> also birthday_problem(hash size, key space) to factor in unwanted hash\n>>>> collisions that start to matter once you get to billions of keys and\n>>>> expect collisions with short hashes.\n>>>>\n>>>\n> FYI: I tried this on 12.1, and find it use small_table as inner table\n> already. 
I didn't look into the details so far.\n>\n> postgres=# explain (costs off) select * from join_hash_t_small,\n> join_hash_t_big where a = b;\n> QUERY PLAN\n> --------------------------------------------------------\n> Hash Join\n> Hash Cond: (join_hash_t_big.b = join_hash_t_small.a)\n> -> Seq Scan on join_hash_t_big\n> -> Hash\n> -> Seq Scan on join_hash_t_small\n> (5 rows)\n>\n> postgres=# select version();\n> version\n>\n> -----------------------------------------------------------------------------------------------------------------\n> PostgreSQL 12.1 on x86_64-apple-darwin18.7.0, compiled by Apple LLVM\n> version 10.0.1 (clang-1001.0.46.4), 64-bit\n> (1 row)\n>",
"msg_date": "Thu, 28 Nov 2019 19:18:57 +0800",
"msg_from": "Jinbao Chen <jinchen@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Planner chose a much slower plan in hashjoin, using a large table\n as the inner table."
},
{
"msg_contents": "On Thu, Nov 28, 2019 at 7:19 PM Jinbao Chen <jinchen@pivotal.io> wrote:\n\n> Hi Andy,\n>\n> I just test the query on 12.1. But pg use big_table as inner.\n>\n> demo=# explain (costs off) select * from t_small, t_big where a = b;\n> QUERY PLAN\n> ------------------------------------\n> Hash Join\n> Hash Cond: (t_small.a = t_big.b)\n> -> Seq Scan on t_small\n> -> Hash\n> -> Seq Scan on t_big\n>\n> Do you insert data and set max_parallel_workers_per_gather to 0 like\n> above?\n>\n\nSorry for the noise, you are right: I loaded the data but ran the\nquery immediately without running ANALYZE first.\n\nNow it is using the big table as the inner table.\n\n\n> create table t_small(a int);\n> create table t_big(b int);\n> insert into t_small select i%100 from generate_series(0, 3000)i;\n> insert into t_big select i%100000 from generate_series(1, 100000000)i ;\n> analyze t_small;\n> analyze t_big;\n> set max_parallel_workers_per_gather = 0;\n>\n> On Thu, Nov 28, 2019 at 5:46 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>>\n>>\n>> On Fri, Nov 22, 2019 at 6:51 PM Jinbao Chen <jinchen@pivotal.io> wrote:\n>>\n>>> Hi hackers,\n>>>\n>>> I have made a patch to fix the problem.\n>>>\n>>> Added the selection rate of the inner table non-empty bucket\n>>>\n>>> The planner will use big table as inner table in hash join\n>>> if small table have fewer unique values. But this plan is\n>>> much slower than using small table as inner table.\n>>>\n>>> In general, the cost of creating a hash table is higher\n>>> than the cost of querying a hash table. So we tend to use\n>>> small tables as internal tables. But if the average chain\n>>> length of the bucket is large, the situation is just the\n>>> opposite.\n>>>\n>>> If virtualbuckets is much larger than innerndistinct, and\n>>> outerndistinct is much larger than innerndistinct. Then most\n>>> tuples of the outer table will match the empty bucket. 
So when\n>>> we calculate the cost of traversing the bucket, we need to\n>>> ignore the tuple matching empty bucket.\n>>>\n>>> So we add the selection rate of the inner table non-empty\n>>> bucket. The formula is:\n>>> (1 - ((outerndistinct - innerndistinct)/outerndistinct)*\n>>> ((virtualbuckets - innerndistinct)/virtualbuckets))\n>>>\n>>>\n>>> On Tue, Nov 19, 2019 at 5:56 PM Jinbao Chen <jinchen@pivotal.io> wrote:\n>>>\n>>>> I think we have the same understanding of this issue.\n>>>>\n>>>> Sometimes use smaller costs on scanning the chain in bucket like below\n>>>> would\n>>>> be better.\n>>>> run_cost += outer_path_rows * some_small_probe_cost;\n>>>> run_cost += hash_qual_cost.per_tuple * approximate_tuple_count();\n>>>> In some version of GreenPlum(a database based on postgres), we just\n>>>> disabled\n>>>> the cost on scanning the bucket chain. In most cases, this can get a\n>>>> better query\n>>>> plan. But I am worried that it will be worse in some cases.\n>>>>\n>>>> Now only the small table's distinct value is much smaller than the\n>>>> bucket number,\n>>>> and much smaller than the distinct value of the large table, the\n>>>> planner will get the\n>>>> wrong plan.\n>>>>\n>>>> For example, if inner table has 100 distinct values, and 3000 rows.\n>>>> Hash table\n>>>> has 1000 buckets. Outer table has 10000 distinct values.\n>>>> We can assume that all the 100 distinct values of the inner table are\n>>>> included in the\n>>>> 10000 distinct values of the outer table. So (100/10000)*outer_rows\n>>>> tuples will\n>>>> probe the buckets has chain. And (9900/10000)*outer_rows tuples will\n>>>> probe\n>>>> all the 1000 buckets randomly. So (9900/10000)*outer_rows*(900/1000)\n>>>> tuples will\n>>>> probe empty buckets. 
So the costs on scanning bucket chain is\n>>>>\n>>>> hash_qual_cost.per_tuple*innerbucketsize*outer_rows*\n>>>> (1 - ((outer_distinct - inner_distinct)/outer_distinct)*((buckets_num -\n>>>> inner_disttinct)/buckets_num))\n>>>>\n>>>> Do you think this assumption is reasonable?\n>>>>\n>>>>\n>>>> On Tue, Nov 19, 2019 at 3:46 PM Thomas Munro <thomas.munro@gmail.com>\n>>>> wrote:\n>>>>\n>>>>> On Mon, Nov 18, 2019 at 7:48 PM Jinbao Chen <jinchen@pivotal.io>\n>>>>> wrote:\n>>>>> > In the test case above, the small table has 3000 tuples and 100\n>>>>> distinct values on column ‘a’.\n>>>>> > If we use small table as inner table. The chan length of the bucket\n>>>>> is 30. And we need to\n>>>>> > search the whole chain on probing the hash table. So the cost of\n>>>>> probing is bigger than build\n>>>>> > hash table, and we need to use big table as inner.\n>>>>> >\n>>>>> > But in fact this is not true. We initialized 620,000 buckets in\n>>>>> hashtable. But only 100 buckets\n>>>>> > has chains with length 30. Other buckets are empty. Only hash values\n>>>>> need to be compared.\n>>>>> > Its costs are very small. We have 100,000 distinct key and\n>>>>> 100,000,000 tuple on outer table.\n>>>>> > Only (100/100000)* tuple_num tuples will search the whole chain. The\n>>>>> other tuples\n>>>>> > (number = (98900/100000)*tuple_num*) in outer\n>>>>> > table just compare with the hash value. So the actual cost is much\n>>>>> smaller than the planner\n>>>>> > calculated. This is the reason why using a small table as inner is\n>>>>> faster.\n>>>>>\n>>>>> So basically we think that if t_big is on the outer side, we'll do\n>>>>> 100,000,000 probes and each one is going to scan a t_small bucket with\n>>>>> chain length 30, so that looks really expensive. Actually only a\n>>>>> small percentage of its probes find tuples with the right hash value,\n>>>>> but final_cost_hash_join() doesn't know that. 
So we hash t_big\n>>>>> instead, which we estimated pretty well and it finishes up with\n>>>>> buckets of length 1,000 (which is actually fine in this case, they're\n>>>>> not unwanted hash collisions, they're duplicate keys that we need to\n>>>>> emit) and we probe them 3,000 times (which is also fine in this case),\n>>>>> but we had to do a bunch of memory allocation and/or batch file IO and\n>>>>> that turns out to be slower.\n>>>>>\n>>>>> I am not at all sure about this but I wonder if it would be better to\n>>>>> use something like:\n>>>>>\n>>>>> run_cost += outer_path_rows * some_small_probe_cost;\n>>>>> run_cost += hash_qual_cost.per_tuple * approximate_tuple_count();\n>>>>>\n>>>>> If we can estimate how many tuples will actually match accurately,\n>>>>> that should also be the number of times we have to run the quals,\n>>>>> since we don't usually expect hash collisions (bucket collisions, yes,\n>>>>> but hash collisions where the key doesn't turn out to be equal, no*).\n>>>>>\n>>>>> * ... but also yes as you approach various limits, so you could also\n>>>>> factor in bucket chain length that is due to being prevented from\n>>>>> expanding the number of buckets by arbitrary constraints, and perhaps\n>>>>> also birthday_problem(hash size, key space) to factor in unwanted hash\n>>>>> collisions that start to matter once you get to billions of keys and\n>>>>> expect collisions with short hashes.\n>>>>>\n>>>>\n>> FYI: I tried this on 12.1, and find it use small_table as inner table\n>> already. 
I didn't look into the details so far.\n>>\n>> postgres=# explain (costs off) select * from join_hash_t_small,\n>> join_hash_t_big where a = b;\n>> QUERY PLAN\n>> --------------------------------------------------------\n>> Hash Join\n>> Hash Cond: (join_hash_t_big.b = join_hash_t_small.a)\n>> -> Seq Scan on join_hash_t_big\n>> -> Hash\n>> -> Seq Scan on join_hash_t_small\n>> (5 rows)\n>>\n>> postgres=# select version();\n>> version\n>>\n>> -----------------------------------------------------------------------------------------------------------------\n>> PostgreSQL 12.1 on x86_64-apple-darwin18.7.0, compiled by Apple LLVM\n>> version 10.0.1 (clang-1001.0.46.4), 64-bit\n>> (1 row)\n>>\n>\n\nOn Thu, Nov 28, 2019 at 7:19 PM Jinbao Chen <jinchen@pivotal.io> wrote:Hi Andy,I just test the query on 12.1. But pg use big_table as inner.demo=# explain (costs off) select * from t_small, t_big where a = b; QUERY PLAN------------------------------------ Hash Join Hash Cond: (t_small.a = t_big.b) -> Seq Scan on t_small -> Hash -> Seq Scan on t_bigDo you insert data and set max_parallel_workers_per_gather to 0 like above?Sorry for the noise.. you are right. I thought I load the data but and run the query immediately without running the analyzing. now it is using big table as inner table. create table t_small(a int);create table t_big(b int);insert into t_small select i%100 from generate_series(0, 3000)i;insert into t_big select i%100000 from generate_series(1, 100000000)i ;analyze t_small;analyze t_big;set max_parallel_workers_per_gather = 0;On Thu, Nov 28, 2019 at 5:46 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:On Fri, Nov 22, 2019 at 6:51 PM Jinbao Chen <jinchen@pivotal.io> wrote:Hi hackers,I have made a patch to fix the problem. Added the selection rate of the inner table non-empty bucketThe planner will use big table as inner table in hash joinif small table have fewer unique values. 
But this plan ismuch slower than using small table as inner table.In general, the cost of creating a hash table is higherthan the cost of querying a hash table. So we tend to usesmall tables as internal tables. But if the average chainlength of the bucket is large, the situation is just theopposite.If virtualbuckets is much larger than innerndistinct, andouterndistinct is much larger than innerndistinct. Then mosttuples of the outer table will match the empty bucket. So whenwe calculate the cost of traversing the bucket, we need toignore the tuple matching empty bucket.So we add the selection rate of the inner table non-emptybucket. The formula is:(1 - ((outerndistinct - innerndistinct)/outerndistinct)*((virtualbuckets - innerndistinct)/virtualbuckets))On Tue, Nov 19, 2019 at 5:56 PM Jinbao Chen <jinchen@pivotal.io> wrote:I think we have the same understanding of this issue.Sometimes use smaller costs on scanning the chain in bucket like below wouldbe better. run_cost += outer_path_rows * some_small_probe_cost;run_cost += hash_qual_cost.per_tuple * approximate_tuple_count();In some version of GreenPlum(a database based on postgres), we just disabledthe cost on scanning the bucket chain. In most cases, this can get a better queryplan. But I am worried that it will be worse in some cases.Now only the small table's distinct value is much smaller than the bucket number,and much smaller than the distinct value of the large table, the planner will get thewrong plan. For example, if inner table has 100 distinct values, and 3000 rows. Hash table has 1000 buckets. Outer table has 10000 distinct values.We can assume that all the 100 distinct values of the inner table are included in the10000 distinct values of the outer table. So (100/10000)*outer_rows tuples willprobe the buckets has chain. And (9900/10000)*outer_rows tuples will probeall the 1000 buckets randomly. So (9900/10000)*outer_rows*(900/1000) tuples willprobe empty buckets. 
So the cost of scanning the bucket chain is:\nhash_qual_cost.per_tuple*innerbucketsize*outer_rows*(1 - ((outer_distinct - inner_distinct)/outer_distinct)*((buckets_num - inner_distinct)/buckets_num))\n\nDo you think this assumption is reasonable?\n\nOn Tue, Nov 19, 2019 at 3:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\nOn Mon, Nov 18, 2019 at 7:48 PM Jinbao Chen <jinchen@pivotal.io> wrote:\n> In the test case above, the small table has 3000 tuples and 100 distinct values on column ‘a’.\n> If we use the small table as inner table, the chain length of the buckets is 30, and we need to\n> search the whole chain on probing the hash table. So the cost of probing is bigger than building the\n> hash table, and we need to use the big table as inner.\n>\n> But in fact this is not true. We initialized 620,000 buckets in the hashtable, but only 100 buckets\n> have chains with length 30. The other buckets are empty; only hash values need to be compared, and\n> their costs are very small. We have 100,000 distinct keys and 100,000,000 tuples in the outer table.\n> Only (100/100000)*tuple_num tuples will search the whole chain. The other tuples\n> (number = (99900/100000)*tuple_num) in the outer\n> table just compare with the hash value. So the actual cost is much smaller than the planner\n> calculated. This is the reason why using a small table as inner is faster.\n\nSo basically we think that if t_big is on the outer side, we'll do\n100,000,000 probes and each one is going to scan a t_small bucket with\nchain length 30, so that looks really expensive. Actually only a\nsmall percentage of its probes find tuples with the right hash value,\nbut final_cost_hash_join() doesn't know that. 
So we hash t_big\ninstead, which we estimated pretty well and it finishes up with\nbuckets of length 1,000 (which is actually fine in this case, they're\nnot unwanted hash collisions, they're duplicate keys that we need to\nemit) and we probe them 3,000 times (which is also fine in this case),\nbut we had to do a bunch of memory allocation and/or batch file IO and\nthat turns out to be slower.\n\nI am not at all sure about this but I wonder if it would be better to\nuse something like:\n\n run_cost += outer_path_rows * some_small_probe_cost;\n run_cost += hash_qual_cost.per_tuple * approximate_tuple_count();\n\nIf we can estimate how many tuples will actually match accurately,\nthat should also be the number of times we have to run the quals,\nsince we don't usually expect hash collisions (bucket collisions, yes,\nbut hash collisions where the key doesn't turn out to be equal, no*).\n\n* ... but also yes as you approach various limits, so you could also\nfactor in bucket chain length that is due to being prevented from\nexpanding the number of buckets by arbitrary constraints, and perhaps\nalso birthday_problem(hash size, key space) to factor in unwanted hash\ncollisions that start to matter once you get to billions of keys and\nexpect collisions with short hashes.FYI: I tried this on 12.1, and find it use small_table as inner table already. I didn't look into the details so far.postgres=# explain (costs off) select * from join_hash_t_small, join_hash_t_big where a = b; QUERY PLAN-------------------------------------------------------- Hash Join Hash Cond: (join_hash_t_big.b = join_hash_t_small.a) -> Seq Scan on join_hash_t_big -> Hash -> Seq Scan on join_hash_t_small(5 rows)postgres=# select version(); version----------------------------------------------------------------------------------------------------------------- PostgreSQL 12.1 on x86_64-apple-darwin18.7.0, compiled by Apple LLVM version 10.0.1 (clang-1001.0.46.4), 64-bit(1 row)",
"msg_date": "Thu, 28 Nov 2019 23:21:17 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planner chose a much slower plan in hashjoin, using a large table\n as the inner table."
}
] |
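The non-empty-bucket selectivity formula discussed in the thread above is easy to sanity-check numerically. The sketch below is a toy model, not planner code; the function name is illustrative only. It uses the thread's example of 100 inner distinct values, 1000 buckets, and 10000 outer distinct values:

```python
def nonempty_bucket_selectivity(outer_ndistinct, inner_ndistinct, virtual_buckets):
    # Probability that an outer tuple's key has no match in the inner table,
    # assuming inner keys are a subset of outer keys (as in the thread's example).
    p_key_unmatched = (outer_ndistinct - inner_ndistinct) / outer_ndistinct
    # Probability that a random bucket is empty, assuming one distinct
    # inner key per bucket (virtual_buckets >> inner_ndistinct).
    p_bucket_empty = (virtual_buckets - inner_ndistinct) / virtual_buckets
    # Fraction of outer probes expected to land on a non-empty bucket.
    return 1 - p_key_unmatched * p_bucket_empty

sel = nonempty_bucket_selectivity(outer_ndistinct=10000,
                                  inner_ndistinct=100,
                                  virtual_buckets=1000)
print(sel)  # ~0.109: only ~11% of probes would walk a bucket chain at all
```

Scaling the per-tuple bucket-scan cost by this factor, rather than charging every probe for the full chain length, is what would make the small table look cheap as the inner side again.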
[
{
"msg_contents": "Dear Hackers,\n\nA customer reported a strange behaviour on a PITR restoration. \nAfter a drop database, he tried to recover the data on the last inserted\ntransaction by using the recovery_target_time.\nThe issue is the database is present in the system catalog but the\ndirectory was still deleted.\nHere the technical information of the database\nversion 11\ndefault postgresql.conf except for this options\n wal_level = replica\n archive_mode = on\n archive_command = 'cp %p /tmp/wal_archive/%f '\n log_statement = 'all'\n log_min_messages = debug5\n\n \nThe following method was used \n\n * create cluster\n\n * create database\n\n * create 1 table \n\n * create 1 index on 1 column\n\n * insert 1 rows\n\n * backup with pg_base_backup\n\n * insert 2 rows\n\n * drop database\n\n * stop instance\n\n * found the last inserted transaction timestamp(|'2019-11-13\n 11:49:08.413744+01'|) before drop database\n\n * replace $datadir by a pg_base_backup archive\n\n * edit recovery.conf\n\n * |restore_command = 'cp /tmp/wal_archive/%f \"%p\"'|\n\n * |recovery_target_time = '2019-11-13 11:49:08.413744+01'|\n\n * |recovery_target_inclusive = true|\n\n * |restart cluster|\n\n|\n|\n|\n|\n|I tried to understand what's happening, when we analyse the\npostgresql.log (log_min_message = debug5), we can see that |\n|\n|\n|the recovery stopped before transaction 574 (repository database) so at\ntransaction 573 being the last insert, but the database directory was\nstill deleted.|\n|\n|\n|\n|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] DEBUG: remove KnownAssignedXid 572|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] CONTEXT: WAL redo at 0/3000178 for\nTransaction/COMMIT: 2019-11-13 11:49:08.248928+01|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] DEBUG: record known xact 573\nlatestObservedXid 572|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] CONTEXT: WAL redo at 0/30001A0 for\nHeap/INSERT: off 3|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] DEBUG: record known xact 
573\nlatestObservedXid 573|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] CONTEXT: WAL redo at 0/30001F8 for\nBtree/INSERT_LEAF: off 3|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] DEBUG: record known xact 573\nlatestObservedXid 573|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] CONTEXT: WAL redo at 0/3000238 for\nTransaction/COMMIT: 2019-11-13 11:49:08.413744+01|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] DEBUG: record known xact 573\nlatestObservedXid 573|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] CONTEXT: WAL redo at 0/3000238 for\nTransaction/COMMIT: 2019-11-13 11:49:08.413744+01|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] DEBUG: remove KnownAssignedXid 573|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] CONTEXT: WAL redo at 0/3000238 for\nTransaction/COMMIT: 2019-11-13 11:49:08.413744+01|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] DEBUG: record known xact 574\nlatestObservedXid 573|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] CONTEXT: WAL redo at 0/3000260 for\nHeap/DELETE: off 4 KEYS_UPDATED|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] DEBUG: prune KnownAssignedXids to 574|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] CONTEXT: WAL redo at 0/3000730 for\nStandby/RUNNING_XACTS: nextXid 575 latestCompletedXid 573\noldestRunningXid 574; 1 xacts: 574|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] DEBUG: record known xact 574\nlatestObservedXid 574|\n|\n|\n|2019-11-13 11:55:12.732 CET [30666] CONTEXT: WAL redo at 0/30007D8 for\nDatabase/DROP: dir 16384/1663|\n|\n|\n|2019-11-13 11:55:12.738 CET [30666] LOG: recovery stopping before\ncommit of transaction 574, time 2019-11-13 11:49:10.683426+01|\n|\n|\n|2019-11-13 11:55:12.738 CET [30666] LOG: recovery has paused|\n|\n|\n|\n|\n|\n|\n\nBy analysing the wal file with pg_waldump \n\n|rmgr: Heap len (rec/tot): 54/ 198, tx: 572, lsn:\n0/03000028, prev 0/020000F8, desc: INSERT off 2, blkref #0: rel\n1663/16384/16385 blk 0 FPW|\n|\n|\n|rmgr: Btree len (rec/tot): 53/ 133, tx: 572, lsn:\n0/030000F0, prev 0/03000028, desc: 
INSERT_LEAF off 2, blkref #0: rel\n1663/16384/16388 blk 1 FPW|\n|\n|\n|rmgr: Transaction len (rec/tot): 34/ 34, tx: 572, lsn:\n0/03000178, prev 0/030000F0, desc: COMMIT 2019-11-13 11:49:08.248928 CET|\n|\n|\n|rmgr: Heap len (rec/tot): 87/ 87, tx: 573, lsn:\n0/030001A0, prev 0/03000178, desc: INSERT off 3, blkref #0: rel\n1663/16384/16385 blk 0|\n|\n|\n|rmgr: Btree len (rec/tot): 64/ 64, tx: 573, lsn:\n0/030001F8, prev 0/030001A0, desc: INSERT_LEAF off 3, blkref #0: rel\n1663/16384/16388 blk 1|\n|\n|\n|rmgr: Transaction len (rec/tot): 34/ 34, tx: 573, lsn:\n0/03000238, prev 0/030001F8, desc: COMMIT 2019-11-13 11:49:08.413744 CET|\n|\n|\n|rmgr: Heap len (rec/tot): 59/ 1227, tx: 574, lsn:\n0/03000260, prev 0/03000238, desc: DELETE off 4 KEYS_UPDATED , blkref\n#0: rel 1664/0/1262 blk 0 FPW|\n|\n|\n|rmgr: Standby len (rec/tot): 54/ 54, tx: 0, lsn:\n0/03000730, prev 0/03000260, desc: RUNNING_XACTS nextXid 575\nlatestCompletedXid 573 oldestRunningXid 574; 1 xacts: 574|\n|\n|\n|rmgr: XLOG len (rec/tot): 106/ 106, tx: 0, lsn:\n0/03000768, prev 0/03000730, desc: CHECKPOINT_ONLINE redo 0/3000730; tli\n1; prev tli 1; fpw true; xid 0:575; oid 24576; multi 1; offset 0; oldest\nxid 561 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp\nxid: 0/0; oldest running xid 574; online|\n|\n|\n|rmgr: Database len (rec/tot): 34/ 34, tx: 574, lsn:\n0/030007D8, prev 0/03000768, desc: DROP dir 16384/1663|\n|\n|\n|rmgr: Transaction len (rec/tot): 66/ 66, tx: 574, lsn:\n0/03000800, prev 0/030007D8, desc: COMMIT 2019-11-13 11:49:10.683426\nCET; inval msgs: catcache 21; sync|\n|\n|\n\n\nWe notice that the following log\n\n|rmgr: Database len (rec/tot): 34/ 34, tx: 574, lsn:\n0/030007D8, prev 0/03000768, desc: DROP dir 16384/1663|\n\nis executed between the last commit that we are interested inand the\nnext record with a timestamp\n\n|rmgr: Transaction len (rec/tot): 66/ 66, tx: 574, lsn:\n0/03000800, prev 0/030007D8, desc: COMMIT 2019-11-13 11:49:10.683426\nCET; inval msgs: catcache 
21; sync|\n|\n|\n\nWe understand that the drop database command is not transactional, but the\ndrop dir is attached to the xact whose xid has a commit with a\ntimestamp out of the recovery_target_time bound.\nOn the other hand, the DBA's role is to determine at which xact recovery\nshould stop and define recovery_target_xid rather than\nrecovery_target_time.\nHumans are prone to use natural things such as time to define \"when\" to\nstop or start things.\nWe know that this rarely happens in production, because you can't drop a\ndatabase if users are still connected. But with the new force drop\ndatabase option, it might be a reasonable choice to improve the\nsituation with that recovery_target_time directive.\n\nIt turns out there are two different choices we can make:\n\n * Change recovery behaviour in that case to prevent all xact\n operations from performing until the COMMIT timestamp is checked against\n the recovery_time bound (but it seems to be difficult, as stated in\n https://www.postgresql.org/message-id/flat/20141125160629.GC21475%40msg.df7cb.de which\n also identifies the problem and tries to give some solutions). Maybe\n another way, as a trivial guess (all apologies), is to buffer\n immediate xacts until we have the commit for each and apply the\n whole buffered xact once the timestamp is known (and checked against the\n recovery_target_time value);\n\n * The other way to improve this is to update the PostgreSQL\n documentation by specifying that recovery_target_time cannot be\n used in this case. There should be multiple places where it can be\n stated. The best one (if only one) seems to be in\n https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=doc/src/sgml/config.sgml;h=f83770350eda5625179526300c652f23ff29c9fe;hb=HEAD#l3400\n\nWe are willing to help on this case either with code patching or\ndocumentation improvement.\n\nBest regards,\n\n-- \nLOXODATA\nNicolas Lutic",
"msg_date": "Mon, 18 Nov 2019 11:48:37 +0100",
"msg_from": "Nicolas Lutic <n.lutic@loxodata.com>",
"msg_from_op": true,
"msg_subject": "PITR on DROP DATABASE, deleting of the database directory despite the\n recovery_target_time set before."
},
{
"msg_contents": "On Mon, 18 Nov 2019 at 18:48, Nicolas Lutic <n.lutic@loxodata.com> wrote:\n\n> Dear Hackers,\n>\n> After a drop database\n>\n\nwith FORCE?\n\n\n> , he tried to recover the data on the last inserted transaction by using\n> the recovery_target_time.\n> The issue is the database is present in the system catalog but the\n> directory was still deleted.\n> Here the technical information of the database\n> version 11\n> default postgresql.conf except for this options\n> wal_level = replica\n> archive_mode = on\n> archive_command = 'cp %p /tmp/wal_archive/%f '\n> log_statement = 'all'\n> log_min_messages = debug5\n>\n>\n> The following method was used\n>\n> - create cluster\n>\n>\n> - create database\n>\n>\n> - create 1 table\n>\n>\n> - create 1 index on 1 column\n>\n>\n> - insert 1 rows\n>\n>\n> - backup with pg_base_backup\n>\n>\n> - insert 2 rows\n>\n> autocommit?\n\n>\n>\n>\n> - drop database\n>\n> force?\n\n\n>\n> - Change recovery behaviour in that case to prevent all xact\n> operation to perform until COMMIT timestamp is checked against\n> recovery_time bound (but it seems to be difficult as state\n> https://www.postgresql.org/message-id/flat/20141125160629.GC21475%40msg.df7cb.de\n> which also identifies the problem and tries to give some solutions. Maybe\n> another way, as a trivial guess (all apologises) is to buffer immediate\n> xacts until we have the commit for each and apply the whole buffer xact\n> once the timestamp known (and checked agains recovery_target_time value);\n>\n>\n> - The other way to improve this is to update PostgreSQL\n> documentation by specifying that recovery_target_time cannot be used\n> in this case. 
There should be multiple places where it can be stated.\n> The best one (if only one) seems to be in\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=doc/src/sgml/config.sgml;h=\n> <https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=doc/src/sgml/config.sgml;h=f83770350eda5625179526300c652f23ff29c9fe;hb=HEAD#l3400>\n>\n>\nIf this only happens when a DB is dropped under load with force, I lean\ntoward just documenting it as a corner case.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Tue, 19 Nov 2019 08:40:39 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR on DROP DATABASE, deleting of the database directory despite\n the recovery_target_time set before."
},
{
"msg_contents": "\nOn 11/19/19 1:40 AM, Craig Ringer wrote:\n> On Mon, 18 Nov 2019 at 18:48, Nicolas Lutic <n.lutic@loxodata.com\n> <mailto:n.lutic@loxodata.com>> wrote:\n> \n> Dear Hackers,\n> \n> After a drop database\n> \n> \n> with FORCE?\nNo, we tested with PostgreSQL v 11 and we don't have this option.\n> \n> \n> , he tried to recover the data on the last inserted transaction by\n> using the recovery_target_time.\n> The issue is the database is present in the system catalog but the\n> directory was still deleted.\n> Here the technical information of the database\n> version 11\n> default postgresql.conf except for this options\n> wal_level = replica\n> archive_mode = on\n> archive_command = 'cp %p /tmp/wal_archive/%f '\n> log_statement = 'all'\n> log_min_messages = debug5\n> \n> \n> The following method was used \n> \n> * create cluster\n> \n> * create database\n> \n> * create 1 table \n> \n> * create 1 index on 1 column\n> \n> * insert 1 rows\n> \n> * backup with pg_base_backup\n> \n> * insert 2 rows\n> \n> autocommit? \n\nYes, I forgot to mention it.\n\n> \n> * drop database\n> \n> force?\n> \n> \n> * Change recovery behaviour in that case to prevent all xact\n> operation to perform until COMMIT timestamp is checked against\n> recovery_time bound (but it seems to be difficult as\n> state https://www.postgresql.org/message-id/flat/20141125160629.GC21475%40msg.df7cb.dewhich\n> also identifies the problem and tries to give some solutions. \n> Maybe another way, as a trivial guess (all apologises) is to\n> buffer immediate xacts until we have the commit for each and\n> apply the whole buffer xact once the timestamp known (and\n> checked agains recovery_target_time value);\n> \n> * The other way to improve this is to update PostgreSQL\n> documentation by specifying that recovery_target_time cannot be\n> used in this case.There should be multiple places where it can\n> be stated. 
The best one (if only one) seems to be in \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=doc/src/sgml/config.sgml;h=\n> <https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=doc/src/sgml/config.sgml;h=f83770350eda5625179526300c652f23ff29c9fe;hb=HEAD#l3400>\n> \n> \n> If this only happens when a DB is dropped under load with force, I lean\n> toward just documenting it as a corner case.\n\nThis can happen in the case of a non-transactional instruction; DROP\nDATABASE (with or without FORCE) is one case, but there may be other cases?\n\nThe documentation modification has to mention this case and list the\nother most likely operations.\n\nAn idea, without in-depth knowledge of the code, in the case of\nrecovery_target_time (only), would be to look ahead through the records of an\nxact.\n\nEach record that is «timestamped» can be applied, but once we encounter a\nnon-timestamped record we could buffer the following records for any\nxacts until a timestamped commit/rollback for the transaction in which that\nnon-transactional op appears. Once the commit/rollback records are\nfound, there are two options:\n\t1) the commit/rollback timestamp is inside the \"replay\" bound, then the\nwhole buffer can be applied;\n\t2) the commit/rollback timestamp is beyond the upper time bound for\n\"replay\", then the whole buffer for that transaction could be cancelled.\nShould this only be done for the DROP DATABASE \"DELETE\" operation?\nMaybe this will lead to skewed pages, and this is a wrong way to do such\na thing.\n\nAnother assumption is that the \"DROP DATABASE\" sequence can be changed for\nthis operation to perform correctly.\n\nWe are aware that this part is tricky and will have little effect on\nnormal operations, as best practice is to use xid_target or lsn_target.\n\n\n> \n> -- \n> Craig Ringer http://www.2ndQuadrant.com/\n> 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nBest regards\n\n-- \nLOXODATA https://www.loxodata.com/\nConsulting - Training - Support\nNicolas Lutic\nConsultant trainer",
"msg_date": "Tue, 19 Nov 2019 16:15:14 +0100",
"msg_from": "Nicolas Lutic <n.lutic@loxodata.com>",
"msg_from_op": true,
"msg_subject": "Re: PITR on DROP DATABASE, deleting of the database directory despite\n the recovery_target_time set before."
},
{
"msg_contents": "Hello,\n\nOn Tue, 19 Nov 2019 at 16:15, Nicolas Lutic <n.lutic@loxodata.com> wrote:\n\n>\n> We are aware that this part is tricky and will have little effects on\n> normal operations, as best practices are to use xid_target or lsn_target.\n>\nI'm working with Nicolas and we made some further testing. If we use an xid\ntarget with inclusive set to false at the next xid after the insert, we end up\nwith the same DELETE/DROP directory behaviour, which is quite confusing. One\nhas to choose the xid-1 value with inclusive behaviour to make it work.\n\nI assume the right first step is to document the behaviour, and give\nsome examples of this.\n\nMaybe we could add some documentation in the xlog explanation and a warning\nfor recovery_target_time and recovery_target_xid in the GUC docs?\n\nIf there are better places in the docs, let us know.\n\nThanks\n\n\n-- \nJean-Christophe Arnu",
"msg_date": "Fri, 13 Dec 2019 10:09:53 +0100",
"msg_from": "Jean-Christophe Arnu <jcarnu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR on DROP DATABASE, deleting of the database directory despite\n the recovery_target_time set before."
}
] |
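The failure mode described in this thread can be illustrated with a toy replay loop. This is purely a sketch of the record ordering, not PostgreSQL's recovery code; the timestamps and xids are taken from the pg_waldump output above. Recovery can only compare recovery_target_time against records that carry a timestamp, so the non-transactional Database/DROP record of xid 574 is applied before the COMMIT record that finally triggers the stop:

```python
from datetime import datetime

# Simplified WAL stream modeled on the pg_waldump output in the thread.
# Only COMMIT records carry a timestamp; Database/DROP does not.
wal = [
    ("Heap/INSERT",        573, None),
    ("Transaction/COMMIT", 573, datetime(2019, 11, 13, 11, 49, 8)),
    ("Database/DROP",      574, None),  # non-transactional side effect
    ("Transaction/COMMIT", 574, datetime(2019, 11, 13, 11, 49, 10)),
]

recovery_target_time = datetime(2019, 11, 13, 11, 49, 9)

applied = []
for rec, xid, commit_ts in wal:
    # The target time can only be tested on timestamped (COMMIT) records,
    # so everything earlier -- including the DROP -- has already been applied.
    if commit_ts is not None and commit_ts > recovery_target_time:
        break  # "recovery stopping before commit of transaction 574"
    applied.append(rec)

print(applied)  # ['Heap/INSERT', 'Transaction/COMMIT', 'Database/DROP']
```

This is why the catalog still shows the database (xid 574's COMMIT was never replayed) while the directory is gone, and why an xid target at 574 with inclusive set to false hits the same problem.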
[
{
"msg_contents": "We had a scenario today that was new to us. We had a logical replication\nslot that was severely far behind. Before dropping this logical slot, we\nmade a physical point-in-time-recovery snapshot of the system with this\nlogical slot.\n\nThis logical slot was causing severe catalog bloat. We proceeded to drop\nthe logical slot which was over 12000 WAL segments behind. The physical\nslot was only a few 100 segments behind and still in place.\n\nBut now proceeding to VAC FULL the catalog tables did not recover any bloat\nbeyond the now-dropped logical slot. Eventually to our surprise, we found\nthat dropping the physical slot allowed us to recover the bloat.\n\nWe saw in forensics after the fact that xmin of the physical slot equaled\nthe catalog_xmin of the logical slot. Is there some dependency here where\nphysical slots made of a system retain all transactions of logical slots it\ncontains as well? If so, could someone help us understand this, and is\nthere documentation around this? Is this by design?\n\nWe had thought that the physical slot would only retain the WAL it needed\nfor its own restart_lsn, not the segments needed by only logical slots as\nwell. Any explanation would be much appreciated!\n\nThanks,\nJeremy",
"msg_date": "Mon, 18 Nov 2019 15:36:47 -0600",
"msg_from": "Jeremy Finzel <finzelj@gmail.com>",
"msg_from_op": true,
"msg_subject": "physical slot xmin dependency on logical slot?"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-18 15:36:47 -0600, Jeremy Finzel wrote:\n> We had a scenario today that was new to us. We had a logical replication\n> slot that was severely far behind. Before dropping this logical slot, we\n> made a physical point-in-time-recovery snapshot of the system with this\n> logical slot.\n\n> This logical slot was causing severe catalog bloat. We proceeded to drop\n> the logical slot which was over 12000 WAL segments behind. The physical\n> slot was only a few 100 segments behind and still in place.\n> \n> But now proceeding to VAC FULL the catalog tables did not recover any bloat\n> beyond the now-dropped logical slot. Eventually to our surprise, we found\n> that dropping the physical slot allowed us to recover the bloat.\n> \n> We saw in forensics after the fact that xmin of the physical slot equaled\n> the catalog_xmin of the logical slot. Is there some dependency here where\n> physical slots made of a system retain all transactions of logical slots it\n> contains as well? If so, could someone help us understand this, and is\n> there documentation around this? Is this by design?\n> \n> We had thought that the physical slot would only retain the WAL it needed\n> for its own restart_lsn, not the segments needed by only logical slots as\n> well. Any explanation would be much appreciated!\n\nThe logical slot on the standby affects hot_standby_feedback, which in\nturn means that the physical slot also transports xmin horizons to the\nprimary.\n\nNote that our docs suggest to drop slots when cloning a node (and\npg_basebackup/basebackup on the server side do so automatically):\n <para>\n It is often a good idea to also omit from the backup the files\n within the cluster's <filename>pg_replslot/</filename> directory, so that\n replication slots that exist on the master do not become part of the\n backup. 
Otherwise, the subsequent use of the backup to create a standby\n may result in indefinite retention of WAL files on the standby, and\n possibly bloat on the master if hot standby feedback is enabled, because\n the clients that are using those replication slots will still be connecting\n to and updating the slots on the master, not the standby. Even if the\n backup is only intended for use in creating a new master, copying the\n replication slots isn't expected to be particularly useful, since the\n contents of those slots will likely be badly out of date by the time\n the new master comes on line.\n </para>\n\nIt's generally useful to look at pg_stat_replication for these kinds of\nthings...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 18 Nov 2019 14:12:54 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: physical slot xmin dependency on logical slot?"
},
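The feedback mechanism Andres describes is visible in the slot catalog on the primary. A sketch of the kind of query involved (column set per the pg_replication_slots view as of PG 10; slot names and values depend on the installation):

```sql
-- On the primary: the horizons each slot pins.
-- With hot_standby_feedback on, a standby's physical slot relays the
-- oldest xmin/catalog_xmin required by any slot on that standby, so a
-- physical slot's xmin can equal a downstream logical slot's catalog_xmin.
SELECT slot_name,
       slot_type,
       active,
       xmin,         -- oldest xact whose effects this slot keeps visible
       catalog_xmin, -- oldest xact affecting the system catalogs
       restart_lsn
FROM pg_replication_slots
ORDER BY restart_lsn NULLS LAST;
```

In the scenario above, this query on the master would have shown the physical slot's xmin stuck at the (already dropped) logical slot's catalog_xmin, which is what kept VACUUM FULL from reclaiming the catalog bloat.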
{
"msg_contents": "On Tue, 19 Nov 2019 at 05:37, Jeremy Finzel <finzelj@gmail.com> wrote:\n\n> We had a scenario today that was new to us. We had a logical replication\n> slot that was severely far behind. Before dropping this logical slot, we\n> made a physical point-in-time-recovery snapshot of the system with this\n> logical slot.\n>\n> This logical slot was causing severe catalog bloat. We proceeded to drop\n> the logical slot which was over 12000 WAL segments behind. The physical\n> slot was only a few 100 segments behind and still in place.\n>\n> But now proceeding to VAC FULL the catalog tables did not recover any\n> bloat beyond the now-dropped logical slot. Eventually to our surprise, we\n> found that dropping the physical slot allowed us to recover the bloat.\n>\n> We saw in forensics after the fact that xmin of the physical slot equaled\n> the catalog_xmin of the logical slot. Is there some dependency here where\n> physical slots made of a system retain all transactions of logical slots it\n> contains as well? If so, could someone help us understand this, and is\n> there documentation around this? Is this by design?\n>\n\nI expect that you created the replica in a manner that preserved the\nlogical replication slot on it. You also had hot_standby_feedback enabled.\n\nPostgreSQL standbys send the global xmin and (in Pg10+) catalog_xmin to the\nupstream when hot_standby_feedback is enabled. If there's a slot holding\nthe catalog_xmin on the replica down, that'll be passed on via\nhot_standby_feedback to the upstream. On Pg 9.6 or older, or if the replica\nisn't using a physical replication slot, the catalog_xmin is treated as a\nregular xmin since there's nowhere in PGPROC or PGXACT to track the\nseparate catalog_xmin. If the standby uses a physical slot, then on pg10+\nthe catalog_xmin sent by the replica is stored as the catalog_xmin on the\nphysical slot instead.\n\nEither way, if you have hot_standby_feedback enabled on a standby, that\nfeedback includes the requirements of any replication slots on the standby.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Tue, 19 Nov 2019 08:23:30 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: physical slot xmin dependency on logical slot?"
},
{
"msg_contents": ">\n> I expect that you created the replica in a manner that preserved the\n> logical replication slot on it. You also had hot_standby_feedback enabled.\n>\n\nAs per both you and Andres' replies, we wanted the backup to have the\nlogical slots on it, because we wanted to allow decoding from the slots on\nour backup. However, what we should have done is drop the slot of the\nbackup on the master.\n\n\n> PostgreSQL standbys send the global xmin and (in Pg10+) catalog_xmin to\n> the upstream when hot_standby_feedback is enabled. If there's a slot\n> holding the catalog_xmin on the replica down, that'll be passed on via\n> hot_standby_feedback to the upstream. On Pg 9.6 or older, or if the replica\n> isn't using a physical replication slot, the catalog_xmin is treated as a\n> regular xmin since there's nowhere in PGPROC or PGXACT to track the\n> separate catalog_xmin. If the standby uses a physical slot, then on pg10+\n> the catalog_xmin sent by the replica is stored as the catalog_xmin on the\n> physical slot instead.\n>\n> Either way, if you have hot_standby_feedback enabled on a standby, that\n> feedback includes the requirements of any replication slots on the standby.\n>\n\nThank you for the thorough explanation. As I noted in my reply to Andres,\nwe routinely and intentionally create snapshots with replication slots\nintact (but we normally drop the slot on the master immediately), so our\nown use case is rare and it's not surprising that we don't find a thorough\nexplanation of this scenario in the docs.\n\nThanks,\nJeremy",
"msg_date": "Tue, 19 Nov 2019 08:13:49 -0600",
"msg_from": "Jeremy Finzel <finzelj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: physical slot xmin dependency on logical slot?"
}
] |
[
{
"msg_contents": "I (finally) noticed this morning on a server running PG12.1:\n\n< 2019-11-15 22:16:07.098 EST >PANIC: could not fsync file \"base/16491/1731839470.2\": No such file or directory\n< 2019-11-15 22:16:08.751 EST >LOG: checkpointer process (PID 27388) was terminated by signal 6: Aborted\n\n/dev/vdb on /var/lib/pgsql type ext4 (rw,relatime,seclabel,data=ordered)\nCentos 7.7 qemu/KVM\nLinux database 3.10.0-1062.1.1.el7.x86_64 #1 SMP Fri Sep 13 22:55:44 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux\nThere's no added tablespaces.\n\nCopying Thomas since I wonder if this is related:\n3eb77eba Refactor the fsync queue for wider use.\n\nI can't find any relation with filenode nor OID matching 1731839470, nor a file\nnamed like that (which is maybe no surprise, since it's exactly the issue\ncheckpointer had last week).\n\nA backup job would've started at 22:00 and probably would've run until 22:27,\nexcept that its backend was interrupted \"because of crash of another server\nprocess\". That uses pg_dump --snapshot.\n\nThis shows a gap of OIDs between 1721850297 and 1746136569; the tablenames\nindicate that would've been between 2019-11-15 01:30:03,192 and 04:31:19,348.\n|SELECT oid, relname FROM pg_class ORDER BY 1 DESC;\n\nAh, I found a maybe relevant log:\n|2019-11-15 22:15:59.592-05 | duration: 220283.831 ms statement: ALTER TABLE child.eric_enodeb_cell_201811 ALTER pmradiothpvolul\n\nSo we altered that table (and 100+ others) with a type-promoting alter,\nstarting at 2019-11-15 21:20:51,942. That involves DETACHing all but the most\nrecent partitions, altering the parent, and then iterating over historic\nchildren to ALTER and reATTACHing them. (We do this to avoid locking the table\nfor long periods, and to avoid worst-case disk usage).\n\nFYI, that server ran PG12.0 since Oct 7 with no issue.\nI installed pg12.1 at:\n$ ps -O lstart 27384\n PID STARTED S TTY TIME COMMAND\n27384 Fri Nov 15 08:13:08 2019 S ? 
00:05:54 /usr/pgsql-12/bin/postmaster -D /var/lib/pgsql/12/data/\n\nCore was generated by `postgres: checkpointer '.\nProgram terminated with signal 6, Aborted.\n\n(gdb) bt\n#0 0x00007efc9d8b3337 in raise () from /lib64/libc.so.6\n#1 0x00007efc9d8b4a28 in abort () from /lib64/libc.so.6\n#2 0x000000000087752a in errfinish (dummy=<optimized out>) at elog.c:552\n#3 0x000000000075c8ec in ProcessSyncRequests () at sync.c:398\n#4 0x0000000000734dd9 in CheckPointBuffers (flags=flags@entry=256) at bufmgr.c:2588\n#5 0x00000000005095e1 in CheckPointGuts (checkPointRedo=26082542473320, flags=flags@entry=256) at xlog.c:9006\n#6 0x000000000050ff86 in CreateCheckPoint (flags=flags@entry=256) at xlog.c:8795\n#7 0x00000000006e4092 in CheckpointerMain () at checkpointer.c:481\n#8 0x000000000051fcd5 in AuxiliaryProcessMain (argc=argc@entry=2, argv=argv@entry=0x7ffda8a24340) at bootstrap.c:461\n#9 0x00000000006ee680 in StartChildProcess (type=CheckpointerProcess) at postmaster.c:5392\n#10 0x00000000006ef9ca in reaper (postgres_signal_arg=<optimized out>) at postmaster.c:2973\n#11 <signal handler called>\n#12 0x00007efc9d972933 in __select_nocancel () from /lib64/libc.so.6\n#13 0x00000000004833d4 in ServerLoop () at postmaster.c:1668\n#14 0x00000000006f106f in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x1601280) at postmaster.c:1377\n#15 0x0000000000484cd3 in main (argc=3, argv=0x1601280) at main.c:228\n\nbt f:\n#3 0x000000000075c8ec in ProcessSyncRequests () at sync.c:398\n path = 
\"base/16491/1731839470.2\\000m\\000e\\001\\000\\000\\000\\000hЭи\\027\\000\\000\\032\\364q\\000\\000\\000\\000\\000\\251\\202\\214\\000\\000\\000\\000\\000\\004\\000\\000\\000\\000\\000\\000\\000\\251\\202\\214\\000\\000\\000\\000\\000@S`\\001\\000\\000\\000\\000\\000>\\242\\250\\375\\177\\000\\000\\002\\000\\000\\000\\000\\000\\000\\000\\340\\211b\\001\\000\\000\\000\\000\\002\\346\\241\\000\\000\\000\\000\\000\\200>\\242\\250\\375\\177\\000\\000\\225ኝ\\374~\\000\\000C\\000_US.UTF-8\\000\\374~\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\306ފ\\235\\374~\\000\\000LC_MESSAGES/postgres-12.mo\\000\\000\\000\\000\\000\\000\\001\\000\\000\\000\\000\\000\\000\\000.ފ\\235\\374~\"...\n failures = 1\n sync_in_progress = true\n hstat = {hashp = 0x1629e00, curBucket = 122, curEntry = 0x0}\n entry = 0x1658590\n absorb_counter = <optimized out>\n processed = 43\n\n\n\n",
"msg_date": "Tue, 19 Nov 2019 05:57:59 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "checkpointer: PANIC: could not fsync file: No such file or directory"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 12:58 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> < 2019-11-15 22:16:07.098 EST >PANIC: could not fsync file \"base/16491/1731839470.2\": No such file or directory\n> < 2019-11-15 22:16:08.751 EST >LOG: checkpointer process (PID 27388) was terminated by signal 6: Aborted\n>\n> /dev/vdb on /var/lib/pgsql type ext4 (rw,relatime,seclabel,data=ordered)\n> Centos 7.7 qemu/KVM\n> Linux database 3.10.0-1062.1.1.el7.x86_64 #1 SMP Fri Sep 13 22:55:44 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux\n> There's no added tablespaces.\n>\n> Copying Thomas since I wonder if this is related:\n> 3eb77eba Refactor the fsync queue for wider use.\n\nIt could be, since it changed some details about the way that queue\nworked, and another relevant change is:\n\n9ccdd7f6 PANIC on fsync() failure.\n\nPerhaps we should not panic if we failed to open (not fsync) the file,\nbut it's not the root problem here which is that somehow we thought we\nshould be fsyncing a file that had apparently been removed already\n(due to CLUSTER, VACUUM FULL, DROP, rewriting ALTER etc). Usually, if\na file is in the fsync queue and then is later removed, we handle that\nby searching the queue for cancellation messages (since one should\nalways be sent before the file is unlinked), and I think your core\nfile with \"failures = 1\" tells us that it didn't find a cancellation\nmessage. So this seems to indicate a problem, somewhere, in that\nprotocol. That could either be a defect in 3eb77eba or it could have\nbeen a pre-existing problem that became a bigger problem due to\n9ccdd7f6.\n\nLooking into it.\n\n> I can't find any relation with filenode nor OID matching 1731839470, nor a file\n> named like that (which is maybe no surprise, since it's exactly the issue\n> checkpointer had last week).\n>\n> A backup job would've started at 22:00 and probably would've run until 22:27,\n> except that its backend was interrupted \"because of crash of another server\n> process\". 
That uses pg_dump --snapshot.\n>\n> This shows a gap of OIDs between 1721850297 and 1746136569; the tablenames\n> indicate that would've been between 2019-11-15 01:30:03,192 and 04:31:19,348.\n> |SELECT oid, relname FROM pg_class ORDER BY 1 DESC;\n\nBy the way, it's relfilenode, not oid, that is used in these names\n(though they start out the same). In a rewrite, the relfilenode\nchanges but the oid stays the same.\n\n> Ah, I found a maybe relevant log:\n> |2019-11-15 22:15:59.592-05 | duration: 220283.831 ms statement: ALTER TABLE child.eric_enodeb_cell_201811 ALTER pmradiothpvolul\n>\n> So we altered that table (and 100+ others) with a type-promoting alter,\n> starting at 2019-11-15 21:20:51,942. That involves DETACHing all but the most\n> recent partitions, altering the parent, and then iterating over historic\n> children to ALTER and reATTACHing them. (We do this to avoid locking the table\n> for long periods, and to avoid worst-case disk usage).\n\nHmm.\n\n\n",
"msg_date": "Wed, 20 Nov 2019 09:26:53 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
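The oid-vs-relfilenode distinction Thomas points out is easy to demonstrate; a hedged sketch (the table name is made up for illustration):

```sql
CREATE TABLE rewrite_demo (i int);

-- Initially a table's relfilenode typically matches its oid.
SELECT oid, relfilenode, pg_relation_filepath(oid)
FROM pg_class WHERE relname = 'rewrite_demo';

-- A table-rewriting ALTER allocates a new relfilenode; the oid stays
-- the same, and the old base/<db>/<relfilenode> segments are what the
-- checkpointer must be told to forget before they are unlinked.
ALTER TABLE rewrite_demo ALTER i TYPE bigint;

SELECT oid, relfilenode, pg_relation_filepath(oid)
FROM pg_class WHERE relname = 'rewrite_demo';
```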
{
"msg_contents": "On Wed, Nov 20, 2019 at 09:26:53AM +1300, Thomas Munro wrote:\n> Perhaps we should not panic if we failed to open (not fsync) the file,\n> but it's not the root problem here which is that somehow we thought we\n> should be fsyncing a file that had apparently been removed already\n> (due to CLUSTER, VACUUM FULL, DROP, rewriting ALTER etc).\n\nFYI, I *do* have scripts which CLUSTER and(or) VAC FULL (and REINDEX), but they\nwere disabled due to crash in 12.0 (which was resolved in 12.1) and it wouldn't\nhave run anyway, since they run after the backup script (which failed) in a\nshell with set -e.\n\nI think TRUNCATE does that too..but I don't think that any processes which do\nTRUNCATE are running on that server.\n\nThanks,\nJustin\n\n\n",
"msg_date": "Tue, 19 Nov 2019 16:49:10 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 04:49:10PM -0600, Justin Pryzby wrote:\n> On Wed, Nov 20, 2019 at 09:26:53AM +1300, Thomas Munro wrote:\n> > Perhaps we should not panic if we failed to open (not fsync) the file,\n> > but it's not the root problem here which is that somehow we thought we\n> > should be fsyncing a file that had apparently been removed already\n> > (due to CLUSTER, VACUUM FULL, DROP, rewriting ALTER etc).\n\nNote, the ALTER was (I think) building index in a parallel process:\n\n 2019-11-15 22:16:08.752-05 | 5976 | terminating connection because of crash of another server process\n 2019-11-15 22:16:08.751-05 | 27384 | checkpointer process (PID 27388) was terminated by signal 6: Aborted\n 2019-11-15 22:16:08.751-05 | 27384 | terminating any other active server processes\n 2019-11-15 22:16:07.098-05 | 27388 | could not fsync file \"base/16491/1731839470.2\": No such file or directory\n 2019-11-15 22:15:59.592-05 | 19860 | duration: 220283.831 ms statement: ALTER TABLE child.eric_enodeb_cell_201811 ALTER pmradiothpvolulscell TYPE integer USING pmradiothpvolulscell::integer\n 2019-11-15 22:15:59.459-05 | 19860 | temporary file: path \"base/pgsql_tmp/pgsql_tmp19860.82.sharedfileset/1.0\", size 5144576\n 2019-11-15 22:15:59.458-05 | 19860 | temporary file: path \"base/pgsql_tmp/pgsql_tmp19860.82.sharedfileset/2.0\", size 6463488\n 2019-11-15 22:15:59.456-05 | 19860 | temporary file: path \"base/pgsql_tmp/pgsql_tmp19860.82.sharedfileset/0.0\", size 4612096\n\nFYI, that table is *currently* (5 days later):\nts=# \\dti+ child.eric_enodeb_cell_201811*\n child | eric_enodeb_cell_201811 | table | telsasoft | | 2595 MB |\n child | eric_enodeb_cell_201811_idx | index | telsasoft | eric_enodeb_cell_201811 | 120 kB |\n child | eric_enodeb_cell_201811_site_idx | index | telsasoft | eric_enodeb_cell_201811 | 16 MB |\n\nI don't know if that table is likely to be the one with relfilenode 1731839470\n(but it certainly wasn't its index), or if that was maybe a 
table (or index)\nfrom an earlier ALTER. I tentatively think we wouldn't have had any other\ntables being dropped, partitions pruned or maintenance commands running.\n\nCheckpoint logs for good measure:\n 2019-11-15 22:18:26.168-05 | 10388 | checkpoint complete: wrote 2915 buffers (3.0%); 0 WAL file(s) added, 0 removed, 18 recycled; write=30.022 s, sync=0.472 s, total=32.140 s; sync files=107, longest=0.364 s, average=0.004 s; \ndistance=297471 kB, estimate=297471 kB\n 2019-11-15 22:17:54.028-05 | 10388 | checkpoint starting: time\n 2019-11-15 22:16:53.753-05 | 10104 | checkpoint complete: wrote 98275 buffers (100.0%); 0 WAL file(s) added, 0 removed, 43 recycled; write=11.040 s, sync=0.675 s, total=11.833 s; sync files=84, longest=0.335 s, average=0.008 s\n; distance=698932 kB, estimate=698932 kB\n 2019-11-15 22:16:41.921-05 | 10104 | checkpoint starting: end-of-recovery immediate\n 2019-11-15 22:16:08.751-05 | 27384 | checkpointer process (PID 27388) was terminated by signal 6: Aborted\n 2019-11-15 22:15:33.03-05 | 27388 | checkpoint starting: time\n 2019-11-15 22:15:03.62-05 | 27388 | checkpoint complete: wrote 5436 buffers (5.5%); 0 WAL file(s) added, 0 removed, 45 recycled; write=28.938 s, sync=0.355 s, total=29.711 s; sync files=22, longest=0.174 s, average=0.016 s; d\nistance=740237 kB, estimate=740237 kB\n 2019-11-15 22:14:33.908-05 | 27388 | checkpoint starting: time\n\nI was trying to reproduce what was happening:\nset -x; psql postgres -txc \"DROP TABLE IF EXISTS t\" -c \"CREATE TABLE t(i int unique); INSERT INTO t SELECT generate_series(1,999999)\"; echo \"begin;SELECT pg_export_snapshot(); SELECT pg_sleep(9)\" |psql postgres -At >/tmp/snapshot& sleep 3; snap=`sed \"1{/BEGIN/d}; q\" /tmp/snapshot`; PGOPTIONS='-cclient_min_messages=debug' psql postgres -txc \"ALTER TABLE t ALTER i TYPE bigint\" -c CHECKPOINT; pg_dump -d postgres -t t --snap=\"$snap\" |head -44;\n\nUnder v12, with or without the CHECKPOINT command, it fails:\n|pg_dump: error: query failed: 
ERROR: cache lookup failed for index 0\nBut under v9.5.2 (which I found quickly), without CHECKPOINT, it instead fails like:\n|pg_dump: [archiver (db)] query failed: ERROR: cache lookup failed for index 16391\nWith the CHECKPOINT command, 9.5.2 works, but I don't see why it should be\nneeded, or why it would behave differently (or if it's related to this crash).\n\n\n",
"msg_date": "Tue, 19 Nov 2019 19:22:26 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 07:22:26PM -0600, Justin Pryzby wrote:\n> I was trying to reproduce what was happening:\n> set -x; psql postgres -txc \"DROP TABLE IF EXISTS t\" -c \"CREATE TABLE t(i int unique); INSERT INTO t SELECT generate_series(1,999999)\"; echo \"begin;SELECT pg_export_snapshot(); SELECT pg_sleep(9)\" |psql postgres -At >/tmp/snapshot& sleep 3; snap=`sed \"1{/BEGIN/d}; q\" /tmp/snapshot`; PGOPTIONS='-cclient_min_messages=debug' psql postgres -txc \"ALTER TABLE t ALTER i TYPE bigint\" -c CHECKPOINT; pg_dump -d postgres -t t --snap=\"$snap\" |head -44;\n> \n> Under v12, with or without the CHECKPOINT command, it fails:\n> |pg_dump: error: query failed: ERROR: cache lookup failed for index 0\n> But under v9.5.2 (which I found quickly), without CHECKPOINT, it instead fails like:\n> |pg_dump: [archiver (db)] query failed: ERROR: cache lookup failed for index 16391\n> With the CHECKPOINT command, 9.5.2 works, but I don't see why it should be\n> needed, or why it would behave differently (or if it's related to this crash).\n\nActually, I think that's at least related to documented behavior:\n\nhttps://www.postgresql.org/docs/12/mvcc-caveats.html\n|Some DDL commands, currently only TRUNCATE and the table-rewriting forms of ALTER TABLE, are not MVCC-safe. This means that after the truncation or rewrite commits, the table will appear empty to concurrent transactions, if they are using a snapshot taken before the DDL command committed.\n\nI don't know why CHECKPOINT allows it to work under 9.5, or if it's even\nrelated to the PANIC ..\n\nJustin\n\n\n",
"msg_date": "Wed, 20 Nov 2019 19:07:03 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
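Spelled out over separate sessions, the snapshot-export repro compressed into the shell one-liner above looks roughly like this (the long form of pg_dump's option is --snapshot; session labels are illustrative):

```sql
-- Session 1: export a snapshot and hold the transaction open,
-- which keeps the exported snapshot valid.
BEGIN;
SELECT pg_export_snapshot();  -- returns a snapshot identifier

-- Session 2: rewrite the table while session 1 is still open.
ALTER TABLE t ALTER i TYPE bigint;

-- Shell: dump using the pre-rewrite snapshot; per the MVCC caveats,
-- a table-rewriting ALTER is not MVCC-safe with respect to that
-- older snapshot:
--   pg_dump -d postgres -t t --snapshot=<identifier from session 1>
```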
{
"msg_contents": "On Thu, 21 Nov 2019 at 09:07, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Tue, Nov 19, 2019 at 07:22:26PM -0600, Justin Pryzby wrote:\n> > I was trying to reproduce what was happening:\n> > set -x; psql postgres -txc \"DROP TABLE IF EXISTS t\" -c \"CREATE TABLE t(i\n> int unique); INSERT INTO t SELECT generate_series(1,999999)\"; echo\n> \"begin;SELECT pg_export_snapshot(); SELECT pg_sleep(9)\" |psql postgres -At\n> >/tmp/snapshot& sleep 3; snap=`sed \"1{/BEGIN/d}; q\" /tmp/snapshot`;\n> PGOPTIONS='-cclient_min_messages=debug' psql postgres -txc \"ALTER TABLE t\n> ALTER i TYPE bigint\" -c CHECKPOINT; pg_dump -d postgres -t t --snap=\"$snap\"\n> |head -44;\n> >\n> > Under v12, with or without the CHECKPOINT command, it fails:\n> > |pg_dump: error: query failed: ERROR: cache lookup failed for index 0\n> > But under v9.5.2 (which I found quickly), without CHECKPOINT, it instead\n> fails like:\n> > |pg_dump: [archiver (db)] query failed: ERROR: cache lookup failed for\n> index 16391\n> > With the CHECKPOINT command, 9.5.2 works, but I don't see why it should\n> be\n> > needed, or why it would behave differently (or if it's related to this\n> crash).\n>\n> Actually, I think that's at least related to documented behavior:\n>\n> https://www.postgresql.org/docs/12/mvcc-caveats.html\n> |Some DDL commands, currently only TRUNCATE and the table-rewriting forms\n> of ALTER TABLE, are not MVCC-safe. This means that after the truncation or\n> rewrite commits, the table will appear empty to concurrent transactions, if\n> they are using a snapshot taken before the DDL command committed.\n>\n> I don't know why CHECKPOINT allows it to work under 9.5, or if it's even\n> related to the PANIC ..\n\n\nThe PANIC is a defense against potential corruptions that can be caused by\nsome kinds of disk errors. It's likely that we used to just ERROR and\nretry, then the retry would succeed without getting upset.\n\nfsync_fname() is supposed to ignore errors for files that cannot be opened.\nBut that same message may be emitted by a number of other parts of the\ncode, and it looks like you didn't have log_error_verbosity = verbose so we\ndon't have file/line info.\n\nThe only other place I see that emits that error where a relation path\ncould be a valid argument is in rewriteheap.c\nin logical_end_heap_rewrite(). That calls the vfd layer's FileSync() and\nassumes that any failure is a fsync() syscall failure. But FileSync() can\nreturn failure if we fail to reopen the underlying file managed by the vfd\ntoo, per FileAccess().\n\nWould there be a legitimate case where a logical rewrite file mapping could\nvanish without that being a problem? If so, we should probably be more\ntolerant there.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 22 Nov 2019 13:17:02 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
{
"msg_contents": "I looked and found a new \"hint\".\n\nOn Tue, Nov 19, 2019 at 05:57:59AM -0600, Justin Pryzby wrote:\n> < 2019-11-15 22:16:07.098 EST >PANIC: could not fsync file \"base/16491/1731839470.2\": No such file or directory\n> < 2019-11-15 22:16:08.751 EST >LOG: checkpointer process (PID 27388) was terminated by signal 6: Aborted\n\nAn earlier segment of that relation had been opened successfully and was \n*still* open:\n\n$ sudo grep 1731839470 /var/spool/abrt/ccpp-2019-11-15-22:16:08-27388/open_fds \n63:/var/lib/pgsql/12/data/base/16491/1731839470\n\nFor context:\n\n$ sudo grep / /var/spool/abrt/ccpp-2019-11-15-22:16:08-27388/open_fds |tail -3\n61:/var/lib/pgsql/12/data/base/16491/1757077748\n62:/var/lib/pgsql/12/data/base/16491/1756223121.2\n63:/var/lib/pgsql/12/data/base/16491/1731839470\n\nSo this may be an issue only with relations larger than one segment (but that interpretation \ncould also be very naive).\n\n\n\n",
"msg_date": "Mon, 25 Nov 2019 22:21:24 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
{
"msg_contents": "On Tue, Nov 26, 2019 at 5:21 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I looked and found a new \"hint\".\n>\n> On Tue, Nov 19, 2019 at 05:57:59AM -0600, Justin Pryzby wrote:\n> > < 2019-11-15 22:16:07.098 EST >PANIC: could not fsync file \"base/16491/1731839470.2\": No such file or directory\n> > < 2019-11-15 22:16:08.751 EST >LOG: checkpointer process (PID 27388) was terminated by signal 6: Aborted\n>\n> An earlier segment of that relation had been opened successfully and was\n> *still* open:\n>\n> $ sudo grep 1731839470 /var/spool/abrt/ccpp-2019-11-15-22:16:08-27388/open_fds\n> 63:/var/lib/pgsql/12/data/base/16491/1731839470\n>\n> For context:\n>\n> $ sudo grep / /var/spool/abrt/ccpp-2019-11-15-22:16:08-27388/open_fds |tail -3\n> 61:/var/lib/pgsql/12/data/base/16491/1757077748\n> 62:/var/lib/pgsql/12/data/base/16491/1756223121.2\n> 63:/var/lib/pgsql/12/data/base/16491/1731839470\n>\n> So this may be an issue only with relations larger than one segment (but that interpretation\n> could also be very naive).\n\nFTR I have been trying to reproduce this but failing so far. I'm\nplanning to dig some more in the next couple of days. Yeah, it's a .2\nfile, which means that it's one that would normally be unlinked after\nyou commit your transaction (unlike a no-suffix file, which would\nnormally be dropped at the next checkpoint after the commit, as our\nstrategy to prevent the relfilenode from being reused before the next\ncheckpoint cycle), but should normally have had a SYNC_FORGET_REQUEST\nenqueued for it first. So the question is, how did it come to pass\nthat a .2 file was ENOENT but there was no forget request? Difficult,\ngiven the definition of mdunlinkfork(). I wondered if something was\ngoing wrong in queue compaction or something like that, but I don't\nsee it. 
I need to dig into the exact flow with the ALTER case to\nsee if there is something I'm missing there, and perhaps try\nreproducing it with a tiny segment size to exercise some more\nmultisegment-related code paths.\n\n\n",
"msg_date": "Tue, 26 Nov 2019 17:55:55 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
{
"msg_contents": "This same crash occurred on a 2nd server.\nAlso qemu/KVM, but this time on a 2ndary ZFS tablespace which (fails to) include the missing relfilenode.\nLinux database7 3.10.0-957.10.1.el7.x86_64 #1 SMP Mon Mar 18 15:06:45 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux\n\nThis is postgresql12-12.1-1PGDG.rhel7.x86_64 (same as first crash), running since:\n|30515 Tue Nov 19 10:04:33 2019 S ? 00:09:54 /usr/pgsql-12/bin/postmaster -D /var/lib/pgsql/12/data/\n\nBefore that, this server ran v12.0 since Oct 30 (without crashing).\n\nIn this case, the pg_dump --snap finished and released its snapshot at 21:50,\nand there were no ALTERed tables. I see a temp file written since the previous\ncheckpoint, but not by a parallel worker, as in the previous server's crash.\n\nThe crash happened while reindexing, though. The \"DROP INDEX CONCURRENTLY\" is\nfrom pg_repack -i, and completed successfully, but is followed immediately by\nthe abort log. The following \"REINDEX toast...\" failed. In this case, I\n*guess* that the missing filenode is due to a dropped index (570627937 or\notherwise). I don't see any other CLUSTER, VACUUM FULL, DROP, TRUNCATE or\nALTER within that checkpoint interval (note, we have 1 minute checkpoints).\n\nNote, I double checked on the first server which crashed, it definitely wasn't\nrunning pg_repack or the reindex script, since I removed pg_repack12 from our\nservers until 12.1 was installed, to avoid the \"concurrently\" progress\nreporting crash fixed at 1cd5bc3c. So I think ALTER TABLE TYPE and REINDEX can\nboth trigger this crash, at least on v12.1.\n\nNote I actually have *full logs*, which I've now saved. 
But here's an excerpt:\n\npostgres=# SELECT log_time, message FROM ckpt_crash WHERE log_time BETWEEN '2019-11-26 23:40:20' AND '2019-11-26 23:48:58' AND user_name IS NULL ORDER BY 1;\n 2019-11-26 23:40:20.139-05 | checkpoint starting: time\n 2019-11-26 23:40:50.069-05 | checkpoint complete: wrote 11093 buffers (5.6%); 0 WAL file(s) added, 0 removed, 12 recycled; write=29.885 s, sync=0.008 s, total=29.930 s; sync files=71, longest=0.001 s, average=0.000 s; distance\n=193388 kB, estimate=550813 kB\n 2019-11-26 23:41:16.234-05 | automatic analyze of table \"postgres.public.postgres_log_2019_11_26_2300\" system usage: CPU: user: 3.00 s, system: 0.19 s, elapsed: 10.92 s\n 2019-11-26 23:41:20.101-05 | checkpoint starting: time\n 2019-11-26 23:41:50.009-05 | could not fsync file \"pg_tblspc/16401/PG_12_201909212/16460/973123799.10\": No such file or directory\n 2019-11-26 23:42:04.397-05 | checkpointer process (PID 30560) was terminated by signal 6: Aborted\n 2019-11-26 23:42:04.397-05 | terminating any other active server processes\n 2019-11-26 23:42:04.397-05 | terminating connection because of crash of another server process\n 2019-11-26 23:42:04.42-05 | terminating connection because of crash of another server process\n 2019-11-26 23:42:04.493-05 | all server processes terminated; reinitializing\n 2019-11-26 23:42:05.651-05 | database system was interrupted; last known up at 2019-11-27 00:40:50 -04\n 2019-11-26 23:47:30.404-05 | database system was not properly shut down; automatic recovery in progress\n 2019-11-26 23:47:30.435-05 | redo starts at 3450/1B202938\n 2019-11-26 23:47:54.501-05 | redo done at 3450/205CE960\n 2019-11-26 23:47:54.501-05 | invalid record length at 3450/205CEA18: wanted 24, got 0\n 2019-11-26 23:47:54.567-05 | checkpoint starting: end-of-recovery immediate\n 2019-11-26 23:47:57.365-05 | checkpoint complete: wrote 3287 buffers (1.7%); 0 WAL file(s) added, 0 removed, 5 recycled; write=2.606 s, sync=0.183 s, total=2.798 s; sync files=145, 
longest=0.150 s, average=0.001 s; distance=85\n808 kB, estimate=85808 kB\n 2019-11-26 23:47:57.769-05 | database system is ready to accept connections\n 2019-11-26 23:48:57.774-05 | checkpoint starting: time\n\n< 2019-11-27 00:42:04.342 -04 postgres >LOG: duration: 13.028 ms statement: DROP INDEX CONCURRENTLY \"child\".\"index_570627937\"\n< 2019-11-27 00:42:04.397 -04 >LOG: checkpointer process (PID 30560) was terminated by signal 6: Aborted\n< 2019-11-27 00:42:04.397 -04 >LOG: terminating any other active server processes\n< 2019-11-27 00:42:04.397 -04 >WARNING: terminating connection because of crash of another server process\n< 2019-11-27 00:42:04.397 -04 >DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n< 2019-11-27 00:42:04.397 -04 >HINT: In a moment you should be able to reconnect to the database and repeat your command.\n...\n< 2019-11-27 00:42:04.421 -04 postgres >STATEMENT: begin; LOCK TABLE child.ericsson_sgsn_ss7_remote_sp_201911 IN SHARE MODE;REINDEX INDEX pg_toast.pg_toast_570627929_index;commit\n\nHere's all the nondefault settings which seem plausibly relevant or interesting:\n autovacuum_analyze_scale_factor | 0.005 | | configuration file\n autovacuum_analyze_threshold | 2 | | configuration file\n checkpoint_timeout | 60 | s | configuration file\n max_files_per_process | 1000 | | configuration file\n max_stack_depth | 2048 | kB | environment variable\n max_wal_size | 4096 | MB | configuration file\n min_wal_size | 4096 | MB | configuration file\n shared_buffers | 196608 | 8kB | configuration file\n shared_preload_libraries | pg_stat_statements | | configuration file\n wal_buffers | 2048 | 8kB | override\n wal_compression | on | | configuration file\n wal_segment_size | 16777216 | B | override\n\n(gdb) bt\n#0 0x00007f07c0070207 in raise () from /lib64/libc.so.6\n#1 0x00007f07c00718f8 in abort () from 
/lib64/libc.so.6\n#2 0x000000000087752a in errfinish (dummy=<optimized out>) at elog.c:552\n#3 0x000000000075c8ec in ProcessSyncRequests () at sync.c:398\n#4 0x0000000000734dd9 in CheckPointBuffers (flags=flags@entry=256) at bufmgr.c:2588\n#5 0x00000000005095e1 in CheckPointGuts (checkPointRedo=57518713529016, flags=flags@entry=256) at xlog.c:9006\n#6 0x000000000050ff86 in CreateCheckPoint (flags=flags@entry=256) at xlog.c:8795\n#7 0x00000000006e4092 in CheckpointerMain () at checkpointer.c:481\n#8 0x000000000051fcd5 in AuxiliaryProcessMain (argc=argc@entry=2, argv=argv@entry=0x7ffd82122400) at bootstrap.c:461\n#9 0x00000000006ee680 in StartChildProcess (type=CheckpointerProcess) at postmaster.c:5392\n#10 0x00000000006ef9ca in reaper (postgres_signal_arg=<optimized out>) at postmaster.c:2973\n#11 <signal handler called>\n#12 0x00007f07c012ef53 in __select_nocancel () from /lib64/libc.so.6\n#13 0x00000000004833d4 in ServerLoop () at postmaster.c:1668\n#14 0x00000000006f106f in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x19d4280) at postmaster.c:1377\n#15 0x0000000000484cd3 in main (argc=3, argv=0x19d4280) at main.c:228\n\n#3 0x000000000075c8ec in ProcessSyncRequests () at sync.c:398\n path = \"pg_tblspc/16401/PG_12_201909212/16460/973123799.10\", '\\000' <repeats 14 times>, ...\n failures = 1\n sync_in_progress = true\n hstat = {hashp = 0x19fd2f0, curBucket = 1443, curEntry = 0x0}\n entry = 0x1a61260\n absorb_counter = <optimized out>\n processed = 23\n sync_start = {tv_sec = 21582125, tv_nsec = 303557162}\n sync_end = {tv_sec = 21582125, tv_nsec = 303536006}\n sync_diff = <optimized out>\n elapsed = <optimized out>\n longest = 1674\n total_elapsed = 7074\n __func__ = \"ProcessSyncRequests\"\n\n\n",
"msg_date": "Wed, 27 Nov 2019 00:53:13 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
{
"msg_contents": "On Wed, Nov 27, 2019 at 7:53 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> 2019-11-26 23:41:50.009-05 | could not fsync file \"pg_tblspc/16401/PG_12_201909212/16460/973123799.10\": No such file or directory\n\nI managed to reproduce this (see below). I think I know what the\nproblem is: mdsyncfiletag() uses _mdfd_getseg() to open the segment to\nbe fsync'd, but that function opens all segments up to the one you\nrequested, so if a lower-numbered segment has already been unlinked,\nit can fail. Usually that's unlikely because it's hard to get the\nrequest queue to fill up and therefore hard to split up the cancel\nrequests for all the segments for a relation, but your workload and\nthe repro below do it. In fact, the path it shows in the error\nmessage is not even the problem file, that's the one it really wanted,\nbut first it was trying to open lower-numbered ones. I can see a\ncouple of solutions to the problem (unlink in reverse order, send all\nthe forget messages first before unlinking anything, or go back to\nusing a single atomic \"forget everything for this rel\" message instead\nof per-segment messages), but I'll have to think more about that\ntomorrow.\n\n=== repro ===\n\nRecompile with RELSEG_SIZE 2 in pg_config.h. Run with\ncheckpoint_timeout=30s and shared_buffers=128kB. Then:\n\ncreate table t (i int primary key);\ncluster t using t_pkey;\ninsert into t select generate_series(1, 10000);\n\nSession 1:\ncluster t;\n\\watch 1\n\nSession 2:\nupdate t set i = i;\n\\watch 1.1\n\n\n",
"msg_date": "Fri, 29 Nov 2019 03:13:23 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
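The diagnosis above, that fsyncing segment N first opens every lower-numbered segment, so an already-unlinked lower segment turns into a spurious ENOENT even though the requested file still exists, can be sketched as a toy model. This is illustrative Python, not PostgreSQL's actual md.c; the relfilenode and naming scheme are just made up to mirror the thread:

```python
import os, tempfile

def fsync_via_getseg(dirpath, relfilenode, segno):
    """Reach segment `segno` only by opening every lower-numbered segment
    first, mimicking the _mdfd_getseg() walk described above."""
    fds = []
    try:
        for n in range(segno + 1):
            name = str(relfilenode) if n == 0 else f"{relfilenode}.{n}"
            fds.append(os.open(os.path.join(dirpath, name), os.O_RDWR))
        os.fsync(fds[-1])
    finally:
        for fd in fds:
            os.close(fd)

with tempfile.TemporaryDirectory() as d:
    for name in ("973123799", "973123799.1", "973123799.2"):
        open(os.path.join(d, name), "w").close()
    os.unlink(os.path.join(d, "973123799.1"))   # a lower segment is gone

    try:
        fsync_via_getseg(d, 973123799, 2)       # ".2" itself still exists
        outcome = "ok"
    except FileNotFoundError:
        outcome = "ENOENT"                      # fails opening ".1" first
print(outcome)
```

Note how the reported failure names a file that is not the one actually missing, matching the observation that the path in the PANIC message "is not even the problem file".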
{
"msg_contents": "On Fri, Nov 29, 2019 at 3:13 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Nov 27, 2019 at 7:53 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > 2019-11-26 23:41:50.009-05 | could not fsync file \"pg_tblspc/16401/PG_12_201909212/16460/973123799.10\": No such file or directory\n>\n> I managed to reproduce this (see below). I think I know what the\n> problem is: mdsyncfiletag() uses _mdfd_getseg() to open the segment to\n> be fsync'd, but that function opens all segments up to the one you\n> requested, so if a lower-numbered segment has already been unlinked,\n> it can fail. Usually that's unlikely because it's hard to get the\n> request queue to fill up and therefore hard to split up the cancel\n> requests for all the segments for a relation, but your workload and\n> the repro below do it. In fact, the path it shows in the error\n> message is not even the problem file, that's the one it really wanted,\n> but first it was trying to open lower-numbered ones. I can see a\n> couple of solutions to the problem (unlink in reverse order, send all\n> the forget messages first before unlinking anything, or go back to\n> using a single atomic \"forget everything for this rel\" message instead\n> of per-segment messages), but I'll have to think more about that\n> tomorrow.\n\nHere is a patch that fixes the problem by sending all the\nSYNC_FORGET_REQUEST messages up front.",
"msg_date": "Fri, 29 Nov 2019 10:50:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
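The ordering change in the patch can be illustrated with a small simulation. These are hypothetical Python stand-ins for the sync request queue and segment files, not the real checkpointer code: interleaving per-segment forget and unlink lets a checkpoint that fires mid-drop see a pending sync request for a high segment whose lower segments are already gone, while sending every forget request first does not.

```python
def run_checkpoint(sync_queue, files):
    """Process pending per-segment sync requests; fsyncing segment `seg`
    must open segments 0..seg (the _mdfd_getseg() behaviour)."""
    for seg in sorted(sync_queue):
        for n in range(seg + 1):
            if n not in files:
                raise FileNotFoundError(f"could not fsync segment {seg}")
    sync_queue.clear()

def drop_interleaved(sync_queue, files):        # pre-fix ordering
    for seg in sorted(files):
        sync_queue.discard(seg)                 # forget this segment...
        files.discard(seg)                      # ...then unlink it
        if seg == 0:
            run_checkpoint(sync_queue, files)   # checkpoint fires mid-drop

def drop_forget_first(sync_queue, files):       # post-fix ordering
    for seg in sorted(files):
        sync_queue.discard(seg)                 # all forget requests up front
    for seg in sorted(files):
        files.discard(seg)
        if seg == 0:
            run_checkpoint(sync_queue, files)   # nothing left to fsync

try:
    drop_interleaved({0, 1, 2}, {0, 1, 2})
    buggy = "ok"
except FileNotFoundError:
    buggy = "PANIC"

drop_forget_first({0, 1, 2}, {0, 1, 2})         # completes without error
print(buggy)
```

In the interleaved ordering, the checkpoint still holds sync requests for segments 1 and 2 while segment 0 is already unlinked, which is the missing-lower-segment case from the repro.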
{
"msg_contents": "On Fri, Nov 29, 2019 at 10:50:36AM +1300, Thomas Munro wrote:\n> On Fri, Nov 29, 2019 at 3:13 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Wed, Nov 27, 2019 at 7:53 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > 2019-11-26 23:41:50.009-05 | could not fsync file \"pg_tblspc/16401/PG_12_201909212/16460/973123799.10\": No such file or directory\n> >\n> > I managed to reproduce this (see below). I think I know what the\n> \n> Here is a patch that fixes the problem by sending all the\n> SYNC_FORGET_REQUEST messages up front.\n\nI managed to reproduce it too, but my recipe is crummy enough that I'm not even\ngoing to send it..\n\nI confirmed that patch also seems to work for my worse recipe.\n\nThanks,\nJustin\n\n\n",
"msg_date": "Thu, 28 Nov 2019 16:14:50 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 11:14 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Fri, Nov 29, 2019 at 10:50:36AM +1300, Thomas Munro wrote:\n> > On Fri, Nov 29, 2019 at 3:13 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > On Wed, Nov 27, 2019 at 7:53 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > 2019-11-26 23:41:50.009-05 | could not fsync file \"pg_tblspc/16401/PG_12_201909212/16460/973123799.10\": No such file or directory\n> > >\n> > > I managed to reproduce this (see below). I think I know what the\n> >\n> > Here is a patch that fixes the problem by sending all the\n> > SYNC_FORGET_REQUEST messages up front.\n>\n> I managed to reproduce it too, but my recipe is crummy enough that I'm not even\n> going to send it..\n>\n> I confirmed that patch also seems to work for my worse recipe.\n\nThanks.\n\nOne thing I'm wondering about is what happens if you encounter EPERM,\nEIO etc while probing the existing files, so that we give up early and\ndon't deal with some higher numbered files.\n\n(1) We'll still have unlinked the lower numbered files, and we'll\nleave the higher numbered files where they are, which might confuse\nmatters if the relfilenode number is later recycled so the zombie\nsegments appear to belong to the new relfilenode. That is\nlongstanding PostgreSQL behaviour not changed by commit 3eb77eba or\nthis patch IIUC (despite moving a few things around), and I guess it's\nunlikely to bite you considering all the coincidences required, but if\nthere's a transient EPERM (think Windows virus checker opening files\nwithout the shared read flag), it's not inconceivable. 
One solution\nto that would be not to queue the unlink request for segment 0 if\nanything goes wrong while unlinking the other segments (in other\nwords: if you can't unlink all the segments, deliberately leak segment\n0 and thus the *whole relfilenode*, not just the problem file(s)).\n\n(2) Even with the fix I just proposed, if you give up early due to\nEPERM, EIO etc, there might still be sync requests queued for high\nnumbered segments, so you could reach the PANIC case. It's likely\nthat, once you've hit such an error, the checkpointer is going to be\npanicking anyway when it starts seeing similar errors, but still, it'd\nbe possible to do better here (especially if the error was transient\nand short lived). If we don't accept that behaviour, we could switch\n(back) to a single cancel message that can whack every request\nrelating to the relation (essentially as it was in 11, though it\nrequires a bit more work for the new design), or stop using\n_mdfd_getseg() for this so that you can remove segments independently\nwithout worrying about sync requests for other segments (it was\nactually like that in an earlier version of the patch for commit\n3eb77eba, but someone complained that it didn't benefit from fd\ncaching).\n\n\n",
"msg_date": "Fri, 29 Nov 2019 12:34:47 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
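Option (1) above, deliberately leaking segment 0 (and with it the whole relfilenode) when any other segment cannot be unlinked, could be sketched roughly as follows. This is a simplified, hypothetical sketch; the real mdunlinkfork() has many more concerns (forks, WAL, the request queue):

```python
import os, tempfile

def unlink_rel_segments(paths_by_segno):
    """paths_by_segno maps segment number -> path.  Unlink higher segments
    first; only remove segment 0 -- which frees the relfilenode for reuse --
    if everything else went away cleanly.  Returns True if fully removed."""
    ok = True
    for segno in sorted(paths_by_segno, reverse=True):
        if segno == 0:
            continue
        try:
            os.unlink(paths_by_segno[segno])
        except OSError:
            ok = False              # transient EPERM/EIO etc.: keep segment 0
    if ok:
        os.unlink(paths_by_segno[0])
    return ok

with tempfile.TemporaryDirectory() as d:
    # clean case: every segment unlinks, segment 0 included
    rel = {0: os.path.join(d, "16384"), 1: os.path.join(d, "16384.1")}
    for p in rel.values():
        open(p, "w").close()
    fully = unlink_rel_segments(rel)

    # failure case: segment 1 cannot be unlinked, so segment 0 is leaked
    rel2 = {0: os.path.join(d, "16385"), 1: os.path.join(d, "missing.1")}
    open(rel2[0], "w").close()
    partial = unlink_rel_segments(rel2)
    leaked = os.path.exists(rel2[0])
print(fully, partial, leaked)
```

Keeping segment 0 around prevents the relfilenode from being recycled while zombie higher segments still exist, which is the corruption hazard described in point (1).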
{
"msg_contents": "On Fri, Nov 29, 2019 at 12:34 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> ... or stop using\n> _mdfd_getseg() for this so that you can remove segments independently\n> without worrying about sync requests for other segments (it was\n> actually like that in an earlier version of the patch for commit\n> 3eb77eba, but someone complained that it didn't benifit from fd\n> caching).\n\nNot sure which approach I prefer yet, but here's a patch showing that\nalternative.",
"msg_date": "Sat, 30 Nov 2019 10:57:24 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
{
"msg_contents": "On Sat, Nov 30, 2019 at 10:57 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Nov 29, 2019 at 12:34 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > ... or stop using\n> > _mdfd_getseg() for this so that you can remove segments independently\n> > without worrying about sync requests for other segments (it was\n> > actually like that in an earlier version of the patch for commit\n> > 3eb77eba, but someone complained that it didn't benifit from fd\n> > caching).\n>\n> Not sure which approach I prefer yet, but here's a patch showing that\n> alternative.\n\nHere's a better version: it uses the existing fd if we have it already\nin md_seg_fds, but opens and closes a transient one if not.",
"msg_date": "Fri, 13 Dec 2019 17:41:56 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 5:41 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here's a better version: it uses the existing fd if we have it already\n> in md_seg_fds, but opens and closes a transient one if not.\n\nPushed.\n\n\n",
"msg_date": "Sat, 14 Dec 2019 16:49:22 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
{
"msg_contents": "On Sat, Dec 14, 2019 at 4:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Dec 13, 2019 at 5:41 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Here's a better version: it uses the existing fd if we have it already\n> > in md_seg_fds, but opens and closes a transient one if not.\n>\n> Pushed.\n\nBuild farm not happy... checking...\n\n\n",
"msg_date": "Sat, 14 Dec 2019 17:05:51 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
{
"msg_contents": "On Sat, Dec 14, 2019 at 5:05 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Pushed.\n>\n> Build farm not happy... checking...\n\nHrmph. FileGetRawDesc() does not contain a call to FileAccess(), so\nthis is failing on low-fd-limit systems. Looking into a way to fix\nthat...\n\n\n",
"msg_date": "Sat, 14 Dec 2019 17:32:12 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
},
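For readers unfamiliar with the vfd layer under discussion: a virtual file descriptor's kernel fd may have been closed to respect the open-file limit, so every use must go through an "ensure it is open" step (PostgreSQL's FileAccess()), and that step is exactly what a raw-descriptor accessor skips. A toy sketch of the pattern (illustrative Python with invented names, not fd.c):

```python
import os, tempfile

class VFD:
    """Tiny stand-in for a vfd: the kernel fd can be closed at any time to
    respect the fd limit, so each access re-opens on demand."""
    def __init__(self, path):
        self.path = path
        self.fd = None                  # kernel fd; None when evicted

    def _access(self):                  # analogue of FileAccess()
        if self.fd is None:
            self.fd = os.open(self.path, os.O_RDWR)
        return self.fd

    def evict(self):                    # simulate LRU closing the kernel fd
        if self.fd is not None:
            os.close(self.fd)
            self.fd = None

    def sync(self):
        os.fsync(self._access())        # safe even after eviction

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "segment")
    open(path, "w").close()
    f = VFD(path)
    f.sync()
    f.evict()          # the raw fd is now gone...
    f.sync()           # ...but going through _access() reopens it
    ok = f.fd is not None
    f.evict()
print(ok)
```

Handing out `f.fd` directly after `evict()` would yield a stale (closed) descriptor, which is the shape of the FileGetRawDesc() problem on low-fd-limit systems.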
{
"msg_contents": "On Sat, Dec 14, 2019 at 5:32 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Dec 14, 2019 at 5:05 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > Pushed.\n> >\n> > Build farm not happy... checking...\n>\n> Hrmph. FileGetRawDesc() does not contain a call to FileAccess(), so\n> this is failing on low-fd-limit systems. Looking into a way to fix\n> that...\n\nSeemed best not to use FileGetRawDesc(). Rewritten to use only File,\nand tested with the torture-test mentioned upthread under ulimit -n\n50.\n\n\n",
"msg_date": "Sat, 14 Dec 2019 19:15:31 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: checkpointer: PANIC: could not fsync file: No such file or\n directory"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile checking initdb code, I found one segmentation fault, stack\ntrace for the same is:\nCore was generated by `./initdb -D data6'.\nProgram terminated with signal 11, Segmentation fault.\n#0 0x000000000040ea22 in main (argc=3, argv=0x7ffc82237308) at initdb.c:3340\n3340 printf(_(\"\\nSuccess. You can now start the database server\nusing:\\n\\n\"\n\nAnalysis for the same is given below:\ncreatePQExpBuffer allocates memory and returns the pointer, there is a\npossibility that createPQExpBuffer can return a NULL pointer in case of\nmalloc failure, but initdb's main function does not check this\ncondition. During malloc failure, when the pointer is accessed it results\nin a segmentation fault. Made changes to check and exit if\ncreatePQExpBuffer returns a NULL pointer. Patch for the same is\nattached.\n\nLet me know your thoughts for the same. A similar issue exists in a few\nother places; if the changes are ok, I can check and fix the issue in\nother places also.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 19 Nov 2019 20:04:04 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "initdb SegFault"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> createPQExpBuffer allocates memory and returns the pointer, there is a\n> possibility that createPQExpBuffer can return NULL pointer in case of\n> malloc failiure, but initdb's main function does not check this\n> condition. During malloc failure when pointer is accessed it results\n> in segmentation fault. Made changes to check and exit if\n> createPQExpBuffer return's NULL pointer. Patch for the same is\n> attached.\n\nI can't get excited about this, for several reasons.\n\n1) The probability of it happening in the field is not\ndistinguishable from zero, surely. I imagine you forced this\nfailure by making a debugging malloc fail occasionally.\n\n2) If we really are out of memory at this point, we'd have just as good\nodds that some allocation request inside pg_log_error() would fail.\nThere's no practical way to ensure that that code path remains free\nof malloc attempts. (Not to mention cleanup_directories_atexit().)\n\n3) In the end, an initdb failure is an initdb failure. This change\ndoesn't improve robustness by any useful metric, it just adds an\nuntestable code path. If we could recover somehow, it'd be more\ninteresting to spend time on.\n\nBTW, looking at the small minority of places that bother to test\nfor createPQExpBuffer failure, the correct test for that seems\nto be PQExpBufferBroken().\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Nov 2019 10:16:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initdb SegFault"
},
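The check-once idiom Tom refers to relies on the buffer latching its error state: once an allocation fails, the buffer is marked broken and later appends become no-ops, so a single PQExpBufferBroken()-style test at the end suffices instead of checking every append. A sketch of that behaviour (in Python, for illustration only; the real API is C, in pqexpbuffer.h):

```python
class ExpBuffer:
    """Toy version of the pattern: allocation failure flips a sticky
    `broken` flag and subsequent appends do nothing."""
    def __init__(self, alloc=lambda s: s):
        self.parts, self.broken = [], False
        self.alloc = alloc              # stand-in for malloc/realloc

    def append(self, s):
        if self.broken:
            return                      # no-op once broken
        try:
            self.parts.append(self.alloc(s))
        except MemoryError:
            self.broken = True

    def value(self):
        return "".join(self.parts)

def failing_alloc(s):
    raise MemoryError                   # simulated OOM

buf = ExpBuffer()
for piece in ("CREATE ", "DATABASE ", "template0"):
    buf.append(piece)                   # no per-append checks needed

oom = ExpBuffer(alloc=failing_alloc)
oom.append("x")
oom.append("y")                         # silently ignored once broken
print(buf.broken, oom.broken, buf.value())
```

The caller then tests `broken` once before using the result, which is the shape of the PQExpBufferBroken() check mentioned above.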
{
"msg_contents": "Hi,\n\nOn 2019-11-19 10:16:02 -0500, Tom Lane wrote:\n> vignesh C <vignesh21@gmail.com> writes:\n> > createPQExpBuffer allocates memory and returns the pointer, there is a\n> > possibility that createPQExpBuffer can return a NULL pointer in case of\n> > malloc failure, but initdb's main function does not check this\n> > condition. During malloc failure, when the pointer is accessed it results\n> > in a segmentation fault. Made changes to check and exit if\n> > createPQExpBuffer returns a NULL pointer. Patch for the same is\n> > attached.\n> \n> I can't get excited about this, for several reasons.\n> \n> 1) The probability of it happening in the field is not\n> distinguishable from zero, surely. I imagine you forced this\n> failure by making a debugging malloc fail occasionally.\n\nAgreed wrt this specific failure scenario. It does however seem not\ngreat that callsites for PQExpBuffer ought to check every call for\nallocation failures, in the general case.\n\nI do think it'd be reasonable for the cases where \"graceful\" dealing\nwith OOM isn't necessary to use an interface that\ninternally errors out on memory allocation failures. Kinda thinking we\nought to slowly move such paths towards stringinfo...\n\n\n> 2) If we really are out of memory at this point, we'd have just as good\n> odds that some allocation request inside pg_log_error() would fail.\n> There's no practical way to ensure that that code path remains free\n> of malloc attempts. (Not to mention cleanup_directories_atexit().)\n\nI wonder if, for frontend paths, a simplified error handling path would\nbe worthwhile for OOM paths. Doing only a write() or such to print an\nerror message.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 19 Nov 2019 08:10:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: initdb SegFault"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Agreed wrt this specific failure scenario. It does however seem not\n> great that callsites for PQExpBuffer ought to check every call for\n> allocation failures, in the general case.\n\nIt is possible to check just once at the end, using the PQExpBufferBroken\nAPI, and I believe that libpq for instance is fairly careful about that.\n\nI agree that programs that just need to print something and exit could\nperhaps ask pqexpbuffer.c to handle that for them. (But initdb still\ndoesn't fall in that category, because of its very nontrivial atexit\nhandler :-(.)\n\n> I wonder if, for frontend paths, a simplified error handling path would\n> be worthwhile for OOM paths. Doing only a write() or such to print an\n> error message.\n\nPerhaps. You wouldn't get any translation --- but then, gettext is\nprobably going to fail anyway under such conditions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Nov 2019 12:06:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initdb SegFault"
},
{
"msg_contents": "At Tue, 19 Nov 2019 12:06:50 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Andres Freund <andres@anarazel.de> writes:\n> > Agreed wrt this specific failure scenario. It does however seem not\n> > great that callsites for PQExpBuffer ought to check every call for\n> > allocation failures, in the general case.\n> \n> It is possible to check just once at the end, using the PQExpBufferBroken\n> API, and I believe that libpq for instance is fairly careful about that.\n\nFWIW, I looked through the callers of PQExpBuffer.\n\npqGetErrorNotice3 seems to ignore OOM on the message buffer when !isError,\nand then sets res->errMsg to NULL. getParameterStatus doesn't check that\nbefore use, either.\n\nMost of the callers of PQExpBufferDataBroken use libpq_gettext(\"out of\nmemory\"). And some of them do strdup(libpq_gettext()).\n\nNot restricting to libpq functions, \n\ndblink_connstr_check complains that \"password is required\" when\nPQconninfoParse hits OOM.\n\nlibpqrcv_check_conninfo() will show '(null)' or maybe get a SEGV on some\nplatforms when PQconninfoParse() hits OOM, since it uses err without\nnull checking. pg_basebackup, pg_dumpall and pg_isready are doing the\nsame thing.\n\n\n> I agree that programs that just need to print something and exit could\n> perhaps ask pqexpbuffer.c to handle that for them. (But initdb still\n> doesn't fall in that category, because of its very nontrivial atexit\n> handler :-(.)\n> \n> > I wonder if, for frontend paths, a simplified error handling path would\n> > be worthwhile for OOM paths. Doing only a write() or such to print an\n> > error message.\n> \n> Perhaps. You wouldn't get any translation --- but then, gettext is\n> probably going to fail anyway under such conditions.\n\nI think we should refrain from translating in those cases.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 20 Nov 2019 11:11:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb SegFault"
}
] |
[
{
"msg_contents": "Tom implemented \"Planner support functions\":\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=a391ff3c3d418e404a2c6e4ff0865a107752827b\nhttps://www.postgresql.org/docs/12/xfunc-optimization.html\n\nI wondered whether there was any consideration to extend that to allow\nproviding improved estimates of \"group by\". That currently requires manually\ncreating an expression index, if the function is IMMUTABLE (which is not\ntrue for eg. date_trunc of timestamptz).\n\nts=# explain analyze SELECT date_trunc('day', start_time) FROM child.alu_amms_201911 GROUP BY 1;\n HashAggregate (cost=87.34..98.45 rows=889 width=8) (actual time=1.476..1.482 rows=19 loops=1)\n\nts=# explain analyze SELECT date_trunc('year', start_time) FROM child.alu_amms_201911 GROUP BY 1;\n HashAggregate (cost=87.34..98.45 rows=889 width=8) (actual time=1.499..1.500 rows=1 loops=1)\n\nts=# CREATE INDEX ON child.alu_amms_201911 (date_trunc('year',start_time));\nts=# ANALYZE child.alu_amms_201911;\nts=# explain analyze SELECT date_trunc('year', start_time) FROM child.alu_amms_201911 GROUP BY 1;\n HashAggregate (cost=87.34..87.35 rows=1 width=8) (actual time=1.414..1.414 rows=1 loops=1)\n\n\n",
"msg_date": "Tue, 19 Nov 2019 13:34:21 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "planner support functions: handle GROUP BY estimates ?"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 01:34:21PM -0600, Justin Pryzby wrote:\n> Tom implemented \"Planner support functions\":\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=a391ff3c3d418e404a2c6e4ff0865a107752827b\n> https://www.postgresql.org/docs/12/xfunc-optimization.html\n> \n> I wondered whether there was any consideration to extend that to allow\n> providing improved estimates of \"group by\". That currently requires manually\n> by creating an expression index, if the function is IMMUTABLE (which is not\n> true for eg. date_trunc of timestamptz).\n\nI didn't hear back so tried implementing this for date_trunc(). Currently, the\nplanner assumes that functions output equally many groups as their input\nvariables. Most invocations of our reports use date_trunc (or similar), so my\nearlier attempt to alert on rowcount misestimates was very brief.\n\nI currently assume that the input data has 1 second granularity:\n|postgres=# CREATE TABLE t(i) AS SELECT date_trunc('second',a)a FROM generate_series(now(), now()+'7 day'::interval, '1 seconds')a; ANALYZE t;\n|postgres=# explain analyze SELECT date_trunc('hour',i) i FROM t GROUP BY 1;\n| Group (cost=9021.85..9042.13 rows=169 width=8) (actual time=1365.934..1366.453 rows=169 loops=1)\n|\n|postgres=# explain analyze SELECT date_trunc('minute',i) i FROM t GROUP BY 1;\n| Finalize HashAggregate (cost=10172.79..10298.81 rows=10081 width=8) (actual time=1406.057..1413.413 rows=10081 loops=1)\n|\n|postgres=# explain analyze SELECT date_trunc('day',i) i FROM t GROUP BY 1;\n| Group (cost=9013.71..9014.67 rows=8 width=8) (actual time=1582.998..1583.030 rows=8 loops=1)\n\nIf the input timestamps have (say) hourly granularity, rowcount will be\n*underestimated* by 3600x, which is worse than the behavior in master of\noverestimating by (for \"day\") 24x.\n\nI'm trying to think of ways to address that:\n\n0) Add a fudge factor of 4x or maybe 30x;\n\n1) Avoid applying a corrective factor for seconds or 
minutes that makes the\nrowcount less than (say) 2 or 100. That would divide 24 but might then avoid\nthe last /60 or /60/60. Ultimately, that's more \"fudge\" than anything else;\n\n2) Leave alone pg_catalog.date_trunc(), but provide \"template\" support\nfunctions like timestamp_support_10pow1, 10pow2, 10pow3, etc, which include the\ngiven corrective factor, which should allow more accurate rowcount for input\ndata with granularity of the given number of seconds.\n\nIdeally, that would be a user-specified factor, but I don't think that's possible\nto specify in SQL; the constant has to be built into the C function. At\ntelsasoft, our data mostly has 15minute granularity (900sec), so we'd maybe\nmake a \"date_trunc\" function in the user schema which calls the\npg_catalog.date_trunc with support function timestamp_support_10pow3;\n\nThere could be a \"base\" support function that accepts a multiplier argument,\nand then any user-provided C extension would be a one-liner specifying an\narbitrary value;\n\n3) Maybe there are better functions than date_trunc() to address;\n\n4) Leave it as a patch in the archives for people to borrow from;\n\nJustin",
"msg_date": "Sun, 22 Dec 2019 18:16:48 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: planner support functions: handle GROUP BY estimates ?"
},
{
"msg_contents": "On Sun, Dec 22, 2019 at 06:16:48PM -0600, Justin Pryzby wrote:\n> On Tue, Nov 19, 2019 at 01:34:21PM -0600, Justin Pryzby wrote:\n> > Tom implemented \"Planner support functions\":\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=a391ff3c3d418e404a2c6e4ff0865a107752827b\n> > https://www.postgresql.org/docs/12/xfunc-optimization.html\n> > \n> > I wondered whether there was any consideration to extend that to allow\n> > providing improved estimates of \"group by\". That currently requires manually\n> > by creating an expression index, if the function is IMMUTABLE (which is not\n> > true for eg. date_trunc of timestamptz).\n> \n> I didn't hear back so tried implementing this for date_trunc(). Currently, the\n\n> I currently assume that the input data has 1 second granularity:\n...\n> If the input timestamps have (say) hourly granularity, rowcount will be\n> *underestimated* by 3600x, which is worse than the behavior in master of\n> overestimating by (for \"day\") 24x.\n> \n> I'm trying to think of ways to address that:\n\nIn the attached, I handled that by using histogram and variable's initial\nndistinct estimate, giving good estimates even for intermediate granularities\nof input timestamps.\n\n|postgres=# DROP TABLE IF EXISTS t; CREATE TABLE t(i) AS SELECT a FROM generate_series(now(), now()+'11 day'::interval, '15 minutes')a,generate_series(1,9)b; ANALYZE t;\n|\n|postgres=# explain analyze SELECT date_trunc('hour',i) i FROM t GROUP BY 1;\n| HashAggregate (cost=185.69..188.99 rows=264 width=8) (actual time=42.110..42.317 rows=265 loops=1)\n|\n|postgres=# explain analyze SELECT date_trunc('minute',i) i FROM t GROUP BY 1;\n| HashAggregate (cost=185.69..198.91 rows=1057 width=8) (actual time=41.685..42.264 rows=1057 loops=1)\n|\n|postgres=# explain analyze SELECT date_trunc('day',i) i FROM t GROUP BY 1;\n| HashAggregate (cost=185.69..185.83 rows=11 width=8) (actual time=46.672..46.681 rows=12 loops=1)\n|\n|postgres=# explain 
analyze SELECT date_trunc('second',i) i FROM t GROUP BY 1;\n| HashAggregate (cost=185.69..198.91 rows=1057 width=8) (actual time=41.816..42.435 rows=1057 loops=1)",
"msg_date": "Thu, 26 Dec 2019 15:32:50 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: planner support functions: handle GROUP BY estimates ?"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sun, Dec 22, 2019 at 06:16:48PM -0600, Justin Pryzby wrote:\n>> On Tue, Nov 19, 2019 at 01:34:21PM -0600, Justin Pryzby wrote:\n>>> Tom implemented \"Planner support functions\":\n>>> https://www.postgresql.org/docs/12/xfunc-optimization.html\n>>> I wondered whether there was any consideration to extend that to allow\n>>> providing improved estimates of \"group by\". That currently requires manually\n>>> by creating an expression index, if the function is IMMUTABLE (which is not\n>>> true for eg. date_trunc of timestamptz).\n\n>> I didn't hear back so tried implementing this for date_trunc(). Currently, the\n>> ...\n>> If the input timestamps have (say) hourly granularity, rowcount will be\n>> *underestimated* by 3600x, which is worse than the behavior in master of\n>> overestimating by (for \"day\") 24x.\n\nWhile I don't have any objection in principle to extending the set of\nthings planner support functions can do, it doesn't seem like the idea is\ngiving you all that much traction for this problem. There isn't that much\nknowledge that's specific to date_trunc in this, and instead you've got a\nbunch of generic problems (that would have to be solved again in every\nother function's planner support).\n\nAnother issue is that it seems like this doesn't compose nicely ---\nif the GROUP BY expression is \"f(g(x))\", how do f's support function\nand g's support function interact?\n\nThe direction that I've been wanting to go in for this kind of problem\nis to allow CREATE STATISTICS on an expression, ie if you were concerned\nabout the estimation accuracy for GROUP BY or anything else, you could do\nsomething like\n\nCREATE STATISTICS foo ON date_trunc('day', mod_time) FROM my_table;\n\nThis would have the effect of cueing ANALYZE to gather stats on the\nvalue of that expression, which the planner could then use, very much\nas if you'd created an index on the expression. 
The advantages of\ndoing this rather than making an index are\n\n(1) you don't have to pay the maintenance costs for an index,\n\n(2) we don't have to restrict it to immutable expressions. (Volatile\nexpressions would have to be disallowed, if only because of fear of\nside-effects; but I think we could allow stable expressions just fine.\nWorst case problem is that the stats are stale, but so what?)\n\nWith a solution like this, we don't have to solve any of the difficult\nproblems of how the pieces of the expression interact with each other\nor with the statistics of the underlying column(s). We just use the\nstats if available, and the estimate will be as good as it'd be for\na plain column reference.\n\nI'm not sure how much new infrastructure would have to be built\nfor this. We designed the CREATE STATISTICS syntax to support\nthis (partly at my insistence IIRC) but I do not think any of the\nexisting plumbing is ready for it. I don't think it'd be very\nhard to plug this into ANALYZE or the planner, but there might be\nquite some work to be done on the catalog infrastructure, pg_dump,\netc.\n\ncc'ing Tomas in case he has any thoughts about it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 15:12:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: planner support functions: handle GROUP BY estimates ?"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 03:12:21PM -0500, Tom Lane wrote:\n>Justin Pryzby <pryzby@telsasoft.com> writes:\n>> On Sun, Dec 22, 2019 at 06:16:48PM -0600, Justin Pryzby wrote:\n>>> On Tue, Nov 19, 2019 at 01:34:21PM -0600, Justin Pryzby wrote:\n>>>> Tom implemented \"Planner support functions\":\n>>>> https://www.postgresql.org/docs/12/xfunc-optimization.html\n>>>> I wondered whether there was any consideration to extend that to allow\n>>>> providing improved estimates of \"group by\". That currently requires manually\n>>>> by creating an expression index, if the function is IMMUTABLE (which is not\n>>>> true for eg. date_trunc of timestamptz).\n>\n>>> I didn't hear back so tried implementing this for date_trunc(). Currently, the\n>>> ...\n>>> If the input timestamps have (say) hourly granularity, rowcount will be\n>>> *underestimated* by 3600x, which is worse than the behavior in master of\n>>> overestimating by (for \"day\") 24x.\n>\n>While I don't have any objection in principle to extending the set of\n>things planner support functions can do, it doesn't seem like the idea is\n>giving you all that much traction for this problem. 
There isn't that much\n>knowledge that's specific to date_trunc in this, and instead you've got a\n>bunch of generic problems (that would have to be solved again in every\n>other function's planner support).\n>\n>Another issue is that it seems like this doesn't compose nicely ---\n>if the GROUP BY expression is \"f(g(x))\", how do f's support function\n>and g's support function interact?\n>\n>The direction that I've been wanting to go in for this kind of problem\n>is to allow CREATE STATISTICS on an expression, ie if you were concerned\n>about the estimation accuracy for GROUP BY or anything else, you could do\n>something like\n>\n>CREATE STATISTICS foo ON date_trunc('day', mod_time) FROM my_table;\n>\n>This would have the effect of cueing ANALYZE to gather stats on the\n>value of that expression, which the planner could then use, very much\n>as if you'd created an index on the expression. The advantages of\n>doing this rather than making an index are\n>\n>(1) you don't have to pay the maintenance costs for an index,\n>\n>(2) we don't have to restrict it to immutable expressions. (Volatile\n>expressions would have to be disallowed, if only because of fear of\n>side-effects; but I think we could allow stable expressions just fine.\n>Worst case problem is that the stats are stale, but so what?)\n>\n>With a solution like this, we don't have to solve any of the difficult\n>problems of how the pieces of the expression interact with each other\n>or with the statistics of the underlying column(s). We just use the\n>stats if available, and the estimate will be as good as it'd be for\n>a plain column reference.\n>\n>I'm not sure how much new infrastructure would have to be built\n>for this. We designed the CREATE STATISTICS syntax to support\n>this (partly at my insistence IIRC) but I do not think any of the\n>existing plumbing is ready for it. 
I don't think it'd be very\n>hard to plug this into ANALYZE or the planner, but there might be\n>quite some work to be done on the catalog infrastructure, pg_dump,\n>etc.\n>\n>cc'ing Tomas in case he has any thoughts about it.\n>\n\nWell, I certainly do thoughts about this - it's pretty much exactly what\nI proposed yesterday in this thread:\n\n https://www.postgresql.org/message-id/flat/20200113230008.g67iyk4cs3xbnjju@development\n\nThe third part of that patch series is exactly about supporting extended\nstatistics on expressions, about the way you described here. The current\nstatus of the WIP patch is that grammar + ANALYZE mostly works, but\nthere is no support in the planner. It's obviously still very hackish.\n\nThe main thing I'm not sure about is how to represent this in catalogs,\nwhether to have two fields (like for indexes) or maybe a single list of\nexpressions.\n\n\nI'm also wondering if we could/should 100% rely on extended statistics,\nbecause those are really meant to track correlations between columns,\nwhich means we currently require at least two attributes in CREATE\nSTATISTICS and so on. So maybe what we want is collecting \"regular\"\nper-column stats just like we do for indexes, but without the index\nmaintenance overhead?\n\nThe advantage would be we'd get exactly the same stats as for indexes,\nand we could use them in the same places out of the box. While with\nextended stats we'll have to tweak those places.\n\nNow, the trouble is we can't store stuff in pg_statistic without having\na relation (i.e. table / index / ...) but maybe we could invent a new \nrelation type for this purpose. Of course, it'd require some catalog\nwork to represent this ...\n\n\nUltimately I think we'd want both things, it's not one or the other.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 14 Jan 2020 21:53:49 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: planner support functions: handle GROUP BY estimates ?"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Tue, Jan 14, 2020 at 03:12:21PM -0500, Tom Lane wrote:\n>> cc'ing Tomas in case he has any thoughts about it.\n\n> Well, I certainly do thoughts about this - it's pretty much exactly what\n> I proposed yesterday in this thread:\n> https://www.postgresql.org/message-id/flat/20200113230008.g67iyk4cs3xbnjju@development\n> The third part of that patch series is exactly about supporting extended\n> statistics on expressions, about the way you described here. The current\n> status of the WIP patch is that grammar + ANALYZE mostly works, but\n> there is no support in the planner. It's obviously still very hackish.\n\nCool. We should probably take the discussion to that thread, then.\n\n> I'm also wondering if we could/should 100% rely on extended statistics,\n> because those are really meant to track correlations between columns,\n\nYeah, it seems likely to me that the infrastructure for this would be\nsomewhat different --- the user-facing syntax could be basically the\nsame, but ultimately we want to generate entries in pg_statistic not\npg_statistic_ext_data. Or at least entries that look the same as what\nyou could find in pg_statistic.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 16:21:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: planner support functions: handle GROUP BY estimates ?"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 04:21:57PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Tue, Jan 14, 2020 at 03:12:21PM -0500, Tom Lane wrote:\n>>> cc'ing Tomas in case he has any thoughts about it.\n>\n>> Well, I certainly do thoughts about this - it's pretty much exactly what\n>> I proposed yesterday in this thread:\n>> https://www.postgresql.org/message-id/flat/20200113230008.g67iyk4cs3xbnjju@development\n>> The third part of that patch series is exactly about supporting extended\n>> statistics on expressions, about the way you described here. The current\n>> status of the WIP patch is that grammar + ANALYZE mostly works, but\n>> there is no support in the planner. It's obviously still very hackish.\n>\n>Cool. We should probably take the discussion to that thread, then.\n>\n>> I'm also wondering if we could/should 100% rely on extended statistics,\n>> because those are really meant to track correlations between columns,\n>\n>Yeah, it seems likely to me that the infrastructure for this would be\n>somewhat different --- the user-facing syntax could be basically the\n>same, but ultimately we want to generate entries in pg_statistic not\n>pg_statistic_ext_data. Or at least entries that look the same as what\n>you could find in pg_statistic.\n>\n\nYeah. I think we could invent a new type of statistics \"expressions\"\nwhich would simply built this per-column stats. So for example\n\n CREATE STATISTICS s (expressions) ON (a*b), sqrt(c) FROM t;\n\nwould build per-column stats stored in pg_statistics, while\n\n CREATE STATISTICS s (mcv) ON (a*b), sqrt(c) FROM t;\n\nwould build the multi-column MCV list on expressions.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 14 Jan 2020 22:45:21 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: planner support functions: handle GROUP BY estimates ?"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Tue, Jan 14, 2020 at 04:21:57PM -0500, Tom Lane wrote:\n>> Yeah, it seems likely to me that the infrastructure for this would be\n>> somewhat different --- the user-facing syntax could be basically the\n>> same, but ultimately we want to generate entries in pg_statistic not\n>> pg_statistic_ext_data. Or at least entries that look the same as what\n>> you could find in pg_statistic.\n\n> Yeah. I think we could invent a new type of statistics \"expressions\"\n> which would simply built this per-column stats. So for example\n> CREATE STATISTICS s (expressions) ON (a*b), sqrt(c) FROM t;\n\nI was imagining the type keyword as being \"standard\" or something\nlike that, since what it's going to build are the \"standard\" kinds\nof stats for the expression's datatype. But yeah, has to be some other\nkeyword than the existing ones.\n\nThe main issue for sticking the results into pg_statistic is that\nthe primary key there is (starelid, staattnum), and we haven't got\na suitable attnum. I wouldn't much object to putting the data into\npg_statistic_ext_data, but it doesn't really have a suitable\nrowtype ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 16:52:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: planner support functions: handle GROUP BY estimates ?"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 04:52:44PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Tue, Jan 14, 2020 at 04:21:57PM -0500, Tom Lane wrote:\n>>> Yeah, it seems likely to me that the infrastructure for this would be\n>>> somewhat different --- the user-facing syntax could be basically the\n>>> same, but ultimately we want to generate entries in pg_statistic not\n>>> pg_statistic_ext_data. Or at least entries that look the same as what\n>>> you could find in pg_statistic.\n>\n>> Yeah. I think we could invent a new type of statistics \"expressions\"\n>> which would simply built this per-column stats. So for example\n>> CREATE STATISTICS s (expressions) ON (a*b), sqrt(c) FROM t;\n>\n>I was imagining the type keyword as being \"standard\" or something\n>like that, since what it's going to build are the \"standard\" kinds\n>of stats for the expression's datatype. But yeah, has to be some other\n>keyword than the existing ones.\n>\n>The main issue for sticking the results into pg_statistic is that\n>the primary key there is (starelid, staattnum), and we haven't got\n>a suitable attnum. I wouldn't much object to putting the data into\n>pg_statistic_ext_data, but it doesn't really have a suitable\n>rowtype ...\n\nWell, that's why I proposed to essentially build a fake \"relation\" just\nfor this purpose. So we'd have a pg_class entry with a special relkind,\nattnums and all that. And the expressions would be stored either in\npg_statistic_ext or in a new catalog. But maybe that's nonsense.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 14 Jan 2020 23:12:47 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: planner support functions: handle GROUP BY estimates ?"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Tue, Jan 14, 2020 at 04:52:44PM -0500, Tom Lane wrote:\n>> The main issue for sticking the results into pg_statistic is that\n>> the primary key there is (starelid, staattnum), and we haven't got\n>> a suitable attnum. I wouldn't much object to putting the data into\n>> pg_statistic_ext_data, but it doesn't really have a suitable\n>> rowtype ...\n\n> Well, that's why I proposed to essentially build a fake \"relation\" just\n> for this purpose. So we'd have a pg_class entry with a special relkind,\n> attnums and all that. And the expressions would be stored either in\n> pg_statistic_ext or in a new catalog. But maybe that's nonsense.\n\nSeems pretty yucky. I realize we've already got \"fake relations\" like\nforeign tables and composite types, but the number of special cases\nthose create is very annoying. And you still don't have anyplace to\nput the expressions themselves in such a structure --- I hope you\nweren't going to propose fake pg_index rows for that.\n\nI wonder just how messy it would be to add a column to pg_statistic_ext\nwhose type is the composite type \"pg_statistic\", and drop the required\ndata into that. We've not yet used any composite types in the system\ncatalogs, AFAIR, but since pg_statistic_ext isn't a bootstrap catalog\nit seems like we might be able to get away with it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 17:37:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: planner support functions: handle GROUP BY estimates ?"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 05:37:53PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Tue, Jan 14, 2020 at 04:52:44PM -0500, Tom Lane wrote:\n>>> The main issue for sticking the results into pg_statistic is that\n>>> the primary key there is (starelid, staattnum), and we haven't got\n>>> a suitable attnum. I wouldn't much object to putting the data into\n>>> pg_statistic_ext_data, but it doesn't really have a suitable\n>>> rowtype ...\n>\n>> Well, that's why I proposed to essentially build a fake \"relation\" just\n>> for this purpose. So we'd have a pg_class entry with a special relkind,\n>> attnums and all that. And the expressions would be stored either in\n>> pg_statistic_ext or in a new catalog. But maybe that's nonsense.\n>\n>Seems pretty yucky. I realize we've already got \"fake relations\" like\n>foreign tables and composite types, but the number of special cases\n>those create is very annoying. And you still don't have anyplace to\n>put the expressions themselves in such a structure --- I hope you\n>weren't going to propose fake pg_index rows for that.\n>\n\nNo, I wasn't going to propose fake pg_index rows, because - I actually\nwrote \"stored either in pg_statistic_ext or in a new catalog\" so I was\nthinking about a new catalog (so a dedicated and simplified copy of\npg_index).\n\n>I wonder just how messy it would be to add a column to pg_statistic_ext\n>whose type is the composite type \"pg_statistic\", and drop the required\n>data into that. We've not yet used any composite types in the system\n>catalogs, AFAIR, but since pg_statistic_ext isn't a bootstrap catalog\n>it seems like we might be able to get away with it.\n>\n\nI don't know, but feels a bit awkward to store this type of stats into\npg_statistic_ext, which was meant for multi-column stats. 
Maybe it'd\nwork fine, not sure.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 15 Jan 2020 00:19:13 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: planner support functions: handle GROUP BY estimates ?"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Tue, Jan 14, 2020 at 05:37:53PM -0500, Tom Lane wrote:\n>> I wonder just how messy it would be to add a column to pg_statistic_ext\n>> whose type is the composite type \"pg_statistic\", and drop the required\n>> data into that. We've not yet used any composite types in the system\n>> catalogs, AFAIR, but since pg_statistic_ext isn't a bootstrap catalog\n>> it seems like we might be able to get away with it.\n\n[ I meant pg_statistic_ext_data, obviously ]\n\n> I don't know, but feels a bit awkward to store this type of stats into\n> pg_statistic_ext, which was meant for multi-column stats. Maybe it'd\n> work fine, not sure.\n\nIf we wanted to allow a single statistics object to contain data for\nmultiple expressions, we'd actually need that to be array-of-pg_statistic\nnot just pg_statistic. Seems do-able, but on the other hand we could\njust prohibit having more than one output column in the \"query\" for this\ntype of extended statistic. Either way, this seems far less invasive\nthan either a new catalog or a new relation relkind (to say nothing of\nneeding both, which is where you seemed to be headed).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 18:44:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: planner support functions: handle GROUP BY estimates ?"
},
{
"msg_contents": "On 1/15/20 12:44 AM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Tue, Jan 14, 2020 at 05:37:53PM -0500, Tom Lane wrote:\n>>> I wonder just how messy it would be to add a column to pg_statistic_ext\n>>> whose type is the composite type \"pg_statistic\", and drop the required\n>>> data into that. We've not yet used any composite types in the system\n>>> catalogs, AFAIR, but since pg_statistic_ext isn't a bootstrap catalog\n>>> it seems like we might be able to get away with it.\n> \n> [ I meant pg_statistic_ext_data, obviously ]\n> \n>> I don't know, but feels a bit awkward to store this type of stats into\n>> pg_statistic_ext, which was meant for multi-column stats. Maybe it'd\n>> work fine, not sure.\n> \n> If we wanted to allow a single statistics object to contain data for\n> multiple expressions, we'd actually need that to be array-of-pg_statistic\n> not just pg_statistic. Seems do-able, but on the other hand we could\n> just prohibit having more than one output column in the \"query\" for this\n> type of extended statistic. Either way, this seems far less invasive\n> than either a new catalog or a new relation relkind (to say nothing of\n> needing both, which is where you seemed to be headed).\n> \n\nI've started looking at statistics on expressions too, mostly because it\nseems the extended stats improvements (as discussed in [1]) need that.\n\nThe \"stash pg_statistic records into pg_statistics_ext_data\" approach\nseems simple, but it's not clear to me how to make it work, so I'd\nappreciate some guidance.\n\n\n1) Considering we don't have any composite types in any catalog yet, and\nnaive attempts to just use something like\n\n pg_statistic stxdexprs[1];\n\ndid not work. So I suppose this will require changes to genbki.pl, but\nhonestly, my Perl-fu is non-existent :-(\n\n\n2) Won't it be an issue that pg_statistic contains pseudo-types? 
That\nis, this does not work, for example:\n\n test=# create table t (a pg_statistic[]);\n ERROR: column \"stavalues1\" has pseudo-type anyarray\n\nand it seems unlikely just using this in a catalog would make it work.\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/ad7891d2-e90c-b446-9fe2-7419143847d7%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 16 Nov 2020 18:24:41 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: planner support functions: handle GROUP BY estimates ?"
},
{
"msg_contents": "On Mon, Nov 16, 2020 at 06:24:41PM +0100, Tomas Vondra wrote:\n> On 1/15/20 12:44 AM, Tom Lane wrote:\n> > Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> >> On Tue, Jan 14, 2020 at 05:37:53PM -0500, Tom Lane wrote:\n> >>> I wonder just how messy it would be to add a column to pg_statistic_ext\n> >>> whose type is the composite type \"pg_statistic\", and drop the required\n> >>> data into that. We've not yet used any composite types in the system\n> >>> catalogs, AFAIR, but since pg_statistic_ext isn't a bootstrap catalog\n> >>> it seems like we might be able to get away with it.\n> > \n> > [ I meant pg_statistic_ext_data, obviously ]\n> > \n> >> I don't know, but feels a bit awkward to store this type of stats into\n> >> pg_statistic_ext, which was meant for multi-column stats. Maybe it'd\n> >> work fine, not sure.\n> \n> I've started looking at statistics on expressions too, mostly because it\n> seems the extended stats improvements (as discussed in [1]) need that.\n> \n> The \"stash pg_statistic records into pg_statistics_ext_data\" approach\n> seems simple, but it's not clear to me how to make it work, so I'd\n> appreciate some guidance.\n> \n> \n> 1) Considering we don't have any composite types in any catalog yet, and\n> naive attempts to just use something like\n> \n> pg_statistic stxdexprs[1];\n> \n> did not work. So I suppose this will require changes to genbki.pl, but\n> honestly, my Perl-fu is non-existent :-(\n\nIn the attached, I didn't need to mess with perl.\n\n> 2) Won't it be an issue that pg_statistic contains pseudo-types? That\n> is, this does not work, for example:\n> \n> test=# create table t (a pg_statistic[]);\n> ERROR: column \"stavalues1\" has pseudo-type anyarray\n\nIt works during initdb for the reasons that it's allowed for pg_statistic.\n\n-- \nJustin",
"msg_date": "Tue, 17 Nov 2020 10:18:14 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: planner support functions: handle GROUP BY estimates ?"
},
{
"msg_contents": "\n\nOn 11/17/20 5:18 PM, Justin Pryzby wrote:\n> On Mon, Nov 16, 2020 at 06:24:41PM +0100, Tomas Vondra wrote:\n>> On 1/15/20 12:44 AM, Tom Lane wrote:\n>>> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>>> On Tue, Jan 14, 2020 at 05:37:53PM -0500, Tom Lane wrote:\n>>>>> I wonder just how messy it would be to add a column to pg_statistic_ext\n>>>>> whose type is the composite type \"pg_statistic\", and drop the required\n>>>>> data into that. We've not yet used any composite types in the system\n>>>>> catalogs, AFAIR, but since pg_statistic_ext isn't a bootstrap catalog\n>>>>> it seems like we might be able to get away with it.\n>>>\n>>> [ I meant pg_statistic_ext_data, obviously ]\n>>>\n>>>> I don't know, but feels a bit awkward to store this type of stats into\n>>>> pg_statistic_ext, which was meant for multi-column stats. Maybe it'd\n>>>> work fine, not sure.\n>>\n>> I've started looking at statistics on expressions too, mostly because it\n>> seems the extended stats improvements (as discussed in [1]) need that.\n>>\n>> The \"stash pg_statistic records into pg_statistics_ext_data\" approach\n>> seems simple, but it's not clear to me how to make it work, so I'd\n>> appreciate some guidance.\n>>\n>>\n>> 1) Considering we don't have any composite types in any catalog yet, and\n>> naive attempts to just use something like\n>>\n>> pg_statistic stxdexprs[1];\n>>\n>> did not work. So I suppose this will require changes to genbki.pl, but\n>> honestly, my Perl-fu is non-existent :-(\n> \n> In the attached, I didn't need to mess with perl.\n> \n>> 2) Won't it be an issue that pg_statistic contains pseudo-types? That\n>> is, this does not work, for example:\n>>\n>> test=# create table t (a pg_statistic[]);\n>> ERROR: column \"stavalues1\" has pseudo-type anyarray\n> \n> It works during initdb for the reasons that it's allowed for pg_statistic.\n> \n\nOh, wow! I haven't expected a patch implementing this, that's great. 
I\nowe you a beer or a drink of your choice.\n\nThanks!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 17 Nov 2020 17:46:38 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: planner support functions: handle GROUP BY estimates ?"
}
] |
[
{
"msg_contents": "To mitigate the need for per-table tuning of autovacuum configuration, I'd\nlike to propose a new GUC for autovacuum_vacuum_threshold_max.\n\nCurrently, it seems that I can either set autovacuum_vacuum_scale_factor\nmuch smaller than default on tables with millions of rows, or set a value\nglobally that means small tables are auto vacuumed rarely.\n\nThe default value for this new setting could be -1 or 0 to disable\nthe feature, or something like 100,000 perhaps so that tables with more\nthan 500,000 tuples are candidates for an autovacuum before they would\nwith current default values.",
"msg_date": "Tue, 19 Nov 2019 15:35:50 -0700",
"msg_from": "Michael Lewis <mlewis@entrata.com>",
"msg_from_op": true,
"msg_subject": "Proposal- GUC for max dead rows before autovacuum"
},
{
"msg_contents": "On Tue, 2019-11-19 at 15:35 -0700, Michael Lewis wrote:\n> To mitigate the need for per-table tuning of autovacuum configuration, I'd like to propose a new GUC for autovacuum_vacuum_threshold_max.\n> \n> Currently, it seems that I can either set autovacuum_vacuum_scale_factor much smaller than default on tables with millions of rows,\n> or set a value globally that means small tables are auto vacuumed rarely.\n> \n> The default value for this new setting value could be -1 or 0 to disable the feature, or something like 100,000 perhaps\n> so that tables with more than 500,0000 tuples are candidates for an autovacuum before they would with current default values.\n\nI think this is unnecessary.\nUsually you have problems only with a few tables, and it is no problem\nto set autovacuum parameters on these individually.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 20 Nov 2019 00:01:42 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Proposal- GUC for max dead rows before autovacuum"
}
] |
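The trigger condition Michael proposes to cap can be sketched as a toy model. This is a hedged sketch: `autovacuum_vacuum_threshold_max` is the proposed (not existing) GUC, and the function name is illustrative; the defaults 50 and 0.2 match stock `autovacuum_vacuum_threshold` and `autovacuum_vacuum_scale_factor`.

```python
# Toy model of the autovacuum trigger condition discussed in this thread.
# Stock formula (PostgreSQL defaults): autovacuum a table once
#   dead_tuples > autovacuum_vacuum_threshold
#                 + autovacuum_vacuum_scale_factor * reltuples
# The proposal adds a cap, autovacuum_vacuum_threshold_max (hypothetical),
# with -1 or 0 meaning "feature disabled".

def dead_tuple_trigger(reltuples, base=50, scale=0.2, threshold_max=-1):
    limit = base + scale * reltuples
    if threshold_max > 0:              # proposed cap; <= 0 disables it
        limit = min(limit, threshold_max)
    return limit

# A 10M-row table needs ~2M dead tuples before autovacuum fires today;
# with the proposed cap of 100,000 it would qualify far sooner.
print(dead_tuple_trigger(10_000_000))                         # 2000050.0
print(dead_tuple_trigger(10_000_000, threshold_max=100_000))  # 100000
```

With these defaults a cap of 100,000 starts to matter exactly for tables above 500,000 tuples (50 + 0.2 × 500,000 > 100,000), matching the numbers in the proposal.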
[
{
"msg_contents": "Hi hackers,\n\nWorking on global temporary table I need to define function which \nreturns set of pg_statistic records.\nUnfortunately I failed to declare such function!\nType pg_statistic is defined in postgres.bki so I was not able to refer \nto it in pg_proc.dat file.\nAnd if I explicitly enumerate columns of this type:\n\n\n{ oid => '3434',\n descr => 'show local statistics for global temp table',\n proname => 'pg_gtt_statistic_for_relation', provolatile => 'v', \nproparallel => 'u',\n prorettype => 'record', proretset => 't', proargtypes => 'oid',\n proallargtypes => \n'{oid,oid,int2,bool,float4,int4,float4,int2,int2,int2,int2,int2,oid,oid,oid,oid,oid,oid,oid,oid,oid,oid,_float4,_float4,_float4,_float4,_float4,anyarray,anyarray,anyarray,anyarray,anyarray}',\n proargmodes => \n'{i,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',\n proargnames => \n'{relid,starelid,staattnum,stainherit,stanullfrac,stawidth,stadistinct,stakind1,stakind2,stakind3,stakind4,stakind5,staop1,staop2,staop3,staop4,staop5,stacoll1,stacoll2,stacoll3,stacoll4,stacoll5,stanumbers1,stanumbers2,stanumbers3,stanumbers4,stanumbers5,stavalues1,stavalues2,stavalues3,stavalues4,stavalues5}',\n prosrc => 'pg_gtt_statistic_for_relation' },\n\nthen I go the following error when try to use this function:\n\n a column definition list is required for functions returning \"record\" \nat character 111\n\nThe column definition list provided in pg_proc.dat was rejected because \nit contains reference to anyarray which can not be resolved.\n\nIf I try to declare function in system_views.sql as returning setof \npg_statistic then I got error \"cannot change return type of existing \nfunction\".\n\nCREATE OR REPLACE FUNCTION\n pg_gtt_statistic_for_relation(relid oid) returns setof pg_statistic\nLANGUAGE INTERNAL STRICT\nAS 'pg_gtt_statistic_by_relation';\n\nAnd if I try to declare it as returning record and explicitly cast it to \npg_statistic, then reported error is \"cannot 
cast type record to \npg_statistic\".\n\nSo the only possible way I found is to create extension and define \nfunction in this extension.\nI wonder if there is some better solution?\n\nThanks in advance,\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 20 Nov 2019 12:59:01 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Internal function returning pg_statistic"
},
{
"msg_contents": "Hi\n\nst 20. 11. 2019 v 10:59 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n> Hi hackers,\n>\n> Working on global temporary table I need to define function which\n> returns set of pg_statistic records.\n> Unfortunately I failed to declare such function!\n> Type pg_statistic is defined in postgres.bki so I was not able to refer\n> to it in pg_proc.dat file.\n> And if I explicitly enumerate columns of this type:\n>\n>\nyou can define your function in postgres.bki.\n\nit will not be first\n\nPavel\n\n\n> { oid => '3434',\n> descr => 'show local statistics for global temp table',\n> proname => 'pg_gtt_statistic_for_relation', provolatile => 'v',\n> proparallel => 'u',\n> prorettype => 'record', proretset => 't', proargtypes => 'oid',\n> proallargtypes =>\n>\n> '{oid,oid,int2,bool,float4,int4,float4,int2,int2,int2,int2,int2,oid,oid,oid,oid,oid,oid,oid,oid,oid,oid,_float4,_float4,_float4,_float4,_float4,anyarray,anyarray,anyarray,anyarray,anyarray}',\n> proargmodes =>\n> '{i,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',\n> proargnames =>\n>\n> '{relid,starelid,staattnum,stainherit,stanullfrac,stawidth,stadistinct,stakind1,stakind2,stakind3,stakind4,stakind5,staop1,staop2,staop3,staop4,staop5,stacoll1,stacoll2,stacoll3,stacoll4,stacoll5,stanumbers1,stanumbers2,stanumbers3,stanumbers4,stanumbers5,stavalues1,stavalues2,stavalues3,stavalues4,stavalues5}',\n> prosrc => 'pg_gtt_statistic_for_relation' },\n>\n> then I go the following error when try to use this function:\n>\n> a column definition list is required for functions returning \"record\"\n> at character 111\n>\n> The column definition list provided in pg_proc.dat was rejected because\n> it contains reference to anyarray which can not be resolved.\n>\n> If I try to declare function in system_views.sql as returning setof\n> pg_statistic then I got error \"cannot change return type of existing\n> function\".\n>\n> CREATE OR REPLACE FUNCTION\n> 
pg_gtt_statistic_for_relation(relid oid) returns setof pg_statistic\n> LANGUAGE INTERNAL STRICT\n> AS 'pg_gtt_statistic_by_relation';\n>\n> And if I try to declare it as returning record and explicitly cast it to\n> pg_statistic, then reported error is \"cannot cast type record to\n> pg_statistic\".\n>\n> So the only possible way I found is to create extension and define\n> function in this extension.\n> I wonder if there is some better solution?\n>\n> Thanks in advance,\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n>\n>",
"msg_date": "Wed, 20 Nov 2019 11:26:16 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Internal function returning pg_statistic"
},
{
"msg_contents": "At Wed, 20 Nov 2019 11:26:16 +0100, Pavel Stehule <pavel.stehule@gmail.com> wrote in \n> Hi\n> \n> st 20. 11. 2019 v 10:59 odesílatel Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> napsal:\n> \n> > Hi hackers,\n> >\n> > Working on global temporary table I need to define function which\n> > returns set of pg_statistic records.\n> > Unfortunately I failed to declare such function!\n> > Type pg_statistic is defined in postgres.bki so I was not able to refer\n> > to it in pg_proc.dat file.\n> > And if I explicitly enumerate columns of this type:\n> >\n> >\n> you can define your function in postgres.bki.\n\nMmm. AFAIK it's the old practice. Nowadays we edit pg_proc.dat.\n\n> > { oid => '3434',\n\nWe are encouraged to use OIDs in the range 8000-9999 for\ndevelopment. unused_oids should have suggested some OID above 8000 to\nyou.\n\n> > descr => 'show local statistics for global temp table',\n> > proname => 'pg_gtt_statistic_for_relation', provolatile => 'v',\n> > proparallel => 'u',\n> > prorettype => 'record', proretset => 't', proargtypes => 'oid',\n> > proallargtypes =>\n> >\n> > '{oid,oid,int2,bool,float4,int4,float4,int2,int2,int2,int2,int2,oid,oid,oid,oid,oid,oid,oid,oid,oid,oid,_float4,_float4,_float4,_float4,_float4,anyarray,anyarray,anyarray,anyarray,anyarray}',\n> > proargmodes =>\n> > '{i,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',\n> > proargnames =>\n> >\n> > '{relid,starelid,staattnum,stainherit,stanullfrac,stawidth,stadistinct,stakind1,stakind2,stakind3,stakind4,stakind5,staop1,staop2,staop3,staop4,staop5,stacoll1,stacoll2,stacoll3,stacoll4,stacoll5,stanumbers1,stanumbers2,stanumbers3,stanumbers4,stanumbers5,stavalues1,stavalues2,stavalues3,stavalues4,stavalues5}',\n> > prosrc => 'pg_gtt_statistic_for_relation' },\n> >\n> > then I go the following error when try to use this function:\n> >\n> > a column definition list is required for functions returning \"record\"\n> > at character 111\n> >\n> > The column 
definition list provided in pg_proc.dat was rejected because\n> > it contains reference to anyarray which can not be resolved.\n\nYeah, the reason for the error is anyarray in proallargtypes, which\nprevents the record from resolved as a composite type since any hint\nfor the type is given.\n\nIf one additional INPUT argument is allowed, you can define the\nfuntion as follows.\n\n{...\n proargtypes => 'oid anyarray',\n proallargtypes => '{oid,anyarray,oid,int2,bool,float4,int4,...}',\n proargmodes => '{i,i,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,...}',\n proargnames => '{relid,type,starelid,staattnum,stainherit,...}',\n\nThe second argument tells parser the returning type for the\nanyarrays. I think I saw the same technic somewhere in core but I\ndon't recall.\n\nselect * from pg_gtt_statistic_for_relation(1262, NULL::anyarray) limit 1;\n starelid | staattnum | stainherit | stanullfrac | stawidth | stadistinct | stak\nind1 | stakind2 | stakind3 | stakind4 | stakind5 | staop1 | staop2 | staop3 | st\naop4 | staop5 | stacoll1 | stacoll2 | stacoll3 | stacoll4 | stacoll5 | stanumber\ns1 | stanumbers2 | stanumbers3 | stanumbers4 | stanumbers5 | stavalues1 | staval\nues2 | stavalues3 | stavalues4 | stavalues5 \n----------+-----------+------------+-------------+----------+-------------+-----\n-----+----------+----------+----------+----------+--------+--------+--------+---\n-----+--------+----------+----------+----------+----------+----------+----------\n---+-------------+-------------+-------------+-------------+------------+-------\n-----+------------+------------+------------\n(0 rows)\n\nOr you could hide anyarray in a new type.\n\n> > If I try to declare function in system_views.sql as returning setof\n> > pg_statistic then I got error \"cannot change return type of existing\n> > function\".\n> >\n> > CREATE OR REPLACE FUNCTION\n> > pg_gtt_statistic_for_relation(relid oid) returns setof pg_statistic\n> > LANGUAGE INTERNAL STRICT\n> > AS 'pg_gtt_statistic_by_relation';\n> 
>\n> > And if I try to declare it as returning record and explicitly cast it to\n> > pg_statistic, then reported error is \"cannot cast type record to\n> > pg_statistic\".\n> >\n> > So the only possible way I found is to create extension and define\n> > function in this extension.\n> > I wonder if there is some better solution?\n> > \n> > Thanks in advance,\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 21 Nov 2019 12:32:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Internal function returning pg_statistic"
}
] |
[
{
"msg_contents": "Hi Hackers:\n\n First I found the following queries running bad on pg.\n\n select count(*) from part2 p1 where p_size > 40 and p_retailprice >\n(select avg(p_retailprice) from part2 p2 where p2.p_brand=p1.p_brand);\n\nthe plan is\n QUERY PLAN\n------------------------------------------------------------------------------------\n Aggregate (cost=1899310537.28..1899310537.29 rows=1 width=8)\n -> Seq Scan on part2 p1 (cost=0.00..1899310456.00 rows=32513 width=0)\n Filter: ((p_size > 40) AND (p_retailprice > (SubPlan 1)))\n SubPlan 1\n -> Aggregate (cost=6331.00..6331.01 rows=1 width=32)\n -> Seq Scan on part2 p2 (cost=0.00..5956.00 rows=150000\nwidth=4)\n Filter: (p_brand = p1.p_brand)\n\nhowever if we change it to the following format, it runs pretty quick.\n\nselect count(*) from part2,\n(select p_brand, avg(p_retailprice) as avg_price from part2 where p_size >\n40 group by p_brand) p2\nwhere p_retailprice > p2.avg_price\nand p_size > 40\nand part2.p_brand = p2.p_brand;\n\nThe above example comes from\nhttps://community.pivotal.io/s/article/Pivotal-Query-Optimizer-Explained with\na little modification.\n\n1. why pg can't translate the query 1 to query 2. after some checking\non pull_up_sublinks_qual_recurse, I still don't get the idea.\n2. why pg can't do it, while greenplum can?\n\nThanks",
"msg_date": "Wed, 20 Nov 2019 20:15:19 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "why doesn't optimizer can pull up where a > ( ... )"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 8:15 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Hi Hackers:\n>\n> First I found the following queries running bad on pg.\n>\n> select count(*) from part2 p1 where p_size > 40 and p_retailprice >\n> (select avg(p_retailprice) from part2 p2 where p2.p_brand=p1.p_brand);\n>\n> the plan is\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------\n> Aggregate (cost=1899310537.28..1899310537.29 rows=1 width=8)\n> -> Seq Scan on part2 p1 (cost=0.00..1899310456.00 rows=32513 width=0)\n> Filter: ((p_size > 40) AND (p_retailprice > (SubPlan 1)))\n> SubPlan 1\n> -> Aggregate (cost=6331.00..6331.01 rows=1 width=32)\n> -> Seq Scan on part2 p2 (cost=0.00..5956.00 rows=150000\n> width=4)\n> Filter: (p_brand = p1.p_brand)\n>\n> however if we change it to the following format, it runs pretty quick.\n>\n> select count(*) from part2,\n> (select p_brand, avg(p_retailprice) as avg_price from part2 where p_size >\n> 40 group by p_brand) p2\n> where p_retailprice > p2.avg_price\n> and p_size > 40\n> and part2.p_brand = p2.p_brand;\n>\n> The above example comes from\n> https://community.pivotal.io/s/article/Pivotal-Query-Optimizer-Explained with\n> a litter modification.\n>\n> 1. why pg can't translate the query 1 to query 2. after some checking\n> on pull_up_sublinks_qual_recurse, I still doesn't get the idea.\n> 2. 
why pg can't do it, while greenplum can?\n>\n> Thanks\n>\n>\nadd the sql I used for testing for reference.\n\nCREATE TABLE part2 (\n p_partkey integer NOT NULL,\n p_brand character(10) NOT NULL,\n p_size integer NOT NULL,\n p_retailprice numeric(15,2) NOT NULL\n);\ninsert into part2 select 1, 'brand1', random_between(0, 40),\n random_between(0, 40) from generate_series(1, 100000);\ninsert into part2 select 2, 'brand2', random_between(40, 80),\n random_between(0, 40) from generate_series(1, 100000);\ninsert into part2 select 3, 'brand1', random_between(0, 40),\n random_between(0, 40) from generate_series(1, 100000);",
"msg_date": "Wed, 20 Nov 2019 20:16:33 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: why doesn't optimizer can pull up where a > ( ... )"
},
{
"msg_contents": "> On 20 Nov 2019, at 13:15, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> 2. why pg can't do it, while greenplum can? \n\nIt's worth noting that Greenplum, the example you're referring to, is using a\ncompletely different query planner, and different planners have different\ncharacteristics and capabilities.\n\ncheers ./daniel\n\n\n",
"msg_date": "Wed, 20 Nov 2019 14:19:32 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: why doesn't optimizer can pull up where a > ( ... )"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 20 Nov 2019, at 13:15, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> 2. why pg can't do it, while greenplum can? \n\n> It's worth noting that Greenplum, the example you're referring to, is using a\n> completely different query planner, and different planners have different\n> characteristics and capabilities.\n\nYeah. TBH, I think the described transformation is well out of scope\nfor what PG's planner tries to do. Greenplum is oriented to use-cases\nwhere it might be worth spending lots of planner cycles looking for\noptimizations like this one, but in a wider environment it's much\nharder to make the argument that this would be a profitable use of\nplanner effort. I'm content to say that the application should have\nwritten the query with a GROUP BY to begin with.\n\nHaving said that, the best form of criticism is a patch. If somebody\nactually wrote the code to do something like this, we could look at\nhow much time it wasted in which unsuccessful cases and then have\nan informed discussion about whether it was worth adopting.\n\n(BTW, I do not think the transformation as described is even formally\ncorrect, at least not without some unstated assumptions. How is it\nokay to push down the \"p_size > 40\" condition into the subquery?\nThe aggregation in the original query will include rows where that\nisn't true.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Nov 2019 11:12:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: why doesn't optimizer can pull up where a > ( ... )"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 08:15:19PM +0800, Andy Fan wrote:\n>Hi Hackers:\n>\n> First I found the following queries running bad on pg.\n>\n> select count(*) from part2 p1 where p_size > 40 and p_retailprice >\n>(select avg(p_retailprice) from part2 p2 where p2.p_brand=p1.p_brand);\n>\n>the plan is\n> QUERY PLAN\n>------------------------------------------------------------------------------------\n> Aggregate (cost=1899310537.28..1899310537.29 rows=1 width=8)\n> -> Seq Scan on part2 p1 (cost=0.00..1899310456.00 rows=32513 width=0)\n> Filter: ((p_size > 40) AND (p_retailprice > (SubPlan 1)))\n> SubPlan 1\n> -> Aggregate (cost=6331.00..6331.01 rows=1 width=32)\n> -> Seq Scan on part2 p2 (cost=0.00..5956.00 rows=150000\n>width=4)\n> Filter: (p_brand = p1.p_brand)\n>\n>however if we change it to the following format, it runs pretty quick.\n>\n>select count(*) from part2,\n>(select p_brand, avg(p_retailprice) as avg_price from part2 where p_size >\n>40 group by p_brand) p2\n>where p_retailprice > p2.avg_price\n>and p_size > 40\n>and part2.p_brand = p2.p_brand;\n>\n>The above example comes from\n>https://community.pivotal.io/s/article/Pivotal-Query-Optimizer-Explained with\n>a litter modification.\n>\n>1. why pg can't translate the query 1 to query 2. after some checking\n>on pull_up_sublinks_qual_recurse, I still doesn't get the idea.\n>2. why pg can't do it, while greenplum can?\n>\n\nI don't know the exact place(s) in the optimizer that would need to\nconsider this optimization, but the primary difference between the two\nqueries is that the first one is correlated subquery, while the second\none is not.\n\nSo I guess our optimizer is not smart enough to recognize this pattern,\nand can't do the transformation from\n\n ... FROM p WHERE x > (SELECT avg(x) FROM q WHERE p.id = q.id) ...\n\nto\n\n ... FROM p, (SELECT q.id, avg(x) x FROM q) q2 WHERE q2.id = p.id\n AND q2.x < p.x\n\nI.e. 
we don't have the code to consider this optimization, because no\none considered it interesting enough to submit a patch.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 20 Nov 2019 18:20:01 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: why doesn't optimizer can pull up where a > ( ... )"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 11:12:56AM -0500, Tom Lane wrote:\n>Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 20 Nov 2019, at 13:15, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>>> 2. why pg can't do it, while greenplum can?\n>\n>> It's worth noting that Greenplum, the example you're referring to, is using a\n>> completely different query planner, and different planners have different\n>> characteristics and capabilities.\n>\n>Yeah. TBH, I think the described transformation is well out of scope\n>for what PG's planner tries to do. Greenplum is oriented to use-cases\n>where it might be worth spending lots of planner cycles looking for\n>optimizations like this one, but in a wider environment it's much\n>harder to make the argument that this would be a profitable use of\n>planner effort.\n\nTrue.\n\n>I'm content to say that the application should have written the query\n>with a GROUP BY to begin with.\n>\n\nI'm not sure I agree with that. The problem is this really depends on\nthe number of rows that will need the subquery result (i.e. based on\nselectivity of conditions in the outer query). For small number of rows\nit's fine to execute the subplan repeatedly, for large number of rows\nit's better to rewrite it to the GROUP BY form. It's hard to make those\njudgements in the application, I think.\n\n>Having said that, the best form of criticism is a patch. If somebody\n>actually wrote the code to do something like this, we could look at how\n>much time it wasted in which unsuccessful cases and then have an\n>informed discussion about whether it was worth adopting.\n>\n\nRight.\n\n>(BTW, I do not think the transformation as described is even formally\n>correct, at least not without some unstated assumptions. How is it\n>okay to push down the \"p_size > 40\" condition into the subquery? The\n>aggregation in the original query will include rows where that isn't\n>true.)\n\nYeah. 
I think the examples are a bit messed up, and surely there are\nother restrictions on applicability of this optimization.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 20 Nov 2019 18:25:37 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: why doesn't optimizer can pull up where a > ( ... )"
},
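Tomas's point above — that the profitable form depends on the outer row count — can be made concrete with a toy cost comparison. All names and numbers here are illustrative; this is not the planner's actual cost model.

```python
# Toy cost comparison behind the discussion: re-running a correlated
# subplan once per outer row vs. computing the per-group aggregate once
# (the GROUP BY rewrite) and probing it per row. Units are arbitrary.

def cheaper_form(outer_rows, subplan_cost, build_cost, probe_cost):
    correlated = outer_rows * subplan_cost             # SubPlan re-executed per row
    rewritten = build_cost + outer_rows * probe_cost   # aggregate once, then probe
    return "correlated" if correlated <= rewritten else "rewritten"

# Few outer rows: repeating the subplan is fine. Many rows: the
# one-time aggregation pays for itself, so the rewrite wins.
print(cheaper_form(1, 100.0, 1000.0, 0.1))        # correlated
print(cheaper_form(100_000, 100.0, 1000.0, 0.1))  # rewritten
```

This is exactly why the choice is hard to make in the application, and why costing both forms would raise the planning cost Tom describes.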
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Wed, Nov 20, 2019 at 11:12:56AM -0500, Tom Lane wrote:\n>> I'm content to say that the application should have written the query\n>> with a GROUP BY to begin with.\n\n> I'm not sure I agree with that. The problem is this really depends on\n> the number of rows that will need the subquery result (i.e. based on\n> selectivity of conditions in the outer query). For small number of rows\n> it's fine to execute the subplan repeatedly, for large number of rows\n> it's better to rewrite it to the GROUP BY form. It's hard to make those\n> judgements in the application, I think.\n\nHm. That actually raises the stakes a great deal, because if that's\nwhat you're expecting, it would require planning out both the transformed\nand untransformed versions of the query before you could make a cost\ncomparison. That's a *lot* harder to do in the context of our\noptimizer's structure, and it also means that the feature would consume\neven more planner cycles, than what I was envisioning (namely, a fixed\njointree-prep-stage transformation similar to subquery pullup).\n\nI have no idea whether Greenplum really does it like that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Nov 2019 12:36:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: why doesn't optimizer can pull up where a > ( ... )"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 12:36:50PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Wed, Nov 20, 2019 at 11:12:56AM -0500, Tom Lane wrote:\n>>> I'm content to say that the application should have written the query\n>>> with a GROUP BY to begin with.\n>\n>> I'm not sure I agree with that. The problem is this really depends on\n>> the number of rows that will need the subquery result (i.e. based on\n>> selectivity of conditions in the outer query). For small number of rows\n>> it's fine to execute the subplan repeatedly, for large number of rows\n>> it's better to rewrite it to the GROUP BY form. It's hard to make those\n>> judgements in the application, I think.\n>\n>Hm. That actually raises the stakes a great deal, because if that's\n>what you're expecting, it would require planning out both the transformed\n>and untransformed versions of the query before you could make a cost\n>comparison. That's a *lot* harder to do in the context of our\n>optimizer's structure, and it also means that the feature would consume\n>even more planner cycles, than what I was envisioning (namely, a fixed\n>jointree-prep-stage transformation similar to subquery pullup).\n>\n>I have no idea whether Greenplum really does it like that.\n>\n\nTrue. I'm not really sure how exactly would the planning logic work or\nhow Greenplum does it. It might be the case that based on the use cases\nthey target they simply assume the rewritten query is the right one in\n99% of the cases, so they do the transformation always. Not sure.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 20 Nov 2019 20:18:19 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: why doesn't optimizer can pull up where a > ( ... )"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 11:18 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Wed, Nov 20, 2019 at 12:36:50PM -0500, Tom Lane wrote:\n> >Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> >> On Wed, Nov 20, 2019 at 11:12:56AM -0500, Tom Lane wrote:\n> >>> I'm content to say that the application should have written the query\n> >>> with a GROUP BY to begin with.\n> >\n> >> I'm not sure I agree with that. The problem is this really depends on\n> >> the number of rows that will need the subquery result (i.e. based on\n> >> selectivity of conditions in the outer query). For small number of rows\n> >> it's fine to execute the subplan repeatedly, for large number of rows\n> >> it's better to rewrite it to the GROUP BY form. It's hard to make those\n> >> judgements in the application, I think.\n> >\n> >Hm. That actually raises the stakes a great deal, because if that's\n> >what you're expecting, it would require planning out both the transformed\n> >and untransformed versions of the query before you could make a cost\n> >comparison. That's a *lot* harder to do in the context of our\n> >optimizer's structure, and it also means that the feature would consume\n> >even more planner cycles, than what I was envisioning (namely, a fixed\n> >jointree-prep-stage transformation similar to subquery pullup).\n> >\n> >I have no idea whether Greenplum really does it like that.\n> >\n>\n> True. I'm not really sure how exactly would the planning logic work or\n> how Greenplum does it. It might be the case that based on the use cases\n> they target they simply assume the rewritten query is the right one in\n> 99% of the cases, so they do the transformation always. 
Not sure.\n>\n>\nThe Greenplum page mentions \"join-aggregates reordering\",\nin addition to subquery unnesting.\nCosting pushing joins below aggregates could probably help.\nIt does increase plan search space quite a bit.\n\nRegards,\nXun",
"msg_date": "Wed, 20 Nov 2019 12:34:25 -0800",
"msg_from": "Xun Cheng <xuncheng@google.com>",
"msg_from_op": false,
"msg_subject": "Re: why doesn't optimizer can pull up where a > ( ... )"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 12:34:25PM -0800, Xun Cheng wrote:\n>On Wed, Nov 20, 2019 at 11:18 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>wrote:\n>\n>> On Wed, Nov 20, 2019 at 12:36:50PM -0500, Tom Lane wrote:\n>> >Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> >> On Wed, Nov 20, 2019 at 11:12:56AM -0500, Tom Lane wrote:\n>> >>> I'm content to say that the application should have written the query\n>> >>> with a GROUP BY to begin with.\n>> >\n>> >> I'm not sure I agree with that. The problem is this really depends on\n>> >> the number of rows that will need the subquery result (i.e. based on\n>> >> selectivity of conditions in the outer query). For small number of rows\n>> >> it's fine to execute the subplan repeatedly, for large number of rows\n>> >> it's better to rewrite it to the GROUP BY form. It's hard to make those\n>> >> judgements in the application, I think.\n>> >\n>> >Hm. That actually raises the stakes a great deal, because if that's\n>> >what you're expecting, it would require planning out both the transformed\n>> >and untransformed versions of the query before you could make a cost\n>> >comparison. That's a *lot* harder to do in the context of our\n>> >optimizer's structure, and it also means that the feature would consume\n>> >even more planner cycles, than what I was envisioning (namely, a fixed\n>> >jointree-prep-stage transformation similar to subquery pullup).\n>> >\n>> >I have no idea whether Greenplum really does it like that.\n>> >\n>>\n>> True. I'm not really sure how exactly would the planning logic work or\n>> how Greenplum does it. It might be the case that based on the use cases\n>> they target they simply assume the rewritten query is the right one in\n>> 99% of the cases, so they do the transformation always. 
Not sure.\n>>\n>>\n>The Greenplum page mentions they also added \"join-aggregates reordering\",\n>in addition to subquery unnesting.\n>Costing pushing joins below aggregates could probably help.\n>It does increase plan search space quite a bit.\n>\n\nWe actually do have a patch for aggregate push-down [1]. But I don't\nthink it's directly relevant to this thread - the main trick here is\ntransforming the correlated subquery to aggregation, not moving the\naggregation down. That seems like a separate optimization.\n\n[1] https://commitfest.postgresql.org/25/1247/\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 20 Nov 2019 22:28:51 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: why doesn't optimizer can pull up where a > ( ... )"
},
{
"msg_contents": ">\n>\n> Hm. That actually raises the stakes a great deal, because if that's\n> what you're expecting, it would require planning out both the transformed\n> and untransformed versions of the query before you could make a cost\n> comparison.\n\n\nI don't know an official name, let's call it as \"bloom filter push down\n(BFPD)\" for reference. this algorithm may be helpful on this case with\nsome extra effort.\n\nFirst, Take . \"select ... from t1, t2 where t1.a = t2.a and t1.b = 100\"\nfor example, and assume t1 is scanned before t2 scanning, like hash\njoin/sort merge and take t1's result as inner table.\n\n1. it first scan t1 with filter t1.b = 100;\n2. during the above scan, it build a bloom filter *based on the join key\n(t1.a) for the \"selected\" rows.*\n3. during scan t2.a, it filters t2.a with the bloom filter.\n4. probe the the hash table with the filtered rows from the above step.\n\nBack to this problem, if we have a chance to get the p_brand we are\ninterested, we can use the same logic to only group by the p_brand.\n\nAnother option may be we just keep the N versions, and search them\ndifferently and compare their cost at last.\n\n> The Greenplum page mentions they also added \"join-aggregates\nreordering\", in addition to subquery unnesting.\nThanks, I will search more about this.\n\n>Having said that, the best form of criticism is a patch. If somebody\n>actually wrote the code to do something like this, we could look at how\n>much time it wasted in which unsuccessful cases and then have an\n>informed discussion about whether it was worth adopting.\n>\n\nI would try to see how far I can get.",
"msg_date": "Thu, 21 Nov 2019 08:30:51 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: why doesn't optimizer can pull up where a > ( ... )"
},
{
"msg_contents": "On Thu, Nov 21, 2019 at 08:30:51AM +0800, Andy Fan wrote:\n>>\n>>\n>> Hm. That actually raises the stakes a great deal, because if that's\n>> what you're expecting, it would require planning out both the transformed\n>> and untransformed versions of the query before you could make a cost\n>> comparison.\n>\n>\n>I don't know an official name, let's call it as \"bloom filter push down\n>(BFPD)\" for reference. this algorithm may be helpful on this case with\n>some extra effort.\n>\n>First, Take . \"select ... from t1, t2 where t1.a = t2.a and t1.b = 100\"\n>for example, and assume t1 is scanned before t2 scanning, like hash\n>join/sort merge and take t1's result as inner table.\n>\n>1. it first scan t1 with filter t1.b = 100;\n>2. during the above scan, it build a bloom filter *based on the join key\n>(t1.a) for the \"selected\" rows.*\n>3. during scan t2.a, it filters t2.a with the bloom filter.\n>4. probe the the hash table with the filtered rows from the above step.\n>\n\nSo essentially just a hash join with a bloom filter? That doesn't seem\nvery relevant to this thread (at least I don't see any obvious link),\nbut note that this has been discussed in the past - see [1]. And in some\ncases building a bloom filter did result in nice speedups, but in other\ncases it was just an extra overhead. But it does not require change of\nplan shape, unlike the optimization discussed here.\n\n[1] https://www.postgresql.org/message-id/flat/5670946E.8070705%402ndquadrant.com\n\nUltimately there were discussions about pushing the bloom filter much\ndeeper on the non-hash side, but that was never implemented.\n\n>Back to this problem, if we have a chance to get the p_brand we are\n>interested, we can use the same logic to only group by the p_brand.\n>\n>Another option may be we just keep the N versions, and search them\n>differently and compare their cost at last.\n>\n\nMaybe. 
I think the problem is going to be that with multiple such\ncorrelated queries you may significantly increase the number of plan\nvariants to consider - each subquery may be transformed or not, so the\nspace splits into 2. With 6 such subqueries you suddenly have 64x the\nnumber of plan variants you have to consider (I don't think you can just\nelimiate those early on).\n\n>> The Greenplum page mentions they also added \"join-aggregates\n>reordering\", in addition to subquery unnesting.\n>Thanks, I will search more about this.\n>\n>>Having said that, the best form of criticism is a patch. If somebody\n>>actually wrote the code to do something like this, we could look at how\n>>much time it wasted in which unsuccessful cases and then have an\n>>informed discussion about whether it was worth adopting.\n>>\n>\n>I would try to see how far I can get.\n\n+1\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 21 Nov 2019 11:11:55 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: why doesn't optimizer can pull up where a > ( ... )"
},
{
"msg_contents": "On Thu, Nov 21, 2019 at 6:12 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Thu, Nov 21, 2019 at 08:30:51AM +0800, Andy Fan wrote:\n> >>\n> >>\n> >> Hm. That actually raises the stakes a great deal, because if that's\n> >> what you're expecting, it would require planning out both the\n> transformed\n> >> and untransformed versions of the query before you could make a cost\n> >> comparison.\n> >\n> >\n> >I don't know an official name, let's call it as \"bloom filter push down\n> >(BFPD)\" for reference. this algorithm may be helpful on this case with\n> >some extra effort.\n> >\n> >First, Take . \"select ... from t1, t2 where t1.a = t2.a and t1.b = 100\"\n> >for example, and assume t1 is scanned before t2 scanning, like hash\n> >join/sort merge and take t1's result as inner table.\n> >\n> >1. it first scan t1 with filter t1.b = 100;\n> >2. during the above scan, it build a bloom filter *based on the join key\n> >(t1.a) for the \"selected\" rows.*\n> >3. during scan t2.a, it filters t2.a with the bloom filter.\n> >4. probe the the hash table with the filtered rows from the above step.\n> >\n>\n> So essentially just a hash join with a bloom filter?\n\n\nYes, the idea is exactly same but we treat the value differently (both are\nvalid, and your point is more common) . In my opinion in some\nenvironment like oracle exadata, it is much more powerful since it\ntransfers much less data from data node to compute node.\n\nOf course, the benefit is not always, but it is a good beginning to make\nit smarter.\n\n\n> That doesn't seem very relevant to this thread (at least I don't see any\n> obvious link),\n>\n\nThe original problem \"group by p_brand\" for \"all the rows\" maybe not a\ngood idea all the time, and if we can do some filter before the group\nby, the result would be better.\n\nAnd in some\n> cases building a bloom filter did result in nice speedups, but in other\n> cases it was just an extra overhead. 
But it does not require change of\n> plan shape, unlike the optimization discussed here.\n>\n\nI thought we could add a step named \"build the filter\" and another step as\n\"apply the filter\". If so, the plan shape is changed. anyway I don't\nthink this is a key point.\n\n\n>\n> Ultimately there were discussions about pushing the bloom filter much\n> deeper on the non-hash side, but that was never implemented.\n\n\nDo you still have any plan about this feature since I see you raised the\nidea and and the idea was very welcomed also?\n\n>Back to this problem, if we have a chance to get the p_brand we are\n> >interested, we can use the same logic to only group by the p_brand.\n> >\n> >Another option may be we just keep the N versions, and search them\n> >differently and compare their cost at last.\n> >\n>\n> Maybe. I think the problem is going to be that with multiple such\n> correlated queries you may significantly increase the number of plan\n> variants to consider - each subquery may be transformed or not, so the\n> space splits into 2. With 6 such subqueries you suddenly have 64x the\n> number of plan variants you have to consider (I don't think you can just\n> elimiate those early on).\n>\n> >> The Greenplum page mentions they also added \"join-aggregates\n> >reordering\", in addition to subquery unnesting.\n> >Thanks, I will search more about this.\n> >\n> >>Having said that, the best form of criticism is a patch. 
If somebody\n> >>actually wrote the code to do something like this, we could look at how\n> >>much time it wasted in which unsuccessful cases and then have an\n> >>informed discussion about whether it was worth adopting.\n> >>\n> >\n> >I would try to see how far I can get.\n>\n> +1\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>",
"msg_date": "Thu, 21 Nov 2019 23:57:22 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: why doesn't optimizer can pull up where a > ( ... )"
},
{
"msg_contents": "On Thu, Nov 21, 2019 at 11:57:22PM +0800, Andy Fan wrote:\n>On Thu, Nov 21, 2019 at 6:12 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>wrote:\n>\n>> On Thu, Nov 21, 2019 at 08:30:51AM +0800, Andy Fan wrote:\n>> >>\n>> >>\n>> >> Hm. That actually raises the stakes a great deal, because if that's\n>> >> what you're expecting, it would require planning out both the\n>> transformed\n>> >> and untransformed versions of the query before you could make a cost\n>> >> comparison.\n>> >\n>> >\n>> >I don't know an official name, let's call it as \"bloom filter push down\n>> >(BFPD)\" for reference. this algorithm may be helpful on this case with\n>> >some extra effort.\n>> >\n>> >First, Take . \"select ... from t1, t2 where t1.a = t2.a and t1.b = 100\"\n>> >for example, and assume t1 is scanned before t2 scanning, like hash\n>> >join/sort merge and take t1's result as inner table.\n>> >\n>> >1. it first scan t1 with filter t1.b = 100;\n>> >2. during the above scan, it build a bloom filter *based on the join key\n>> >(t1.a) for the \"selected\" rows.*\n>> >3. during scan t2.a, it filters t2.a with the bloom filter.\n>> >4. probe the the hash table with the filtered rows from the above step.\n>> >\n>>\n>> So essentially just a hash join with a bloom filter?\n>\n>\n>Yes, the idea is exactly same but we treat the value differently (both are\n>valid, and your point is more common) . In my opinion in some\n>environment like oracle exadata, it is much more powerful since it\n>transfers much less data from data node to compute node.\n>\n>Of course, the benefit is not always, but it is a good beginning to make\n>it smarter.\n>\n\nYes, it certainly depends on the workload. As was discussed in the\nother thread, to get the most benefit we'd have to push the bloom filter\ndown the other side of the join as far as possible, ideally to the scan\nnodes. 
But no one tried to do that.\n\n>\n>> That doesn't seem very relevant to this thread (at least I don't see any\n>> obvious link),\n>>\n>\n>The original problem \"group by p_brand\" for \"all the rows\" maybe not a\n>good idea all the time, and if we can do some filter before the group\n>by, the result would be better.\n>\n\nWell, I think vast majority of optimizations depend on the data. The\nreason why I think these two optimizations are quite different is that\none (blom filter with hash joins) is kinda localized and does not change\nthe general plan shape - you simply make the decision at the hash join\nlevel, and that's it (although it's true it does affect row counts on\none side of the join).\n\nThe optimization discussed here is very different because it requires\ntransformation of the query very early, before we actually can judge if\nit's a good idea or not.\n\n>> And in some\n>> cases building a bloom filter did result in nice speedups, but in other\n>> cases it was just an extra overhead. But it does not require change of\n>> plan shape, unlike the optimization discussed here.\n>>\n>\n>I thought we could add a step named \"build the filter\" and another step as\n>\"apply the filter\". If so, the plan shape is changed. anyway I don't\n>think this is a key point.\n>\n\nNot sure. Perhaps there are similarities, but I don't see them.\n\n>\n>>\n>> Ultimately there were discussions about pushing the bloom filter much\n>> deeper on the non-hash side, but that was never implemented.\n>\n>\n>Do you still have any plan about this feature since I see you raised the\n>idea and and the idea was very welcomed also?\n>\n\nI'm not working on it, and I don't think I'll get to do that any time\nsoon. So feel free to look into the problem if you wish.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 21 Nov 2019 18:15:27 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: why doesn't optimizer can pull up where a > ( ... )"
}
] |
[
{
"msg_contents": "On Wed, Aug 07, 2019 at 04:51:54PM -0700, Andres Freund wrote:\nhttps://www.postgresql.org/message-id/20190807235154.erbmr4o4bo6vgnjv%40alap3.anarazel.de\n| Ugh :(\n| \n| We really need to add a error context to vacuumlazy that shows which\n| block is being processed.\n\nI eeked out a minimal patch.\n\nI renamed \"StringInfoData buf\", since it wasn't nice to mask it by\n\"Buffer buf\".\n\npostgres=# SET statement_timeout=99;vacuum t;\nSET\n2019-11-20 14:52:49.521 CST [6319] ERROR: canceling statement due to statement timeout\n2019-11-20 14:52:49.521 CST [6319] CONTEXT: block 596\n2019-11-20 14:52:49.521 CST [6319] STATEMENT: vacuum t;\n\nJustin",
"msg_date": "Wed, 20 Nov 2019 15:06:00 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "error context for vacuum to include block number"
},
{
"msg_contents": "Find attached updated patch:\n . Use structure to include relation name.\n . Split into a separate patch rename of \"StringInfoData buf\".\n\n2019-11-27 20:04:53.640 CST [14244] ERROR: canceling statement due to statement timeout\n2019-11-27 20:04:53.640 CST [14244] CONTEXT: block 2314 of relation t\n2019-11-27 20:04:53.640 CST [14244] STATEMENT: vacuum t;\n\nI tried to use BufferGetTag() to avoid using a 2ndary structure, but fails if\nthe buffer is not pinned.",
"msg_date": "Fri, 6 Dec 2019 10:23:25 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, Dec 06, 2019 at 10:23:25AM -0600, Justin Pryzby wrote:\n> Find attached updated patch:\n> . Use structure to include relation name.\n> . Split into a separate patch rename of \"StringInfoData buf\".\n> \n> 2019-11-27 20:04:53.640 CST [14244] ERROR: canceling statement due to statement timeout\n> 2019-11-27 20:04:53.640 CST [14244] CONTEXT: block 2314 of relation t\n> 2019-11-27 20:04:53.640 CST [14244] STATEMENT: vacuum t;\n> \n> I tried to use BufferGetTag() to avoid using a 2ndary structure, but fails if\n> the buffer is not pinned.\n\nNo problem from me to add more context directly in lazy_scan_heap().\n\n+ // errcallback.arg = (void *) &buf;\nThe first patch is full of that, please make sure to clean it up. \n\nLet's keep also the message simple, still I think that it should be a\nbit more explicative:\n1) Let's the schema name, and quote the relation name.\n2) Let's mention the scanning (or vacuuming) operation.\n\nSo I would suggest the following instead:\n\"while scanning block %u of relation \\\"%s.%s\\\"\" \n--\nMichael",
"msg_date": "Wed, 11 Dec 2019 21:15:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 09:15:07PM +0900, Michael Paquier wrote:\n> On Fri, Dec 06, 2019 at 10:23:25AM -0600, Justin Pryzby wrote:\n> > Find attached updated patch:\n> > . Use structure to include relation name.\n> > . Split into a separate patch rename of \"StringInfoData buf\".\n> > \n> > 2019-11-27 20:04:53.640 CST [14244] ERROR: canceling statement due to statement timeout\n> > 2019-11-27 20:04:53.640 CST [14244] CONTEXT: block 2314 of relation t\n> > 2019-11-27 20:04:53.640 CST [14244] STATEMENT: vacuum t;\n> > \n> > I tried to use BufferGetTag() to avoid using a 2ndary structure, but fails if\n> > the buffer is not pinned.\n> \n> No problem from me to add more context directly in lazy_scan_heap().\n\nDo you mean without a callback ? I think that's necessary, since the IO errors\nwould happen within ReadBufferExtended, but we don't want to polute that with\nerrcontext. And cannot call errcontext on its own:\nFATAL: errstart was not called\n\n> So I would suggest the following instead:\n> \"while scanning block %u of relation \\\"%s.%s\\\"\" \n\nDone in the attached.",
"msg_date": "Wed, 11 Dec 2019 08:36:48 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On 2019-Dec-11, Justin Pryzby wrote:\n\n> @@ -635,6 +644,15 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> \telse\n> \t\tskipping_blocks = false;\n> \n> +\t/* Setup error traceback support for ereport() */\n> +\terrcallback.callback = vacuum_error_callback;\n> +\tcbarg.relname = relname;\n> +\tcbarg.relnamespace = get_namespace_name(RelationGetNamespace(onerel));\n> +\tcbarg.blkno = 0; /* Not known yet */\n\nShouldn't you use InvalidBlockNumber for this initialization?\n\n> @@ -658,6 +676,8 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> \n> \t\tpgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno);\n> \n> +\t\tcbarg.blkno = blkno;\n\nI would put this before pgstat_progress_update_param, just out of\nparanoia.\n\n> @@ -817,7 +837,6 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> \n> \t\tbuf = ReadBufferExtended(onerel, MAIN_FORKNUM, blkno,\n> \t\t\t\t\t\t\t\t RBM_NORMAL, vac_strategy);\n> -\n> \t\t/* We need buffer cleanup lock so that we can prune HOT chains. */\n> \t\tif (!ConditionalLockBufferForCleanup(buf))\n> \t\t{\n\nLose this hunk?\n\n> @@ -2354,3 +2376,15 @@ heap_page_is_all_visible(Relation rel, Buffer buf,\n> \n> \treturn all_visible;\n> }\n> +\n> +/*\n> + * Error context callback for errors occurring during vacuum.\n> + */\n> +static void\n> +vacuum_error_callback(void *arg)\n> +{\n> +\tvacuum_error_callback_arg *cbarg = arg;\n> +\n> +\terrcontext(\"while scanning block %u of relation \\\"%s.%s\\\"\",\n> +\t\t\tcbarg->blkno, cbarg->relnamespace, cbarg->relname);\n> +}\n\nI would put this function around line 1512 (just after lazy_scan_heap)\nrather than at bottom of file. (And move its prototype accordingly, to\nline 156.) Or do you intend that this is going to be used for\nlazy_vacuum_heap too? 
Maybe it should.\n\nPatch looks good to me otherwise.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 11 Dec 2019 12:33:53 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "Hi,\n\nThanks for working on this!\n\nOn 2019-12-11 08:36:48 -0600, Justin Pryzby wrote:\n> On Wed, Dec 11, 2019 at 09:15:07PM +0900, Michael Paquier wrote:\n> > On Fri, Dec 06, 2019 at 10:23:25AM -0600, Justin Pryzby wrote:\n> > > Find attached updated patch:\n> > > . Use structure to include relation name.\n> > > . Split into a separate patch rename of \"StringInfoData buf\".\n> > > \n> > > 2019-11-27 20:04:53.640 CST [14244] ERROR: canceling statement due to statement timeout\n> > > 2019-11-27 20:04:53.640 CST [14244] CONTEXT: block 2314 of relation t\n> > > 2019-11-27 20:04:53.640 CST [14244] STATEMENT: vacuum t;\n> > > \n> > > I tried to use BufferGetTag() to avoid using a 2ndary structure, but fails if\n> > > the buffer is not pinned.\n\nThe tag will not add all that informative details, because the\nrelfilenode isn't easily mappable to the table name or such.\n\n\n> diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n> index 043ebb4..9376989 100644\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> @@ -138,6 +138,12 @@ typedef struct LVRelStats\n> \tbool\t\tlock_waiter_detected;\n> } LVRelStats;\n> \n> +typedef struct\n> +{\n> +\tchar *relname;\n> +\tchar *relnamespace;\n> +\tBlockNumber blkno;\n> +} vacuum_error_callback_arg;\n\nHm, wonder if could be worthwhile to not use a separate struct here, but\ninstead extend one of the existing structs to contain the necessary\ninformation. Or perhaps have one new struct with all the necessary\ninformation. 
There's already quite a few places that do\nget_namespace_name(), for example.\n\n\n\n> @@ -524,6 +531,8 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> \t\tPROGRESS_VACUUM_MAX_DEAD_TUPLES\n> \t};\n> \tint64\t\tinitprog_val[3];\n> +\tErrorContextCallback errcallback;\n> +\tvacuum_error_callback_arg cbarg;\n\nNot a fan of \"cbarg\", too generic.\n\n> \tpg_rusage_init(&ru0);\n> \n> @@ -635,6 +644,15 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> \telse\n> \t\tskipping_blocks = false;\n> \n> +\t/* Setup error traceback support for ereport() */\n> +\terrcallback.callback = vacuum_error_callback;\n> +\tcbarg.relname = relname;\n> +\tcbarg.relnamespace = get_namespace_name(RelationGetNamespace(onerel));\n> +\tcbarg.blkno = 0; /* Not known yet */\n> +\terrcallback.arg = (void *) &cbarg;\n> +\terrcallback.previous = error_context_stack;\n> +\terror_context_stack = &errcallback;\n> +\n> \tfor (blkno = 0; blkno < nblocks; blkno++)\n> \t{\n> \t\tBuffer\t\tbuf;\n> @@ -658,6 +676,8 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> \n> \t\tpgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno);\n> \n> +\t\tcbarg.blkno = blkno;\n> +\n> \t\tif (blkno == next_unskippable_block)\n> \t\t{\n> \t\t\t/* Time to advance next_unskippable_block */\n> @@ -817,7 +837,6 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> \n> \t\tbuf = ReadBufferExtended(onerel, MAIN_FORKNUM, blkno,\n> \t\t\t\t\t\t\t\t RBM_NORMAL, vac_strategy);\n> -\n> \t\t/* We need buffer cleanup lock so that we can prune HOT chains. 
*/\n> \t\tif (!ConditionalLockBufferForCleanup(buf))\n> \t\t{\n> @@ -1388,6 +1407,9 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> \t\t\tRecordPageWithFreeSpace(onerel, blkno, freespace);\n> \t}\n> \n> +\t/* Pop the error context stack */\n> +\terror_context_stack = errcallback.previous;\n> +\n> \t/* report that everything is scanned and vacuumed */\n> \tpgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno);\n> \n> @@ -2354,3 +2376,15 @@ heap_page_is_all_visible(Relation rel, Buffer buf,\n> \n> \treturn all_visible;\n> }\n\nI think this will misattribute errors that happen when in the:\n\t\t/*\n\t\t * If we are close to overrunning the available space for dead-tuple\n\t\t * TIDs, pause and do a cycle of vacuuming before we tackle this page.\n\t\t */\nsection of lazy_scan_heap(). That will\n\na) scan the index, during which we presumably don't want the same error\n context, as it'd be quite misleading: The block that was just scanned\n in the loop isn't actually likely to be the culprit for an index\n problem. And we'd not mention the fact that the problem is occurring\n in the index.\nb) will report the wrong block, when in lazy_vacuum_heap().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 11 Dec 2019 08:54:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
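[Editor's note: the ErrorContextCallback push/pop pattern discussed above can be sketched outside the server. The following stand-alone C model is illustrative only — `ErrCtx`, `VacErrArg`, `report_error`, and the captured-line buffer are stand-ins for the real `ErrorContextCallback`/`ereport()` machinery, not PostgreSQL code.]

```c
#include <stdio.h>

/* Minimal stand-in for PostgreSQL's ErrorContextCallback chain: each
 * frame carries a callback plus an opaque argument, and error reporting
 * walks the chain so every active operation can contribute a CONTEXT
 * line.  All names here are illustrative, not the server's. */
typedef struct ErrCtx
{
	void		(*callback) (void *arg);
	void	   *arg;
	struct ErrCtx *previous;
} ErrCtx;

static ErrCtx *error_context_stack = NULL;

/* argument block, analogous to vacuum_error_callback_arg in the patch */
typedef struct
{
	const char *relname;
	unsigned	blkno;
} VacErrArg;

static char last_context[256];	/* most recent CONTEXT line, kept for inspection */

static void
vacuum_error_cb(void *arg)
{
	VacErrArg  *cbarg = (VacErrArg *) arg;

	snprintf(last_context, sizeof(last_context),
			 "while scanning block %u of relation \"%s\"",
			 cbarg->blkno, cbarg->relname);
	printf("CONTEXT: %s\n", last_context);
}

/* toy ereport(): emit the message, then walk the context stack */
static void
report_error(const char *msg)
{
	ErrCtx	   *frame;

	printf("ERROR: %s\n", msg);
	for (frame = error_context_stack; frame != NULL; frame = frame->previous)
		frame->callback(frame->arg);
}
```

The caller pushes a frame before the block loop, updates `blkno` each iteration, and restores `error_context_stack = frame.previous` on exit — the same shape as the hunks in the patch under discussion.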
{
"msg_contents": "On Wed, Dec 11, 2019 at 12:33:53PM -0300, Alvaro Herrera wrote:\n> On 2019-Dec-11, Justin Pryzby wrote:\n> > + cbarg.blkno = 0; /* Not known yet */\n> Shouldn't you use InvalidBlockNumber for this initialization?\n..\n> > pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno);\n> > + cbarg.blkno = blkno;\n> I would put this before pgstat_progress_update_param, just out of\n> paranoia.\n..\n> Lose this hunk?\n\nAddressed those.\n\n> Or do you intend that this is going to be used for lazy_vacuum_heap too?\n> Maybe it should.\n\nDone in a separate patch.\n\nOn Wed, Dec 11, 2019 at 08:54:25AM -0800, Andres Freund wrote:\n> Hm, wonder if could be worthwhile to not use a separate struct here, but\n> instead extend one of the existing structs to contain the necessary\n> information. Or perhaps have one new struct with all the necessary\n> information. There's already quite a few places that do\n> get_namespace_name(), for example.\n\nDidn't find a better struct to use yet.\n\n> > + vacuum_error_callback_arg cbarg;\n> Not a fan of \"cbarg\", too generic.\n..\n> I think this will misattribute errors that happen when in the:\n\nProbably right. Attached should address it. \n\nOn Wed, Dec 11, 2019 at 08:54:25AM -0800, Andres Freund wrote:\n> > +typedef struct\n> > +{\n> > +\tchar *relname;\n> > +\tchar *relnamespace;\n> > +\tBlockNumber blkno;\n> > +} vacuum_error_callback_arg;\n> \n> Hm, wonder if could be worthwhile to not use a separate struct here, but\n> instead extend one of the existing structs to contain the necessary\n> information. Or perhaps have one new struct with all the necessary\n> information. 
There's already quite a few places that do\n> get_namespace_name(), for example.\n\n> Not a fan of \"cbarg\", too generic.\n\n> I think this will misattribute errors that happen when in the:\n\nI think that's addressed after deduplicating in attached.\n\nDeduplication revealed 2nd progress call, which seems to have been included\nredundantly at c16dc1aca.\n\n- /* Remove tuples from heap */\n- pgstat_progress_update_param(PROGRESS_VACUUM_PHASE,\n- PROGRESS_VACUUM_PHASE_VACUUM_HEAP);\n\nJustin",
"msg_date": "Thu, 12 Dec 2019 21:08:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Dec 12, 2019 at 09:08:31PM -0600, Justin Pryzby wrote:\n> On Wed, Dec 11, 2019 at 08:54:25AM -0800, Andres Freund wrote:\n>> Hm, wonder if could be worthwhile to not use a separate struct here, but\n>> instead extend one of the existing structs to contain the necessary\n>> information. Or perhaps have one new struct with all the necessary\n>> information. There's already quite a few places that do\n>> get_namespace_name(), for example.\n> \n> Didn't find a better struct to use yet.\n\nYes, I am too wondering what Andres has in mind here. It is not like\nyou can use VacuumRelation as the OID of the relation may not have\nbeen stored.\n\n> On Wed, Dec 11, 2019 at 08:54:25AM -0800, Andres Freund wrote:> \n> I think that's addressed after deduplicating in attached.\n> \n> Deduplication revealed 2nd progress call, which seems to have been included\n> redundantly at c16dc1aca.\n> \n> - /* Remove tuples from heap */\n> - pgstat_progress_update_param(PROGRESS_VACUUM_PHASE,\n> - PROGRESS_VACUUM_PHASE_VACUUM_HEAP);\n\nWhat is the purpose of 0001 in the context of this thread? One could\nsay the same about 0002 and 0003. Anyway, you are right with 0002 as\nthe progress value for PROGRESS_VACUUM_PHASE gets updated twice in a\nrow with the same value. So let's clean up that. \n\nThe refactoring in 0003 is interesting, so I would be tempted to merge\nit. Now you have one small issue in it:\n- /*\n- * Forget the now-vacuumed tuples, and press on, but be careful\n- * not to reset latestRemovedXid since we want that value to be\n- * valid.\n- */\n+ lazy_vacuum_heap_index(onerel, vacrelstats, Irel, nindexes, indstats);\n vacrelstats->num_dead_tuples = 0;\n- vacrelstats->num_index_scans++;\nYou are moving this comment within lazy_vacuum_heap_index, but it\napplies to num_dead_tuples and not num_index_scans, no? You should\nkeep the incrementation of num_index_scans within the routine though.\n--\nMichael",
"msg_date": "Fri, 13 Dec 2019 22:28:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 10:28:50PM +0900, Michael Paquier wrote:\n\n>> v4-0001-Rename-buf-to-avoid-shadowing-buf-of-another-type.patch\n>> v4-0002-Remove-redundant-call-to-vacuum-progress.patch\n>> v4-0003-deduplication.patch\n>> v4-0004-vacuum-errcontext-to-show-block-being-processed.patch\n>> v4-0005-add-errcontext-callback-in-lazy_vacuum_heap-too.patch\n\n> What is the purpose of 0001 in the context of this thread? One could\n> say the same about 0002 and 0003. Anyway, you are right with 0002 as\n> the progress value for PROGRESS_VACUUM_PHASE gets updated twice in a\n> row with the same value. So let's clean up that. \n\nIt's related code which I cleaned up before adding new stuff. Not essential,\nthus separate (0002 should be backpatched).\n\n> The refactoring in 0003 is interesting, so I would be tempted to merge\n> it. Now you have one small issue in it:\n> - /*\n> - * Forget the now-vacuumed tuples, and press on, but be careful\n> - * not to reset latestRemovedXid since we want that value to be\n> - * valid.\n> - */\n> + lazy_vacuum_heap_index(onerel, vacrelstats, Irel, nindexes, indstats);\n> vacrelstats->num_dead_tuples = 0;\n> - vacrelstats->num_index_scans++;\n> You are moving this comment within lazy_vacuum_heap_index, but it\n> applies to num_dead_tuples and not num_index_scans, no? You should\n> keep the incrementation of num_index_scans within the routine though.\n\nThank you, fixed.\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581",
"msg_date": "Fri, 13 Dec 2019 16:47:35 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 04:47:35PM -0600, Justin Pryzby wrote:\n> It's related code which I cleaned up before adding new stuff. Not essential,\n> thus separate (0002 should be backpatched).\n\nThe issue just causes some extra work and that's not a bug, so applied\nwithout a backpatch.\n\n>> The refactoring in 0003 is interesting, so I would be tempted to merge\n>> it. Now you have one small issue in it:\n>> - /*\n>> - * Forget the now-vacuumed tuples, and press on, but be careful\n>> - * not to reset latestRemovedXid since we want that value to be\n>> - * valid.\n>> - */\n>> + lazy_vacuum_heap_index(onerel, vacrelstats, Irel, nindexes, indstats);\n>> vacrelstats->num_dead_tuples = 0;\n>> - vacrelstats->num_index_scans++;\n>> You are moving this comment within lazy_vacuum_heap_index, but it\n>> applies to num_dead_tuples and not num_index_scans, no? You should\n>> keep the incrementation of num_index_scans within the routine though.\n> \n> Thank you, fixed.\n\nFor 0003, I think that lazy_vacuum_heap_index() can be confusing as\nthose indexes are unrelated to heap. Why not naming it just\nlazy_vacuum_all_indexes()? The routine should also have a header\ndescribing it.\n--\nMichael",
"msg_date": "Sun, 15 Dec 2019 22:07:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sun, Dec 15, 2019 at 10:07:08PM +0900, Michael Paquier wrote:\n> On Fri, Dec 13, 2019 at 04:47:35PM -0600, Justin Pryzby wrote:\n> > It's related code which I cleaned up before adding new stuff. Not essential,\n> > thus separate (0002 should be backpatched).\n> \n> The issue just causes some extra work and that's not a bug, so applied\n> without a backpatch.\n\nThanks\n\n> For 0003, I think that lazy_vacuum_heap_index() can be confusing as\n> those indexes are unrelated to heap. Why not naming it just\n> lazy_vacuum_all_indexes()? The routine should also have a header\n> describing it.\n\nI named it so because it calls both lazy_vacuum_index\n(\"PROGRESS_VACUUM_PHASE_VACUUM_INDEX\") and\nlazy_vacuum_heap(\"PROGRESS_VACUUM_PHASE_VACUUM_HEAP\")\n\nI suppose you don't think the other way around is better?\nlazy_vacuum_index_heap\n\nJustin\n\n\n",
"msg_date": "Sun, 15 Dec 2019 10:27:12 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sun, Dec 15, 2019 at 10:27:12AM -0600, Justin Pryzby wrote:\n> I named it so because it calls both lazy_vacuum_index\n> (\"PROGRESS_VACUUM_PHASE_VACUUM_INDEX\") and\n> lazy_vacuum_heap(\"PROGRESS_VACUUM_PHASE_VACUUM_HEAP\")\n> \n> I suppose you don't think the other way around is better?\n> lazy_vacuum_index_heap\n\nDunno. Let's see if others have other thoughts on the matter. FWIW,\nI have a long history at naming things in a way others don't like :)\n--\nMichael",
"msg_date": "Mon, 16 Dec 2019 11:49:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "At Mon, 16 Dec 2019 11:49:56 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Sun, Dec 15, 2019 at 10:27:12AM -0600, Justin Pryzby wrote:\n> > I named it so because it calls both lazy_vacuum_index\n> > (\"PROGRESS_VACUUM_PHASE_VACUUM_INDEX\") and\n> > lazy_vacuum_heap(\"PROGRESS_VACUUM_PHASE_VACUUM_HEAP\")\n> > \n> > I suppose you don't think the other way around is better?\n> > lazy_vacuum_index_heap\n> \n> Dunno. Let's see if others have other thoughts on the matter. FWIW,\n> I have a long history at naming things in a way others don't like :)\n\nlazy_vacuum_heap_index() seems confusing to me. I read the name as\nMichael did before looking the above explanation.\n\nlazy_vacuum_heap_and_index() is clearer to me.\nlazy_vacuum_heap_with_index() could also work but I'm not sure it's\nfurther better.\n\nI see some function names like that, and some others that have two\nverbs bonded by \"_and_\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 17 Dec 2019 20:17:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 11:49:56AM +0900, Michael Paquier wrote:\n> On Sun, Dec 15, 2019 at 10:27:12AM -0600, Justin Pryzby wrote:\n> > I named it so because it calls both lazy_vacuum_index\n> > (\"PROGRESS_VACUUM_PHASE_VACUUM_INDEX\") and\n> > lazy_vacuum_heap(\"PROGRESS_VACUUM_PHASE_VACUUM_HEAP\")\n> > \n> > I suppose you don't think the other way around is better?\n> > lazy_vacuum_index_heap\n> \n> Dunno. Let's see if others have other thoughts on the matter. FWIW,\n> I have a long history at naming things in a way others don't like :)\n\nI renamed.\n\nAnd deduplicated two more hunks into a 2nd function.\n\n(I'm also including the changes I mentioned here ... in case anyone cares to\ncomment or review).\nhttps://www.postgresql.org/message-id/20191220171132.GB30414%40telsasoft.com",
"msg_date": "Mon, 23 Dec 2019 19:24:28 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Mon, Dec 23, 2019 at 07:24:28PM -0600, Justin Pryzby wrote:\n> I renamed.\n\nHmm. I have found what was partially itching me for patch 0002, and\nthat's actually the fact that we don't do the error reporting for heap\nwithin lazy_vacuum_heap() because the code relies too much on updating\ntwo progress parameters at the same time, on top of the fact that you\nare mixing multiple concepts with this refactoring. One problem is\nthat if this code is refactored in the future, future callers of\nlazy_vacuum_heap() would miss the update of the progress reporting.\nSplitting things improves also the readability of the code, so\nattached is the refactoring I would do for this portion of the set.\nIt is also more natural to increment num_index_scans when the\nreporting happens on consistency grounds.\n\n(Please note that I have not indented yet the patch.)\n--\nMichael",
"msg_date": "Tue, 24 Dec 2019 13:19:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 01:19:09PM +0900, Michael Paquier wrote:\n> (Please note that I have not indented yet the patch.)\n\nAnd one indentation later, committed this one after an extra lookup as\nof 1ab41a3.\n--\nMichael",
"msg_date": "Thu, 26 Dec 2019 17:06:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 01:19:09PM +0900, Michael Paquier wrote:\n> On Mon, Dec 23, 2019 at 07:24:28PM -0600, Justin Pryzby wrote:\n> > I renamed.\n> \n> Hmm. I have found what was partially itching me for patch 0002, and\n> that's actually the fact that we don't do the error reporting for heap\n> within lazy_vacuum_heap() because the code relies too much on updating\n> two progress parameters at the same time, on top of the fact that you\n> are mixing multiple concepts with this refactoring. One problem is\n> that if this code is refactored in the future, future callers of\n> lazy_vacuum_heap() would miss the update of the progress reporting.\n> Splitting things improves also the readability of the code, so\n> attached is the refactoring I would do for this portion of the set.\n> It is also more natural to increment num_index_scans when the\n\nI agree that's better.\nI don't see any reason why the progress params need to be updated atomically.\nSo rebasified against your patch.",
"msg_date": "Thu, 26 Dec 2019 09:57:04 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Dec 26, 2019 at 10:57 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I agree that's better.\n> I don't see any reason why the progress params need to be updated atomically.\n> So rebasified against your patch.\n\nI am not sure whether it's important enough to make a stink about, but\nit bothers me a bit that this is being dismissed as unimportant. The\nproblem is that, if the updates are not atomic, then somebody might\nsee the data after one has been updated and the other has not yet been\nupdated. The result is that when the phase is\nPROGRESS_VACUUM_PHASE_VACUUM_INDEX, someone reading the information\ncan't tell whether the number of index scans reported is the number\n*previously* performed or the number performed including the one that\njust finished. The race to see the latter state is narrow, so it\nprobably wouldn't come up often, but it does seem like it would be\nconfusing if it did happen.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 28 Dec 2019 19:21:31 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
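[Editor's note: the race Robert describes can be modeled in a few lines of stand-alone C. Everything below is a toy illustration — the arrays, names, and sampling hook are not the server's pgstat machinery. With per-parameter updates, a reader sampling between the two stores can see an incremented scan count while the phase still says index vacuuming; a multi-parameter update publishes both values before any reader-visible sample.]

```c
/* Toy model of progress reporting.  progress[0] holds the phase,
 * progress[1] the index_vacuum_count; history records what a
 * hypothetical concurrent reader would observe after each update. */
enum
{
	PHASE_VACUUM_INDEX = 1,
	PHASE_VACUUM_HEAP = 2
};

static long progress[2];
static long history[8][2];
static int	nsamples;

/* pretend a monitoring backend samples the progress view right now */
static void
reader_samples(void)
{
	history[nsamples][0] = progress[0];
	history[nsamples][1] = progress[1];
	nsamples++;
}

/* per-parameter update: each store is individually reader-visible */
static void
update_param(int idx, long val)
{
	progress[idx] = val;
	reader_samples();
}

/* multi-parameter update: both stores land before readers can look */
static void
update_multi_param(long phase, long count)
{
	progress[0] = phase;
	progress[1] = count;
	reader_samples();
}
```

With `update_param`, the first sample records the inconsistent pair (count already bumped, phase still `PHASE_VACUUM_INDEX`); with `update_multi_param`, only the consistent final pair is ever observable.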
{
"msg_contents": "On Sat, Dec 28, 2019 at 07:21:31PM -0500, Robert Haas wrote:\n> On Thu, Dec 26, 2019 at 10:57 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I agree that's better.\n> > I don't see any reason why the progress params need to be updated atomically.\n> > So rebasified against your patch.\n> \n> I am not sure whether it's important enough to make a stink about, but\n> it bothers me a bit that this is being dismissed as unimportant. The\n> problem is that, if the updates are not atomic, then somebody might\n> see the data after one has been updated and the other has not yet been\n> updated. The result is that when the phase is\n> PROGRESS_VACUUM_PHASE_VACUUM_INDEX, someone reading the information\n> can't tell whether the number of index scans reported is the number\n> *previously* performed or the number performed including the one that\n> just finished. The race to see the latter state is narrow, so it\n> probably wouldn't come up often, but it does seem like it would be\n> confusing if it did happen.\n\nWhat used to be atomic was this:\n\n- hvp_val[0] = PROGRESS_VACUUM_PHASE_VACUUM_HEAP;\n- hvp_val[1] = vacrelstats->num_index_scans + 1;\n\n=> switch from PROGRESS_VACUUM_PHASE_VACUUM INDEX to HEAP and increment\nindex_vacuum_count, which is documented as the \"Number of completed index\nvacuum cycles.\"\n\nNow, it 1) increments the number of completed scans; and, 2) then progresses\nphase to HEAP, so there's a window where the number of completed scans is\nincremented, and it still says VACUUM_INDEX.\n\nPreviously, if it said VACUUM_INDEX, one could assume that index_vacuum_count\nwould increase at least once more, and that's no longer true. 
If someone sees\nVACUUM_INDEX and some NUM_INDEX_VACUUMS, and then later sees VACUUM_HEAP or\nother later stage, with same (maybe final) value of NUM_INDEX_VACUUMS, that's\ndifferent than previous behavior.\n\nIt seems to me that a someone or their tool monitoring pg_stat shouldn't be\nconfused by this change, since:\n1) there's no promise about how high NUM_INDEX_VACUUMS will or won't go; and, \n2) index_vacuum_count didn't do anything strange like decreasing, or increased\nbefore the scans were done; and,\n3) the vacuum can finish at any time, and the monitoring process presumably\nknows that when the PID is gone, it's finished, even if it missed intermediate\nupdates;\n\nThe behavior is different from before, but I think that's ok: the number of\nscans is accurate, and the PHASE is accurate, even though it'll change a moment\nlater.\n\nI see there's similar case here:\n| /* report all blocks vacuumed; and that we're cleaning up */\n| pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_VACUUMED, blkno);\n| pgstat_progress_update_param(PROGRESS_VACUUM_PHASE,\n| PROGRESS_VACUUM_PHASE_INDEX_CLEANUP);\n\nheap_blks_scanned is documented as \"Number of heap blocks SCANNED\", and it\nincrements exactly to heap_blks_total. Would someone be confused if\nheap_blks_scanned==heap_blks_total AND phase=='scanning heap' ? I think they'd\njust expect PHASE to be updated a moment later. (And if it wasn't, I agree they\nshould then be legitimately confused or concerned).\n\nActually, the doc says:\n|If heap_blks_scanned is less than heap_blks_total, the system will return to\n|scanning the heap after this phase is completed; otherwise, it will begin\n|cleaning up indexes AFTER THIS PHASE IS COMPLETED.\n\nI read that to mean that it's okay if heap_blks_scanned==heap_blks_total when\nscanning/vacuuming heap.\n\nJustin\n\n\n",
"msg_date": "Sun, 29 Dec 2019 14:17:47 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number (atomic\n progress update)"
},
{
"msg_contents": "On Thu, Dec 26, 2019 at 09:57:04AM -0600, Justin Pryzby wrote:\n> So rebasified against your patch.\n\nRebased against your patch in master this time.",
"msg_date": "Thu, 2 Jan 2020 10:27:01 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sun, Dec 29, 2019 at 02:17:47PM -0600, Justin Pryzby wrote:\n> The behavior is different from before, but I think that's ok: the number of\n> scans is accurate, and the PHASE is accurate, even though it'll change a moment\n> later.\n\npgstat_progress_update_multi_param() is useful when it comes to update\nmultiple parameters at the same time consistently in a given progress\nphase. For example, in vacuum, when beginning the heap scan, the\nnumber of blocks to scan and the max number of dead tuples has to be\nupdated at the same as the phase name, as things have to be reported\nconsistently, so that's critical to be consistent IMO. Now, in this\ncase, we are discussing about updating a parameter which is related to\nthe index vacuuming phase, while switching at the same time to a\ndifferent phase. I think that splitting both is not confusing here\nbecause the number of times vacuum indexes have been done is unrelated\nto the heap cleanup happening afterwards. On top of that the new code\nis more readable, and future callers of lazy_vacuum_heap() will never\nmiss to update the progress reporting to the new phase.\n\nWhile on it, a \"git grep -n\" is showing me two places where we could\ncare more about being consistent by using the multi-param version of\nprogress reports when beginning a new progress phase:\n- reindex_index()\n- ReindexRelationConcurrently()\n\nOne can also note the switch to PROGRESS_VACUUM_PHASE_INDEX_CLEANUP in \nlazy_scan_heap() but it can be discarded for the same reason as what\nhas been refactored recently with the index vacuuming. \n--\nMichael",
"msg_date": "Mon, 6 Jan 2020 16:31:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number (atomic\n progress update)"
},
{
"msg_contents": "Rebased against 40d964ec997f64227bc0ff5e058dc4a5770a70a9\n\nI moved some unrelated patches to a separate thread (\"vacuum verbose detail logs are unclear\")",
"msg_date": "Sun, 19 Jan 2020 23:41:59 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-19 23:41:59 -0600, Justin Pryzby wrote:\n> /*\n> + * Return whether skipping blocks or not.\n> + * Except when aggressive is set, we want to skip pages that are\n> + * all-visible according to the visibility map, but only when we can skip\n> + * at least SKIP_PAGES_THRESHOLD consecutive pages. Since we're reading\n> + * sequentially, the OS should be doing readahead for us, so there's no\n> + * gain in skipping a page now and then; that's likely to disable\n> + * readahead and so be counterproductive. Also, skipping even a single\n> + * page means that we can't update relfrozenxid, so we only want to do it\n> + * if we can skip a goodly number of pages.\n> + *\n> + * When aggressive is set, we can't skip pages just because they are\n> + * all-visible, but we can still skip pages that are all-frozen, since\n> + * such pages do not need freezing and do not affect the value that we can\n> + * safely set for relfrozenxid or relminmxid.\n> + *\n> + * Before entering the main loop, establish the invariant that\n> + * next_unskippable_block is the next block number >= blkno that we can't\n> + * skip based on the visibility map, either all-visible for a regular scan\n> + * or all-frozen for an aggressive scan. We set it to nblocks if there's\n> + * no such block. We also set up the skipping_blocks flag correctly at\n> + * this stage.\n> + *\n> + * Note: The value returned by visibilitymap_get_status could be slightly\n> + * out-of-date, since we make this test before reading the corresponding\n> + * heap page or locking the buffer. This is OK. If we mistakenly think\n> + * that the page is all-visible or all-frozen when in fact the flag's just\n> + * been cleared, we might fail to vacuum the page. It's easy to see that\n> + * skipping a page when aggressive is not set is not a very big deal; we\n> + * might leave some dead tuples lying around, but the next vacuum will\n> + * find them. 
But even when aggressive *is* set, it's still OK if we miss\n> + * a page whose all-frozen marking has just been cleared. Any new XIDs\n> + * just added to that page are necessarily newer than the GlobalXmin we\n> + * computed, so they'll have no effect on the value to which we can safely\n> + * set relfrozenxid. A similar argument applies for MXIDs and relminmxid.\n> + *\n> + * We will scan the table's last page, at least to the extent of\n> + * determining whether it has tuples or not, even if it should be skipped\n> + * according to the above rules; except when we've already determined that\n> + * it's not worth trying to truncate the table. This avoids having\n> + * lazy_truncate_heap() take access-exclusive lock on the table to attempt\n> + * a truncation that just fails immediately because there are tuples in\n> + * the last page. This is worth avoiding mainly because such a lock must\n> + * be replayed on any hot standby, where it can be disruptive.\n> + */\n\nFWIW, I think we should just flat out delete all this logic, and replace\nit with a few explicit PrefetchBuffer() calls. Just by chance I\nliterally just now sped up a VACUUM by more than a factor of 10, by\nmanually prefetching buffers. 
At least the linux kernel readahead logic\ndoesn't deal well with reading and writing to different locations in the\nsame file, and that's what the ringbuffer pretty invariably leads to for\nworkloads that aren't cached.\n\nPartially so I'll find it when I invariably search for this in the\nfuture:\nselect pg_prewarm(relid, 'buffer', 'main', blocks_done, least(blocks_done+100000, blocks_total)) from pg_stat_progress_create_index where phase = 'building index: scanning table' and datid = (SELECT oid FROM pg_database WHERE datname = current_database());\n\\watch 0.5\n\n\n> From 623c725c8add0670b28cdbfceca1824ba5b0647c Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Thu, 12 Dec 2019 20:54:37 -0600\n> Subject: [PATCH v9 2/3] vacuum errcontext to show block being processed\n> \n> As requested here.\n> https://www.postgresql.org/message-id/20190807235154.erbmr4o4bo6vgnjv%40alap3.anarazel.de\n> ---\n> src/backend/access/heap/vacuumlazy.c | 37 ++++++++++++++++++++++++++++++++++++\n> 1 file changed, 37 insertions(+)\n> \n> diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n> index 9849685..c96abdf 100644\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> @@ -289,6 +289,12 @@ typedef struct LVRelStats\n> \tbool\t\tlock_waiter_detected;\n> } LVRelStats;\n> \n> +typedef struct\n> +{\n> +\tchar *relname;\n> +\tchar *relnamespace;\n> +\tBlockNumber blkno;\n> +} vacuum_error_callback_arg;\n> \n> /* A few variables that don't seem worth passing around as parameters */\n> static int\televel = -1;\n> @@ -358,6 +364,7 @@ static void end_parallel_vacuum(Relation *Irel, IndexBulkDeleteResult **stats,\n> \t\t\t\t\t\t\t\tLVParallelState *lps, int nindexes);\n> static LVSharedIndStats *get_indstats(LVShared *lvshared, int n);\n> static bool skip_parallel_vacuum_index(Relation indrel, LVShared *lvshared);\n> +static void vacuum_error_callback(void *arg);\n> \n> \n> /*\n> @@ 
-803,6 +810,8 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> \t\tPROGRESS_VACUUM_MAX_DEAD_TUPLES\n> \t};\n> \tint64\t\tinitprog_val[3];\n> +\tErrorContextCallback errcallback;\n> +\tvacuum_error_callback_arg errcbarg;\n> \n> \tpg_rusage_init(&ru0);\n> \n> @@ -879,6 +888,15 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> \tnext_unskippable_block = 0;\n> \tskipping_blocks = skip_blocks(onerel, params, &next_unskippable_block, nblocks, &vmbuffer, aggressive);\n> \n> +\t/* Setup error traceback support for ereport() */\n> +\terrcbarg.relnamespace = get_namespace_name(RelationGetNamespace(onerel));\n> +\terrcbarg.relname = relname;\n> +\terrcbarg.blkno = InvalidBlockNumber; /* Not known yet */\n> +\terrcallback.callback = vacuum_error_callback;\n> +\terrcallback.arg = (void *) &errcbarg;\n> +\terrcallback.previous = error_context_stack;\n> +\terror_context_stack = &errcallback;\n> +\n> \tfor (blkno = 0; blkno < nblocks; blkno++)\n> \t{\n> \t\tBuffer\t\tbuf;\n> @@ -900,6 +918,8 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> #define FORCE_CHECK_PAGE() \\\n> \t\t(blkno == nblocks - 1 && should_attempt_truncation(params, vacrelstats))\n> \n> +\t\terrcbarg.blkno = blkno;\n> +\n> \t\tpgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno);\n> \n> \t\tif (blkno == next_unskippable_block)\n> @@ -966,8 +986,11 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> \t\t\t}\n> \n> \t\t\t/* Work on all the indexes, then the heap */\n> +\t\t\t/* Don't use the errcontext handler outside this function */\n> +\t\t\terror_context_stack = errcallback.previous;\n> \t\t\tlazy_vacuum_all_indexes(onerel, Irel, indstats,\n> \t\t\t\t\t\t\t\t\tvacrelstats, lps, nindexes);\n> +\t\t\terror_context_stack = &errcallback;\n> \n> \t\t\t/* Remove tuples from heap */\n> \t\t\tlazy_vacuum_heap(onerel, vacrelstats);\n> @@ -1575,6 +1598,9 @@ 
lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> \t\t\tRecordPageWithFreeSpace(onerel, blkno, freespace);\n> \t}\n\nAlternatively we could push another context for each index inside\nlazy_vacuum_all_indexes(). There's been plenty bugs in indexes\ntriggering problems, so that could be worthwhile.\n\n\n> +/*\n> + * Error context callback for errors occurring during vacuum.\n> + */\n> +static void\n> +vacuum_error_callback(void *arg)\n> +{\n> +\tvacuum_error_callback_arg *cbarg = arg;\n> +\terrcontext(\"while scanning block %u of relation \\\"%s.%s\\\"\",\n> +\t\t\tcbarg->blkno, cbarg->relnamespace, cbarg->relname);\n> +}\n> -- \n> 2.7.4\n> \n\nI think it might be useful to expand the message to explain which part\nof vacuuming this is about. But I'd leave that for a later patch.\n\n\n> From 27a0c085d8d965252ebb8eb2e47362f27fa4203e Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Thu, 12 Dec 2019 20:34:03 -0600\n> Subject: [PATCH v9 3/3] add errcontext callback in lazy_vacuum_heap, too\n> \n> ---\n> src/backend/access/heap/vacuumlazy.c | 14 ++++++++++++++\n> 1 file changed, 14 insertions(+)\n> \n> diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n> index c96abdf..f380437 100644\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> @@ -1639,6 +1639,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> \n> \t\t/* Remove tuples from heap */\n> \t\tlazy_vacuum_heap(onerel, vacrelstats);\n> +\t\terror_context_stack = errcallback.previous;\n> \t}\n\nThis I do not get. I didn't yet fully wake up, so I might just be slow?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Jan 2020 11:11:20 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-12 21:08:31 -0600, Justin Pryzby wrote:\n> On Wed, Dec 11, 2019 at 12:33:53PM -0300, Alvaro Herrera wrote:\n> On Wed, Dec 11, 2019 at 08:54:25AM -0800, Andres Freund wrote:\n> > Hm, wonder if could be worthwhile to not use a separate struct here, but\n> > instead extend one of the existing structs to contain the necessary\n> > information. Or perhaps have one new struct with all the necessary\n> > information. There's already quite a few places that do\n> > get_namespace_name(), for example.\n> \n> Didn't find a better struct to use yet.\n\nI was thinking that you could just use LVRelStats.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Jan 2020 11:13:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Mon, Jan 20, 2020 at 11:11:20AM -0800, Andres Freund wrote:\n> This I do not get. I didn't yet fully wake up, so I might just be slow?\n\nIt was needlessly cute at the cost of clarity (meant to avoid setting\nerror_context_stack in lazy_scan_heap and again immediately on its return).\n\nOn Mon, Jan 20, 2020 at 11:13:05AM -0800, Andres Freund wrote:\n> I was thinking that you could just use LVRelStats.\n\nDone.\n\nOn Mon, Jan 20, 2020 at 11:11:20AM -0800, Andres Freund wrote:\n> Alternatively we could push another context for each index inside\n> lazy_vacuum_all_indexes(). There's been plenty bugs in indexes\n> triggering problems, so that could be worthwhile.\n\nDid this too, although I'm not sure what kind of errors it'd find (?)\n\nI considered elimating other uses of RelationGetRelationName, or looping over\nvacrelstats->blkno instead of local blkno. I did that in an additional patch\n(that will cause conflicts if you try to apply it, due to other vacuum patch in\nthis branch).\n\nCREATE TABLE t AS SELECT generate_series(1,99999)a;\n\npostgres=# SET client_min_messages=debug;SET statement_timeout=39; VACUUM (VERBOSE, PARALLEL 0) t;\nINFO: vacuuming \"public.t\"\n2020-01-20 15:46:14.993 CST [20056] ERROR: canceling statement due to statement timeout\n2020-01-20 15:46:14.993 CST [20056] CONTEXT: while scanning block 211 of relation \"public.t\"\n2020-01-20 15:46:14.993 CST [20056] STATEMENT: VACUUM (VERBOSE, PARALLEL 0) t;\nERROR: canceling statement due to statement timeout\nCONTEXT: while scanning block 211 of relation \"public.t\"\n\nSELECT 'CREATE INDEX ON t(a)' FROM generate_series(1,11);\\gexec\nUPDATE t SET a=a+1;\n\npostgres=# SET client_min_messages=debug;SET statement_timeout=99; VACUUM (VERBOSE, PARALLEL 0) t;\nINFO: vacuuming \"public.t\"\nDEBUG: \"t_a_idx\": vacuuming index\n2020-01-20 15:47:36.338 CST [20139] ERROR: canceling statement due to statement timeout\n2020-01-20 15:47:36.338 CST [20139] CONTEXT: while vacuuming relation 
\"public.t_a_idx\"\n2020-01-20 15:47:36.338 CST [20139] STATEMENT: VACUUM (VERBOSE, PARALLEL 0) t;\nERROR: canceling statement due to statement timeout\nCONTEXT: while vacuuming relation \"public.t_a_idx\"\n\nI haven't found a good way of exercising the \"vacuuming heap\" path, though.",
"msg_date": "Mon, 20 Jan 2020 15:49:29 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, 21 Jan 2020 at 06:49, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Jan 20, 2020 at 11:11:20AM -0800, Andres Freund wrote:\n> > This I do not get. I didn't yet fully wake up, so I might just be slow?\n>\n> It was needlessly cute at the cost of clarity (meant to avoid setting\n> error_context_stack in lazy_scan_heap and again immediately on its return).\n>\n> On Mon, Jan 20, 2020 at 11:13:05AM -0800, Andres Freund wrote:\n> > I was thinking that you could just use LVRelStats.\n>\n> Done.\n>\n> On Mon, Jan 20, 2020 at 11:11:20AM -0800, Andres Freund wrote:\n> > Alternatively we could push another context for each index inside\n> > lazy_vacuum_all_indexes(). There's been plenty bugs in indexes\n> > triggering problems, so that could be worthwhile.\n>\n> Did this too, although I'm not sure what kind of errors it'd find (?)\n>\n> I considered elimating other uses of RelationGetRelationName, or looping over\n> vacrelstats->blkno instead of local blkno. I did that in an additional patch\n> (that will cause conflicts if you try to apply it, due to other vacuum patch in\n> this branch).\n>\n> CREATE TABLE t AS SELECT generate_series(1,99999)a;\n>\n> postgres=# SET client_min_messages=debug;SET statement_timeout=39; VACUUM (VERBOSE, PARALLEL 0) t;\n> INFO: vacuuming \"public.t\"\n> 2020-01-20 15:46:14.993 CST [20056] ERROR: canceling statement due to statement timeout\n> 2020-01-20 15:46:14.993 CST [20056] CONTEXT: while scanning block 211 of relation \"public.t\"\n> 2020-01-20 15:46:14.993 CST [20056] STATEMENT: VACUUM (VERBOSE, PARALLEL 0) t;\n> ERROR: canceling statement due to statement timeout\n> CONTEXT: while scanning block 211 of relation \"public.t\"\n>\n> SELECT 'CREATE INDEX ON t(a)' FROM generate_series(1,11);\\gexec\n> UPDATE t SET a=a+1;\n>\n> postgres=# SET client_min_messages=debug;SET statement_timeout=99; VACUUM (VERBOSE, PARALLEL 0) t;\n> INFO: vacuuming \"public.t\"\n> DEBUG: \"t_a_idx\": vacuuming index\n> 2020-01-20 
15:47:36.338 CST [20139] ERROR: canceling statement due to statement timeout\n> 2020-01-20 15:47:36.338 CST [20139] CONTEXT: while vacuuming relation \"public.t_a_idx\"\n> 2020-01-20 15:47:36.338 CST [20139] STATEMENT: VACUUM (VERBOSE, PARALLEL 0) t;\n> ERROR: canceling statement due to statement timeout\n> CONTEXT: while vacuuming relation \"public.t_a_idx\"\n>\n> I haven't found a good way of exercising the \"vacuuming heap\" path, though.\n\nSome of them conflicts with the current HEAD(62c9b52231). Please rebase them.\n\nRegards,\n\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 21 Jan 2020 15:11:35 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 03:11:35PM +0900, Masahiko Sawada wrote:\n> Some of them conflicts with the current HEAD(62c9b52231). Please rebase them.\n\nSorry, it's due to other vacuum patch in this branch.\n\nNew patches won't conflict, except for the 0005, so I renamed it for cfbot.\nIf it's deemed to be useful, I'll make a separate branch for the others.",
"msg_date": "Tue, 21 Jan 2020 14:49:33 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On 2020-Jan-21, Justin Pryzby wrote:\n\n> On Tue, Jan 21, 2020 at 03:11:35PM +0900, Masahiko Sawada wrote:\n> > Some of them conflicts with the current HEAD(62c9b52231). Please rebase them.\n> \n> Sorry, it's due to other vacuum patch in this branch.\n> \n> New patches won't conflict, except for the 0005, so I renamed it for cfbot.\n> If it's deemed to be useful, I'll make a separate branch for the others.\n\nI think you have to have some other patches applied before these,\nbecause in the context lines for some of the hunks there are\ndouble-slash comments.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 21 Jan 2020 17:54:59 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 05:54:59PM -0300, Alvaro Herrera wrote:\n> > On Tue, Jan 21, 2020 at 03:11:35PM +0900, Masahiko Sawada wrote:\n> > > Some of them conflicts with the current HEAD(62c9b52231). Please rebase them.\n> > \n> > Sorry, it's due to other vacuum patch in this branch.\n> > \n> > New patches won't conflict, except for the 0005, so I renamed it for cfbot.\n> > If it's deemed to be useful, I'll make a separate branch for the others.\n> \n> I think you have to have some other patches applied before these,\n> because in the context lines for some of the hunks there are\n> double-slash comments.\n\nAnd I knew that, so (re)tested that the extracted patches would apply, but it\nlooks like maybe the patch checker is less smart (or more strict) than git, so\nit didn't work after all.\n\n3rd attempt (sorry for the noise).",
"msg_date": "Tue, 21 Jan 2020 19:17:52 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On 2020-Jan-21, Justin Pryzby wrote:\n\n> And I knew that, so (re)tested that the extracted patches would apply, but it\n> looks like maybe the patch checker is less smart (or more strict) than git, so\n> it didn't work after all.\n\nHonestly, I think we should be scared of a patch applier that ignored\ndifferences in context lines. After all, that's why those context lines\nare there -- so that they provide additional location cues for the lines\nbeing modified. If you allow random other lines to be there, you could\nbe inserting stuff in arbitrarily erroneous places.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 21 Jan 2020 23:19:54 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Wed, 22 Jan 2020 at 10:17, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Jan 21, 2020 at 05:54:59PM -0300, Alvaro Herrera wrote:\n> > > On Tue, Jan 21, 2020 at 03:11:35PM +0900, Masahiko Sawada wrote:\n> > > > Some of them conflicts with the current HEAD(62c9b52231). Please rebase them.\n> > >\n> > > Sorry, it's due to other vacuum patch in this branch.\n> > >\n> > > New patches won't conflict, except for the 0005, so I renamed it for cfbot.\n> > > If it's deemed to be useful, I'll make a separate branch for the others.\n> >\n> > I think you have to have some other patches applied before these,\n> > because in the context lines for some of the hunks there are\n> > double-slash comments.\n>\n> And I knew that, so (re)tested that the extracted patches would apply, but it\n> looks like maybe the patch checker is less smart (or more strict) than git, so\n> it didn't work after all.\n\nThank you for updating the patches!\n\nI'm not sure it's worth to have patches separately but I could apply\nall patches cleanly. 
Here is my comments for the code applied all\npatches:\n\n1.\n+ /* Used by the error callback */\n+ char *relname;\n+ char *relnamespace;\n+ BlockNumber blkno;\n+ int stage; /* 0: scan heap; 1: vacuum heap; 2: vacuum index */\n+} LVRelStats;\n\n* The comment should be updated as we use both relname and\nrelnamespace for ereporting.\n\n* We can leave both blkno and stage that are used only for error\ncontext reporting put both relname and relnamespace to top of\nLVRelStats.\n\n* The 'stage' is missing to support index cleanup.\n\n* Maybe we need a comment for 'blkno'.\n\n2.\n@@ -748,8 +742,31 @@ lazy_scan_heap(Relation onerel, VacuumParams\n*params, LVRelStats *vacrelstats,\n vacrelstats->scanned_pages = 0;\n vacrelstats->tupcount_pages = 0;\n vacrelstats->nonempty_pages = 0;\n+\n+ /* Setup error traceback support for ereport() */\n+ vacrelstats->relnamespace =\nget_namespace_name(RelationGetNamespace(onerel));\n+ vacrelstats->relname = RelationGetRelationName(onerel);\n+ vacrelstats->blkno = InvalidBlockNumber; /* Not known yet */\n+ vacrelstats->stage = 0;\n+\n+ errcallback.callback = vacuum_error_callback;\n+ errcallback.arg = (void *) vacrelstats;\n+ errcallback.previous = error_context_stack;\n+ error_context_stack = &errcallback;\n+\n vacrelstats->latestRemovedXid = InvalidTransactionId;\n\n+ if (aggressive)\n+ ereport(elevel,\n+ (errmsg(\"aggressively vacuuming \\\"%s.%s\\\"\",\n+ vacrelstats->relnamespace,\n+ vacrelstats->relname)));\n+ else\n+ ereport(elevel,\n+ (errmsg(\"vacuuming \\\"%s.%s\\\"\",\n+ vacrelstats->relnamespace,\n+ vacrelstats->relname)));\n\n* It seems to me strange that only initialization of latestRemovedXid\nis done after error callback initialization.\n\n* Maybe we can initialize relname and relnamespace in heap_vacuum_rel\nrather than in lazy_scan_heap. BTW variables of vacrelstats are\ninitialized different places: some of them in heap_vacuum_rel and\nothers in lazy_scan_heap. 
I think we can gather those that can be\ninitialized at that time to heap_vacuum_rel.\n\n3.\n /* Work on all the indexes, then the heap */\n+ /* Don't use the errcontext handler outside this function */\n+ error_context_stack = errcallback.previous;\n lazy_vacuum_all_indexes(onerel, Irel, indstats,\n vacrelstats, lps, nindexes);\n-\n /* Remove tuples from heap */\n lazy_vacuum_heap(onerel, vacrelstats);\n+ error_context_stack = &errcallback;\n\nMaybe we can do like:\n\n /* Pop the error context stack */\n error_context_stack = errcallback.previous;\n\n /* Work on all the indexes, then the heap */\n lazy_vacuum_all_indexes(onerel, Irel, indstats,\n vacrelstats, lps, nindexes);\n\n /* Remove tuples from heap */\n lazy_vacuum_heap(onerel, vacrelstats);\n\n /* Push again the error context of heap scan */\n error_context_stack = &errcallback;\n\n4.\n+ /* Setup error traceback support for ereport() */\n+ /* vacrelstats->relnamespace already set */\n+ /* vacrelstats->relname already set */\n\nThese comments can be merged like:\n\n/*\n * Setup error traceback for ereport(). 
Both relnamespace and\n * relname are already set.\n */\n\n5.\n+ /* Setup error traceback support for ereport() */\n+ vacrelstats.relnamespace = get_namespace_name(RelationGetNamespace(indrel));\n+ vacrelstats.relname = RelationGetRelationName(indrel);\n+ vacrelstats.blkno = InvalidBlockNumber; /* Not used */\n\nWhy do we need to initialize blkno in spite of not using?\n\n6.\n+/*\n+ * Error context callback for errors occurring during vacuum.\n+ */\n+static void\n+vacuum_error_callback(void *arg)\n+{\n+ LVRelStats *cbarg = arg;\n+\n+ if (cbarg->stage == 0)\n+ errcontext(_(\"while scanning block %u of relation \\\"%s.%s\\\"\"),\n+ cbarg->blkno, cbarg->relnamespace, cbarg->relname);\n+ else if (cbarg->stage == 1)\n+ errcontext(_(\"while vacuuming block %u of relation \\\"%s.%s\\\"\"),\n+ cbarg->blkno, cbarg->relnamespace, cbarg->relname);\n+ else if (cbarg->stage == 2)\n+ errcontext(_(\"while vacuuming relation \\\"%s.%s\\\"\"),\n+ cbarg->relnamespace, cbarg->relname);\n+}\n\n* 'vacrelstats' would be a better name than 'cbarg'.\n\n* In index vacuum, how about \"while vacuuming index \\\"%s.%s\\\"\"?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 22 Jan 2020 17:37:06 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Mon, Jan 20, 2020 at 11:11:20AM -0800, Andres Freund wrote:\n> > @@ -966,8 +986,11 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> > \t\t\t/* Work on all the indexes, then the heap */\n> > +\t\t\t/* Don't use the errcontext handler outside this function */\n> > +\t\t\terror_context_stack = errcallback.previous;\n> > \t\t\tlazy_vacuum_all_indexes(onerel, Irel, indstats,\n> > \t\t\t\t\t\t\t\t\tvacrelstats, lps, nindexes);\n> > +\t\t\terror_context_stack = &errcallback;\n> \n> Alternatively we could push another context for each index inside\n> lazy_vacuum_all_indexes(). There's been plenty bugs in indexes\n> triggering problems, so that could be worthwhile.\n\nIs the callback for index vacuum useful without a block number?\n\nFYI, I have another patch which would add DEBUG output before each stage, which\nwould be just as much information, and without needing to use a callback.\nIt's 0004 here:\n\nhttps://www.postgresql.org/message-id/20200121134934.GY26045%40telsasoft.com\n@@ -1752,9 +1753,12 @@ lazy_vacuum_all_indexes(Relation onerel, Relation *Irel,\n {\n int idx;\n\n- for (idx = 0; idx < nindexes; idx++)\n+ for (idx = 0; idx < nindexes; idx++) {\n+ ereport(DEBUG1, (errmsg(\"\\\"%s\\\": vacuuming index\",\n+ RelationGetRelationName(Irel[idx]))));\n lazy_vacuum_index(Irel[idx], &stats[idx], vacrelstats->dead_tuples,\n\n\n\n",
"msg_date": "Wed, 22 Jan 2020 17:17:26 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "Thanks for reviewing\n\nOn Wed, Jan 22, 2020 at 05:37:06PM +0900, Masahiko Sawada wrote:\n> I'm not sure it's worth to have patches separately but I could apply\n\nThe later patches expanded on the initial scope, and to my understanding the\n1st callback is desirable but the others are maybe still at question.\n\n> 1. * The comment should be updated as we use both relname and\n> relnamespace for ereporting.\n> \n> * We can leave both blkno and stage that are used only for error\n> context reporting put both relname and relnamespace to top of\n> LVRelStats.\n\nUpdated in the 0005 - still not sure if that patch will be desirable, though.\nAlso, I realized that it's we cannot use vacrelstats->blkno instead of local\nblkno variable, since vacrelstats->blkno is used simultaneously by scan heap\nand vacuum heap).\n\n> * The 'stage' is missing to support index cleanup.\n\nBut the callback isn't used during index cleanup ?\n\n> * It seems to me strange that only initialization of latestRemovedXid\n> is done after error callback initialization.\n\nYes, that was silly - I thought it was just an artifact of diff's inability to\nexpress rearranged code any better.\n\n> * Maybe we can initialize relname and relnamespace in heap_vacuum_rel\n> rather than in lazy_scan_heap. BTW variables of vacrelstats are\n> initialized different places: some of them in heap_vacuum_rel and\n> others in lazy_scan_heap. I think we can gather those that can be\n> initialized at that time to heap_vacuum_rel.\n\nI think that's already true ? But topic for a separate patch, if not.\n\n> 3. Maybe we can do like:\n\ndone\n\n> 4. These comments can be merged like:\n\ndone\n\n> 5. Why do we need to initialize blkno in spite of not using?\n\nremoved\n\n> 6.\n> + cbarg->blkno, cbarg->relnamespace, cbarg->relname);\n> * 'vacrelstats' would be a better name than 'cbarg'.\n\nReally? 
I'd prefer to avoid repeating long variable three times:\n\n vacrelstats->blkno, vacrelstats->relnamespace, vacrelstats->relname);\n\n> * In index vacuum, how about \"while vacuuming index \\\"%s.%s\\\"\"?\n\nYes. I'm still unclear if this is useful without a block number, or why it\nwouldn't be better to write DEBUG1 log with index name before vacuuming each.\n\nJustin",
"msg_date": "Fri, 24 Jan 2020 13:21:55 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-22 17:17:26 -0600, Justin Pryzby wrote:\n> On Mon, Jan 20, 2020 at 11:11:20AM -0800, Andres Freund wrote:\n> > > @@ -966,8 +986,11 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> > > \t\t\t/* Work on all the indexes, then the heap */\n> > > +\t\t\t/* Don't use the errcontext handler outside this function */\n> > > +\t\t\terror_context_stack = errcallback.previous;\n> > > \t\t\tlazy_vacuum_all_indexes(onerel, Irel, indstats,\n> > > \t\t\t\t\t\t\t\t\tvacrelstats, lps, nindexes);\n> > > +\t\t\terror_context_stack = &errcallback;\n> > \n> > Alternatively we could push another context for each index inside\n> > lazy_vacuum_all_indexes(). There's been plenty bugs in indexes\n> > triggering problems, so that could be worthwhile.\n> \n> Is the callback for index vacuum useful without a block number?\n\nYea, it is. Without at least that context I don't think we even will\nreliably know which index we're dealing with in case of an error.\n\n\n> FYI, I have another patch which would add DEBUG output before each stage, which\n> would be just as much information, and without needing to use a callback.\n> It's 0004 here:\n\nI don't think that is equivalent at all. With a context I see the\ncontext in the log in case of an error. With a DEBUG message I need to\nbe able to reproduce the error (without even knowing which relation\netc).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Jan 2020 12:25:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-20 15:49:29 -0600, Justin Pryzby wrote:\n> On Mon, Jan 20, 2020 at 11:11:20AM -0800, Andres Freund wrote:\n> On Mon, Jan 20, 2020 at 11:11:20AM -0800, Andres Freund wrote:\n> > Alternatively we could push another context for each index inside\n> > lazy_vacuum_all_indexes(). There's been plenty bugs in indexes\n> > triggering problems, so that could be worthwhile.\n> \n> Did this too, although I'm not sure what kind of errors it'd find (?)\n\nWhat do you mean with \"kind of errors\"? We had index corruptions that\ncaused index vacuuming to fail, and there was no way to diagnose which\ntable / index it was so far?\n\n\n> postgres=# SET client_min_messages=debug;SET statement_timeout=99; VACUUM (VERBOSE, PARALLEL 0) t;\n> INFO: vacuuming \"public.t\"\n> DEBUG: \"t_a_idx\": vacuuming index\n> 2020-01-20 15:47:36.338 CST [20139] ERROR: canceling statement due to statement timeout\n> 2020-01-20 15:47:36.338 CST [20139] CONTEXT: while vacuuming relation \"public.t_a_idx\"\n> 2020-01-20 15:47:36.338 CST [20139] STATEMENT: VACUUM (VERBOSE, PARALLEL 0) t;\n> ERROR: canceling statement due to statement timeout\n> CONTEXT: while vacuuming relation \"public.t_a_idx\"\n\nIt'd be a bit nicer if it said index \"public.t_a_idx\" for relation \"public.t\".\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Jan 2020 12:29:38 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sun, Jan 26, 2020 at 12:29:38PM -0800, Andres Freund wrote:\n> > postgres=# SET client_min_messages=debug;SET statement_timeout=99; VACUUM (VERBOSE, PARALLEL 0) t;\n> > INFO: vacuuming \"public.t\"\n> > DEBUG: \"t_a_idx\": vacuuming index\n> > 2020-01-20 15:47:36.338 CST [20139] ERROR: canceling statement due to statement timeout\n> > 2020-01-20 15:47:36.338 CST [20139] CONTEXT: while vacuuming relation \"public.t_a_idx\"\n> > 2020-01-20 15:47:36.338 CST [20139] STATEMENT: VACUUM (VERBOSE, PARALLEL 0) t;\n> > ERROR: canceling statement due to statement timeout\n> > CONTEXT: while vacuuming relation \"public.t_a_idx\"\n> \n> It'd be a bit nicer if it said index \"public.t_a_idx\" for relation \"public.t\".\n\nI think that tips the scale in favour of making vacrelstats a global.\nI added that as a 1st patch, and squished the callback patches into one.\n\nAlso, it seems to me we shouldn't repeat the namespace of the index *and* its\ntable. I tried looking for consistency here:\n\ngrep -r '\\\\\"%s.%s\\\\\"' --incl='*.c' |grep '\\\\\"%s\\\\\"'\nsrc/backend/commands/cluster.c: (errmsg(\"clustering \\\"%s.%s\\\" using index scan on \\\"%s\\\"\",\nsrc/backend/access/heap/vacuumlazy.c: errcontext(_(\"while vacuuming index \\\"%s\\\" on table \\\"%s.%s\\\"\"),\n\ngrep -r 'index \\\\\".* table \\\\\"' --incl='*.c'\nsrc/backend/catalog/index.c: (errmsg(\"building index \\\"%s\\\" on table \\\"%s\\\" serially\",\nsrc/backend/catalog/index.c: (errmsg_plural(\"building index \\\"%s\\\" on table \\\"%s\\\" with request for %d parallel worker\",\nsrc/backend/catalog/index.c: \"building index \\\"%s\\\" on table \\\"%s\\\" with request for %d parallel workers\",\nsrc/backend/catalog/catalog.c: errmsg(\"index \\\"%s\\\" does not belong to table \\\"%s\\\"\",\nsrc/backend/commands/indexcmds.c: (errmsg(\"%s %s will create implicit index \\\"%s\\\" for table \\\"%s\\\"\",\nsrc/backend/commands/tablecmds.c: errmsg(\"index \\\"%s\\\" for table \\\"%s\\\" does not 
exist\",\nsrc/backend/commands/tablecmds.c: errmsg(\"index \\\"%s\\\" for table \\\"%s\\\" does not exist\",\nsrc/backend/commands/tablecmds.c: errdetail(\"The index \\\"%s\\\" belongs to a constraint in table \\\"%s\\\" but no constraint exists for index \\\"%s\\\".\",\nsrc/backend/commands/cluster.c: errmsg(\"index \\\"%s\\\" for table \\\"%s\\\" does not exist\",\nsrc/backend/parser/parse_utilcmd.c: errmsg(\"index \\\"%s\\\" does not belong to table \\\"%s\\\"\",",
"msg_date": "Sun, 26 Jan 2020 23:38:13 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "It occured to me that there's an issue with sharing vacrelstats between\nscan/vacuum, since blkno and stage are set by the heap/index vacuum routines,\nbut not reset on their return to heap scan. Not sure if we should reset them,\nor go back to using a separate struct, like it was here:\nhttps://www.postgresql.org/message-id/20200120054159.GT26045%40telsasoft.com\n\nOn Sun, Jan 26, 2020 at 11:38:13PM -0600, Justin Pryzby wrote:\n> From 592a77554f99b5ff9035c55bf19a79a1443ae59e Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Thu, 12 Dec 2019 20:54:37 -0600\n> Subject: [PATCH v14 2/3] vacuum errcontext to show block being processed\n> \n> As requested here.\n> https://www.postgresql.org/message-id/20190807235154.erbmr4o4bo6vgnjv%40alap3.anarazel.de\n> ---\n> src/backend/access/heap/vacuumlazy.c | 85 +++++++++++++++++++++++++++++++++++-\n> 1 file changed, 84 insertions(+), 1 deletion(-)\n> \n> diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n> index 114428b..a62dc79 100644\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> @@ -290,8 +290,14 @@ typedef struct LVRelStats\n> \tint\t\t\tnum_index_scans;\n> \tTransactionId latestRemovedXid;\n> \tbool\t\tlock_waiter_detected;\n> -} LVRelStats;\n> \n> +\t/* Used by the error callback */\n> +\tchar\t\t*relname;\n> +\tchar \t\t*relnamespace;\n> +\tBlockNumber blkno;\n> +\tchar \t\t*indname;\n> +\tint\t\t\tstage;\t/* 0: scan heap; 1: vacuum heap; 2: vacuum index */\n> +} LVRelStats;\n> \n> /* A few variables that don't seem worth passing around as parameters */\n> static int\televel = -1;\n> @@ -360,6 +366,7 @@ static void end_parallel_vacuum(Relation *Irel, IndexBulkDeleteResult **stats,\n> \t\t\t\t\t\t\t\tLVParallelState *lps, int nindexes);\n> static LVSharedIndStats *get_indstats(LVShared *lvshared, int n);\n> static bool skip_parallel_vacuum_index(Relation indrel, LVShared *lvshared);\n> 
+static void vacuum_error_callback(void *arg);\n> \n> \n> /*\n> @@ -721,6 +728,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params,\n> \t\tPROGRESS_VACUUM_MAX_DEAD_TUPLES\n> \t};\n> \tint64\t\tinitprog_val[3];\n> +\tErrorContextCallback errcallback;\n> \n> \tpg_rusage_init(&ru0);\n> \n> @@ -867,6 +875,17 @@ lazy_scan_heap(Relation onerel, VacuumParams *params,\n> \telse\n> \t\tskipping_blocks = false;\n> \n> +\t/* Setup error traceback support for ereport() */\n> +\tvacrelstats.relnamespace = get_namespace_name(RelationGetNamespace(onerel));\n> +\tvacrelstats.relname = relname;\n> +\tvacrelstats.blkno = InvalidBlockNumber; /* Not known yet */\n> +\tvacrelstats.stage = 0;\n> +\n> +\terrcallback.callback = vacuum_error_callback;\n> +\terrcallback.arg = (void *) &vacrelstats;\n> +\terrcallback.previous = error_context_stack;\n> +\terror_context_stack = &errcallback;\n> +\n> \tfor (blkno = 0; blkno < nblocks; blkno++)\n> \t{\n> \t\tBuffer\t\tbuf;\n> @@ -888,6 +907,8 @@ lazy_scan_heap(Relation onerel, VacuumParams *params,\n> #define FORCE_CHECK_PAGE() \\\n> \t\t(blkno == nblocks - 1 && should_attempt_truncation(params))\n> \n> +\t\tvacrelstats.blkno = blkno;\n> +\n> \t\tpgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno);\n> \n> \t\tif (blkno == next_unskippable_block)\n> @@ -984,12 +1005,18 @@ lazy_scan_heap(Relation onerel, VacuumParams *params,\n> \t\t\t\tvmbuffer = InvalidBuffer;\n> \t\t\t}\n> \n> +\t\t\t/* Pop the error context stack */\n> +\t\t\terror_context_stack = errcallback.previous;\n> +\n> \t\t\t/* Work on all the indexes, then the heap */\n> \t\t\tlazy_vacuum_all_indexes(onerel, Irel, indstats,\n> \t\t\t\t\t\t\t\t\tlps, nindexes);\n> \t\t\t/* Remove tuples from heap */\n> \t\t\tlazy_vacuum_heap(onerel);\n> \n> +\t\t\t/* Replace error context while continuing heap scan */\n> +\t\t\terror_context_stack = &errcallback;\n> +\n> \t\t\t/*\n> \t\t\t * Forget the now-vacuumed tuples, and press on, but be careful\n> \t\t\t * not to reset 
latestRemovedXid since we want that value to be\n> @@ -1593,6 +1620,9 @@ lazy_scan_heap(Relation onerel, VacuumParams *params,\n> \t\t\tRecordPageWithFreeSpace(onerel, blkno, freespace);\n> \t}\n> \n> +\t/* Pop the error context stack */\n> +\terror_context_stack = errcallback.previous;\n> +\n> \t/* report that everything is scanned and vacuumed */\n> \tpgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno);\n> \n> @@ -1768,11 +1798,24 @@ lazy_vacuum_heap(Relation onerel)\n> \tint\t\t\tnpages;\n> \tPGRUsage\tru0;\n> \tBuffer\t\tvmbuffer = InvalidBuffer;\n> +\tErrorContextCallback errcallback;\n> \n> \t/* Report that we are now vacuuming the heap */\n> \tpgstat_progress_update_param(PROGRESS_VACUUM_PHASE,\n> \t\t\t\t\t\t\t\t PROGRESS_VACUUM_PHASE_VACUUM_HEAP);\n> \n> +\t/*\n> +\t * Setup error traceback support for ereport()\n> +\t * ->relnamespace and ->relname are already set\n> +\t */\n> +\tvacrelstats.blkno = InvalidBlockNumber; /* Not known yet */\n> +\tvacrelstats.stage = 1;\n> +\n> +\terrcallback.callback = vacuum_error_callback;\n> +\terrcallback.arg = (void *) &vacrelstats;\n> +\terrcallback.previous = error_context_stack;\n> +\terror_context_stack = &errcallback;\n> +\n> \tpg_rusage_init(&ru0);\n> \tnpages = 0;\n> \n> @@ -1787,6 +1830,7 @@ lazy_vacuum_heap(Relation onerel)\n> \t\tvacuum_delay_point();\n> \n> \t\ttblk = ItemPointerGetBlockNumber(&vacrelstats.dead_tuples->itemptrs[tupindex]);\n> +\t\tvacrelstats.blkno = tblk;\n> \t\tbuf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk, RBM_NORMAL,\n> \t\t\t\t\t\t\t\t vac_strategy);\n> \t\tif (!ConditionalLockBufferForCleanup(buf))\n> @@ -1807,6 +1851,9 @@ lazy_vacuum_heap(Relation onerel)\n> \t\tnpages++;\n> \t}\n> \n> +\t/* Pop the error context stack */\n> +\terror_context_stack = errcallback.previous;\n> +\n> \tif (BufferIsValid(vmbuffer))\n> \t{\n> \t\tReleaseBuffer(vmbuffer);\n> @@ -2314,6 +2361,8 @@ lazy_vacuum_index(Relation indrel, IndexBulkDeleteResult **stats,\n> \tIndexVacuumInfo 
ivinfo;\n> \tconst char *msg;\n> \tPGRUsage\tru0;\n> +\tErrorContextCallback errcallback;\n> +\tLVRelStats\terrcbarg; /* Used for error callback, only */\n> \n> \tpg_rusage_init(&ru0);\n> \n> @@ -2325,10 +2374,24 @@ lazy_vacuum_index(Relation indrel, IndexBulkDeleteResult **stats,\n> \tivinfo.num_heap_tuples = reltuples;\n> \tivinfo.strategy = vac_strategy;\n> \n> +\t/* Setup error traceback support for ereport() */\n> +\terrcbarg.relnamespace = get_namespace_name(RelationGetNamespace(indrel));\n> +\terrcbarg.indname = RelationGetRelationName(indrel);\n> +\terrcbarg.relname = vacrelstats.relname;\n> +\terrcbarg.stage = 2;\n> +\n> +\terrcallback.callback = vacuum_error_callback;\n> +\terrcallback.arg = (void *) &errcbarg;\n> +\terrcallback.previous = error_context_stack;\n> +\terror_context_stack = &errcallback;\n> +\n> \t/* Do bulk deletion */\n> \t*stats = index_bulk_delete(&ivinfo, *stats,\n> \t\t\t\t\t\t\t lazy_tid_reaped, (void *) dead_tuples);\n> \n> +\t/* Pop the error context stack */\n> +\terror_context_stack = errcallback.previous;\n> +\n> \tif (IsParallelWorker())\n> \t\tmsg = gettext_noop(\"scanned index \\\"%s\\\" to remove %d row versions by parallel vacuum worker\");\n> \telse\n> @@ -3371,3 +3434,23 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)\n> \ttable_close(onerel, ShareUpdateExclusiveLock);\n> \tpfree(stats);\n> }\n> +\n> +/*\n> + * Error context callback for errors occurring during vacuum.\n> + */\n> +static void\n> +vacuum_error_callback(void *arg)\n> +{\n> +\tLVRelStats *cbarg = arg;\n> +\n> +\tif (cbarg->stage == 0)\n> +\t\terrcontext(_(\"while scanning block %u of relation \\\"%s.%s\\\"\"),\n> +\t\t\t\tcbarg->blkno, cbarg->relnamespace, cbarg->relname);\n> +\telse if (cbarg->stage == 1)\n> +\t\terrcontext(_(\"while vacuuming block %u of relation \\\"%s.%s\\\"\"),\n> +\t\t\t\tcbarg->blkno, cbarg->relnamespace, cbarg->relname);\n> +\telse if (cbarg->stage == 2)\n> +\t\terrcontext(_(\"while vacuuming index \\\"%s\\\" on table 
\\\"%s.%s\\\"\"),\n> +\t\t\t\tcbarg->indname, cbarg->relnamespace, cbarg->relname);\n> +\n> +}\n> -- \n> 2.7.4\n> \n\n\n",
"msg_date": "Mon, 27 Jan 2020 00:14:38 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Mon, 27 Jan 2020 at 14:38, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sun, Jan 26, 2020 at 12:29:38PM -0800, Andres Freund wrote:\n> > > postgres=# SET client_min_messages=debug;SET statement_timeout=99; VACUUM (VERBOSE, PARALLEL 0) t;\n> > > INFO: vacuuming \"public.t\"\n> > > DEBUG: \"t_a_idx\": vacuuming index\n> > > 2020-01-20 15:47:36.338 CST [20139] ERROR: canceling statement due to statement timeout\n> > > 2020-01-20 15:47:36.338 CST [20139] CONTEXT: while vacuuming relation \"public.t_a_idx\"\n> > > 2020-01-20 15:47:36.338 CST [20139] STATEMENT: VACUUM (VERBOSE, PARALLEL 0) t;\n> > > ERROR: canceling statement due to statement timeout\n> > > CONTEXT: while vacuuming relation \"public.t_a_idx\"\n> >\n> > It'd be a bit nicer if it said index \"public.t_a_idx\" for relation \"public.t\".\n>\n> I think that tips the scale in favour of making vacrelstats a global.\n> I added that as a 1st patch, and squished the callback patches into one.\n\nHmm I don't think it's a good idea to make vacrelstats global. If we\nwant to display the relation name and its index name in error context\nwe might want to define a new struct dedicated for error context\nreporting. That is it has blkno, stage and relation name and schema\nname for both table and index and then we set these variables of\ncallback argument before performing a vacuum phase. We don't change\nLVRelStats at all.\n\nAlthough the patch replaces get_namespace_name and\nRelationGetRelationName but we use namespace name of relation at only\ntwo places and almost ereport/elog messages use only relation name\ngotten by RelationGetRelationName which is a macro to access the\nrelation name in Relation struct. So I think adding relname to\nLVRelStats would not be a big benefit. 
Similarly, adding the table\nnamespace to LVRelStats would avoid calling\nget_namespace_name, though I'm not sure it's worth having because\nit's not expected to be called many times.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 27 Jan 2020 15:59:58 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Mon, Jan 27, 2020 at 03:59:58PM +0900, Masahiko Sawada wrote:\n> On Mon, 27 Jan 2020 at 14:38, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Sun, Jan 26, 2020 at 12:29:38PM -0800, Andres Freund wrote:\n> > > > CONTEXT: while vacuuming relation \"public.t_a_idx\"\n> > >\n> > > It'd be a bit nicer if it said index \"public.t_a_idx\" for relation \"public.t\".\n> >\n> > I think that tips the scale in favour of making vacrelstats a global.\n> > I added that as a 1st patch, and squished the callback patches into one.\n> \n> Hmm I don't think it's a good idea to make vacrelstats global. If we\n> want to display the relation name and its index name in error context\n> we might want to define a new struct dedicated for error context\n> reporting. That is it has blkno, stage and relation name and schema\n> name for both table and index and then we set these variables of\n> callback argument before performing a vacuum phase. We don't change\n> LVRelStats at all.\n\nOn Mon, Jan 27, 2020 at 12:14:38AM -0600, Justin Pryzby wrote:\n> It occured to me that there's an issue with sharing vacrelstats between\n> scan/vacuum, since blkno and stage are set by the heap/index vacuum routines,\n> but not reset on their return to heap scan. 
Not sure if we should reset them,\nor go back to using a separate struct, like it was here:\nhttps://www.postgresql.org/message-id/20200120054159.GT26045%40telsasoft.com\n\nI went back to this original way of doing it.\nThe parallel vacuum patch made it harder to pass the table around :(\nAnd it has to be tested separately:\n\n| SET statement_timeout=0; DROP TABLE t; CREATE TABLE t AS SELECT generate_series(1,99999)a; CREATE INDEX ON t(a); CREATE INDEX ON t(a); UPDATE t SET a=1+a; SET statement_timeout=99;VACUUM(VERBOSE, PARALLEL 2) t;\n\nI had to allocate space for the table name within the LVShared struct, not just\na pointer, otherwise it would variously crash or fail to output the index name.\nI think pointers can't be passed to a parallel process except using some\nheavyweight thing like shm_toc_...\n\nI guess the callback could also take the index relid instead of name, and use\nsomething like IndexGetRelation().\n\n> Although the patch replaces get_namespace_name and\n> RelationGetRelationName but we use namespace name of relation at only\n> two places and almost ereport/elog messages use only relation name\n> gotten by RelationGetRelationName which is a macro to access the\n> relation name in Relation struct. So I think adding relname to\n> LVRelStats would not be a big benefit. Similarly, adding table\n> namespace to LVRelStats would be good to avoid calling\n> get_namespace_name whereas I'm not sure it's worth to have it because\n> it's expected not to be really many times.\n\nRight, I only tried that to save a few LOC and maybe make shorter lines.\nIt's not important so I'll drop that patch.",
"msg_date": "Mon, 27 Jan 2020 16:50:18 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, 28 Jan 2020 at 07:50, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Jan 27, 2020 at 03:59:58PM +0900, Masahiko Sawada wrote:\n> > On Mon, 27 Jan 2020 at 14:38, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > On Sun, Jan 26, 2020 at 12:29:38PM -0800, Andres Freund wrote:\n> > > > > CONTEXT: while vacuuming relation \"public.t_a_idx\"\n> > > >\n> > > > It'd be a bit nicer if it said index \"public.t_a_idx\" for relation \"public.t\".\n> > >\n> > > I think that tips the scale in favour of making vacrelstats a global.\n> > > I added that as a 1st patch, and squished the callback patches into one.\n> >\n> > Hmm I don't think it's a good idea to make vacrelstats global. If we\n> > want to display the relation name and its index name in error context\n> > we might want to define a new struct dedicated for error context\n> > reporting. That is it has blkno, stage and relation name and schema\n> > name for both table and index and then we set these variables of\n> > callback argument before performing a vacuum phase. We don't change\n> > LVRelStats at all.\n>\n> On Mon, Jan 27, 2020 at 12:14:38AM -0600, Justin Pryzby wrote:\n> > It occured to me that there's an issue with sharing vacrelstats between\n> > scan/vacuum, since blkno and stage are set by the heap/index vacuum routines,\n> > but not reset on their return to heap scan. 
Not sure if we should reset them,\n> > or go back to using a separate struct, like it was here:\n> > https://www.postgresql.org/message-id/20200120054159.GT26045%40telsasoft.com\n>\n> I went back to this, original, way of doing it.\n> The parallel vacuum patch made it harder to pass the table around :(\n> And has to be separately tested:\n>\n> | SET statement_timeout=0; DROP TABLE t; CREATE TABLE t AS SELECT generate_series(1,99999)a; CREATE INDEX ON t(a); CREATE INDEX ON t(a); UPDATE t SET a=1+a; SET statement_timeout=99;VACUUM(VERBOSE, PARALLEL 2) t;\n>\n> I had to allocate space for the table name within the LVShared struct, not just\n> a pointer, otherwise it would variously crash or fail to output the index name.\n> I think pointers can't be passed to parallel process except using some\n> heavyweight thing like shm_toc_...\n>\n> I guess the callback could also take the index relid instead of name, and use\n> something like IndexGetRelation().\n>\n> > Although the patch replaces get_namespace_name and\n> > RelationGetRelationName but we use namespace name of relation at only\n> > two places and almost ereport/elog messages use only relation name\n> > gotten by RelationGetRelationName which is a macro to access the\n> > relation name in Relation struct. So I think adding relname to\n> > LVRelStats would not be a big benefit. Similarly, adding table\n> > namespace to LVRelStats would be good to avoid calling\n> > get_namespace_name whereas I'm not sure it's worth to have it because\n> > it's expected not to be really many times.\n>\n> Right, I only tried that to save a few LOC and maybe make shorter lines.\n> It's not important so I'll drop that patch.\n\nThank you for updating the patch. Here is some review comments:\n\n1.\n+typedef struct\n+{\n+ char *relnamespace;\n+ char *relname;\n+ char *indname; /* If vacuuming index */\n\nI think \"Non-null if vacuuming index\" is better. 
And tablename is\nbetter than relname for accuracy?\n\n2.\n+ BlockNumber blkno;\n+ int stage; /* 0: scan heap; 1: vacuum heap; 2: vacuum index */\n+} vacuum_error_callback_arg;\n\nWhy do we not support index cleanup phase?\n\n3.\n /* Work on all the indexes, then the heap */\n lazy_vacuum_all_indexes(onerel, Irel, indstats,\n vacrelstats, lps, nindexes);\n-\n /* Remove tuples from heap */\n lazy_vacuum_heap(onerel, vacrelstats);\n\nI think it's an unnecessary removal.\n\n4.\n static void\n lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)\n {\n int tupindex;\n int npages;\n PGRUsage ru0;\n Buffer vmbuffer = InvalidBuffer;\n+ ErrorContextCallback errcallback;\n+ vacuum_error_callback_arg errcbarg;\n\n /* Report that we are now vacuuming the heap */\n pgstat_progress_update_param(PROGRESS_VACUUM_PHASE,\n PROGRESS_VACUUM_PHASE_VACUUM_HEAP);\n\n+ /*\n+ * Setup error traceback support for ereport()\n+ * ->relnamespace and ->relname are already set\n+ */\n+ errcbarg.blkno = InvalidBlockNumber; /* Not known yet */\n+ errcbarg.stage = 1;\n\nrelnamespace and relname of errcbarg is not set as it is initialized\nin this function.\n\n5.\n@@ -177,6 +177,7 @@ typedef struct LVShared\n * the lazy vacuum.\n */\n Oid relid;\n+ char relname[NAMEDATALEN]; /* tablename, used for error callback */\n\nHmm I think it's not a good idea to have LVShared have relname because\nthe parallel vacuum worker being able to know the table name by oid\nand it consumes DSM memory. To pass the relation name down to\nlazy_vacuum_index I thought to add new argument relname to some\nfunctions but in parallel vacuum case there are multiple functions\nuntil we reach lazy_vacuum_index. So I think it doesn't make sense to\nadd a new argument to all those functions. How about getting relation\nname from index relation? That is, in lazy_vacuum_index we can get\ntable oid from the index relation by IndexGetRelation() and therefore\nwe can get the table name which is in palloc'd memory. 
That way we\ndon't need to add relname to any existing struct such as LVRelStats\nand LVShared.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 2 Feb 2020 10:45:12 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "Thanks for reviewing again\n\nOn Sun, Feb 02, 2020 at 10:45:12AM +0900, Masahiko Sawada wrote:\n> Thank you for updating the patch. Here is some review comments:\n> \n> 1.\n> +typedef struct\n> +{\n> + char *relnamespace;\n> + char *relname;\n> + char *indname; /* If vacuuming index */\n> \n> I think \"Non-null if vacuuming index\" is better.\n\nActually it's undefined garbage (not NULL) if not vacuuming index.\n\n> And tablename is better than relname for accuracy?\n\nThe existing code uses relname, so I left that, since it's strange to\nstart using tablename and then write things like:\n\n| errcbarg.tblname = relname;\n...\n| errcontext(_(\"while scanning block %u of relation \\\"%s.%s\\\"\"),\n| cbarg->blkno, cbarg->relnamespace, cbarg->tblname);\n\nAlso, mat views can be vacuumed.\n\n> 2.\n> + BlockNumber blkno;\n> + int stage; /* 0: scan heap; 1: vacuum heap; 2: vacuum index */\n> +} vacuum_error_callback_arg;\n> \n> Why do we not support index cleanup phase?\n\nThe patch started out just handling scan_heap. The added vacuum_heap. Then\nadded vacuum_index. Now, I've added index cleanup.\n\n> 4.\n> + /*\n> + * Setup error traceback support for ereport()\n> + * ->relnamespace and ->relname are already set\n> + */\n> + errcbarg.blkno = InvalidBlockNumber; /* Not known yet */\n> + errcbarg.stage = 1;\n> \n> relnamespace and relname of errcbarg is not set as it is initialized\n> in this function.\n\nThanks. 
That's an oversight from switching back to local vars instead of\nLVRelStats while updating the patch while out of town..\n\nI don't know how to consistently test the vacuum_heap case, but rechecked it just now.\n\npostgres=# SET client_min_messages=debug; SET statement_timeout=0; UPDATE t SET a=1+a; SET statement_timeout=150; VACUUM(VERBOSE, PARALLEL 1) t;\n...\n2020-02-01 23:11:06.482 CST [26609] ERROR: canceling statement due to statement timeout\n2020-02-01 23:11:06.482 CST [26609] CONTEXT: while vacuuming block 33 of relation \"public.t\"\n\n> 5.\n> @@ -177,6 +177,7 @@ typedef struct LVShared\n> * the lazy vacuum.\n> */\n> Oid relid;\n> + char relname[NAMEDATALEN]; /* tablename, used for error callback */\n> \n> How about getting relation\n> name from index relation? That is, in lazy_vacuum_index we can get\n> table oid from the index relation by IndexGetRelation() and therefore\n> we can get the table name which is in palloc'd memory. That way we\n> don't need to add relname to any existing struct such as LVRelStats\n> and LVShared.\n\nSee attached\n\nAlso, I think we shouldn't show a block number if it's \"Invalid\", to avoid\nsaying \"while vacuuming block 4294967295 of relation ...\"\n\nFor now, I made it not show any errcontext at all in that case.",
"msg_date": "Sun, 2 Feb 2020 00:02:59 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sun, 2 Feb 2020 at 15:03, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Thanks for reviewing again\n>\n> On Sun, Feb 02, 2020 at 10:45:12AM +0900, Masahiko Sawada wrote:\n> > Thank you for updating the patch. Here is some review comments:\n> >\n> > 1.\n> > +typedef struct\n> > +{\n> > + char *relnamespace;\n> > + char *relname;\n> > + char *indname; /* If vacuuming index */\n> >\n> > I think \"Non-null if vacuuming index\" is better.\n>\n> Actually it's undefined garbage (not NULL) if not vacuuming index.\n\nSo how about something like \"set index name only during vacuuming\nindex\". My point is that the current comment seems to be unclear to me\nwhat describing.\n\n>\n> > And tablename is better than relname for accuracy?\n>\n> The existing code uses relname, so I left that, since it's strange to\n> start using tablename and then write things like:\n>\n> | errcbarg.tblname = relname;\n> ...\n> | errcontext(_(\"while scanning block %u of relation \\\"%s.%s\\\"\"),\n> | cbarg->blkno, cbarg->relnamespace, cbarg->tblname);\n>\n> Also, mat views can be vacuumed.\n\nok, agreed.\n\n>\n> > 2.\n> > + BlockNumber blkno;\n> > + int stage; /* 0: scan heap; 1: vacuum heap; 2: vacuum index */\n> > +} vacuum_error_callback_arg;\n> >\n> > Why do we not support index cleanup phase?\n>\n> The patch started out just handling scan_heap. The added vacuum_heap. Then\n> added vacuum_index. Now, I've added index cleanup.\n>\n> > 4.\n> > + /*\n> > + * Setup error traceback support for ereport()\n> > + * ->relnamespace and ->relname are already set\n> > + */\n> > + errcbarg.blkno = InvalidBlockNumber; /* Not known yet */\n> > + errcbarg.stage = 1;\n> >\n> > relnamespace and relname of errcbarg is not set as it is initialized\n> > in this function.\n>\n> Thanks. 
That's an oversight from switching back to local vars instead of\n> LVRelStats while updating the patch while out of town..\n>\n> I don't know how to consistently test the vacuum_heap case, but rechecked it just now.\n>\n> postgres=# SET client_min_messages=debug; SET statement_timeout=0; UPDATE t SET a=1+a; SET statement_timeout=150; VACUUM(VERBOSE, PARALLEL 1) t;\n> ...\n> 2020-02-01 23:11:06.482 CST [26609] ERROR: canceling statement due to statement timeout\n> 2020-02-01 23:11:06.482 CST [26609] CONTEXT: while vacuuming block 33 of relation \"public.t\"\n>\n> > 5.\n> > @@ -177,6 +177,7 @@ typedef struct LVShared\n> > * the lazy vacuum.\n> > */\n> > Oid relid;\n> > + char relname[NAMEDATALEN]; /* tablename, used for error callback */\n> >\n> > How about getting relation\n> > name from index relation? That is, in lazy_vacuum_index we can get\n> > table oid from the index relation by IndexGetRelation() and therefore\n> > we can get the table name which is in palloc'd memory. That way we\n> > don't need to add relname to any existing struct such as LVRelStats\n> > and LVShared.\n>\n> See attached\n>\n> Also, I think we shouldn't show a block number if it's \"Invalid\", to avoid\n> saying \"while vacuuming block 4294967295 of relation ...\"\n>\n> For now, I made it not show any errcontext at all in that case.\n\nThank you for updating the patch!\n\nHere is the comment for v16 patch:\n\n1.\n+ ErrorContextCallback errcallback = { error_context_stack,\nvacuum_error_callback, &errcbarg, };\n\nI think it's better to initialize individual fields because we might\nneed to fix it as well when fields of ErrorContextCallback are\nchanged.\n\n2.\n+ /* Replace error context while continuing heap scan */\n+ error_context_stack = &errcallback;\n\n /*\n * Forget the now-vacuumed tuples, and press on, but be careful\n * not to reset latestRemovedXid since we want that value to be\n * valid.\n */\n dead_tuples->num_tuples = 0;\n\n /*\n * Vacuum the Free Space Map to make newly-freed 
 space visible on\n * upper-level FSM pages. Note we have not yet processed blkno.\n */\n FreeSpaceMapVacuumRange(onerel, next_fsm_block_to_vacuum, blkno);\n next_fsm_block_to_vacuum = blkno;\n\n /* Report that we are once again scanning the heap */\n pgstat_progress_update_param(PROGRESS_VACUUM_PHASE,\n PROGRESS_VACUUM_PHASE_SCAN_HEAP);\n }\n\nI think we can set the error context for heap scan again after\nthe free space map vacuum finishes, maybe after reporting the new phase.\nOtherwise the user will get confused if an error occurs during\nthe free space map vacuum. And I think the comment is unclear; how about\n\"Set the error context for heap scan again\"?\n\n3.\n+ if (cbarg->blkno!=InvalidBlockNumber)\n+ errcontext(_(\"while scanning block %u of relation \\\"%s.%s\\\"\"),\n+ cbarg->blkno, cbarg->relnamespace, cbarg->relname);\n\nWe can use the BlockNumberIsValid macro instead.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 4 Feb 2020 13:58:20 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Feb 04, 2020 at 01:58:20PM +0900, Masahiko Sawada wrote:\n> Here is the comment for v16 patch:\n> \n> 2.\n> I think we can set the error context for heap scan again after\n> freespace map vacuum finishing, maybe after reporting the new phase.\n> Otherwise the user will get confused if an error occurs during\n> freespace map vacuum. And I think the comment is unclear, how about\n> \"Set the error context fro heap scan again\"?\n\nGood point\n\n> 3.\n> + if (cbarg->blkno!=InvalidBlockNumber)\n> + errcontext(_(\"while scanning block %u of relation \\\"%s.%s\\\"\"),\n> + cbarg->blkno, cbarg->relnamespace, cbarg->relname);\n> \n> We can use BlockNumberIsValid macro instead.\n\nThanks. See attached, now squished together patches.\n\nI added functions to initialize the callbacks, so error handling is out of the\nway and minimally distract from the rest of vacuum.",
"msg_date": "Fri, 7 Feb 2020 19:01:07 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sat, 8 Feb 2020 at 10:01, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Feb 04, 2020 at 01:58:20PM +0900, Masahiko Sawada wrote:\n> > Here is the comment for v16 patch:\n> >\n> > 2.\n> > I think we can set the error context for heap scan again after\n> > freespace map vacuum finishing, maybe after reporting the new phase.\n> > Otherwise the user will get confused if an error occurs during\n> > freespace map vacuum. And I think the comment is unclear, how about\n> > \"Set the error context fro heap scan again\"?\n>\n> Good point\n>\n> > 3.\n> > + if (cbarg->blkno!=InvalidBlockNumber)\n> > + errcontext(_(\"while scanning block %u of relation \\\"%s.%s\\\"\"),\n> > + cbarg->blkno, cbarg->relnamespace, cbarg->relname);\n> >\n> > We can use BlockNumberIsValid macro instead.\n>\n> Thanks. See attached, now squished together patches.\n>\n> I added functions to initialize the callbacks, so error handling is out of the\n> way and minimally distract from the rest of vacuum.\n\nThank you for updating the patch! Here is the review comments:\n\n1.\n+static void vacuum_error_callback(void *arg);\n+static void init_error_context_heap(ErrorContextCallback\n*errcallback, vacuum_error_callback_arg *errcbarg, Relation onerel,\nint phase);\n+static void init_error_context_index(ErrorContextCallback\n*errcallback, vacuum_error_callback_arg *errcbarg, Relation indrel,\nint phase);\n\nYou need to add a newline to follow the limit line lengths so that the\ncode is readable in an 80-column window. 
Or please run pgindent.\n\n2.\n+/* Initialize error context for heap operations */\n+static void\n+init_error_context_heap(ErrorContextCallback *errcallback,\nvacuum_error_callback_arg *errcbarg, Relation onerel, int phase)\n+{\n+ errcbarg->relnamespace = get_namespace_name(RelationGetNamespace(onerel));\n+ errcbarg->relname = RelationGetRelationName(onerel);\n+ errcbarg->indname = NULL; /* Not used for heap */\n+ errcbarg->blkno = InvalidBlockNumber; /* Not known yet */\n+ errcbarg->phase = phase;\n+\n+ errcallback->callback = vacuum_error_callback;\n+ errcallback->arg = errcbarg;\n+ errcallback->previous = error_context_stack;\n+ error_context_stack = errcallback;\n+}\n\nI think that making initialization process of errcontext argument a\nfunction is good. But maybe we can merge these two functions into one.\ninit_error_context_heap and init_error_context_index actually don't\nonly initialize the callback arguments but also push the vacuum\nerrcallback, in spite of the function name having 'init'. Also I think\nit might be better to only initialize the callback arguments in this\nfunction and to set errcallback by caller, rather than to wrap pushing\nerrcallback by a function. 
How about the following function\ninitializing the vacuum callback arguments?\n\nstatic void\ninit_vacuum_error_callback_arg(vacuum_error_callback_arg *errcbarg,\nRelation rel, int phase)\n{\n errcbarg->relnamespace = get_namespace_name(RelationGetNamespace(onerel));\n errcbarg->blkno = InvalidBlockNumber;\n errcbarg->phase = phase;\n\n switch (phase) {\n case PROGRESS_VACUUM_PHASE_SCAN_HEAP:\n case PROGRESS_VACUUM_PHASE_VACUUM_HEAP:\n errcbarg->relname = RelationGetRelationName(rel);\n errcbarg->indname = NULL;\n break;\n\n case PROGRESS_VACUUM_PHASE_VACUUM_INDEX:\n case PROGRESS_VACUUM_PHASE_INDEX_CLEANUP:\n /* rel is an index relation in index vacuum case */\n errcbarg->relname = get_rel_name(indrel->rd_index->indexrelid);\n errcbarg->indname = RelationGetRelationName(rel);\n break;\n\n }\n}\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 13 Feb 2020 14:55:53 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Feb 13, 2020 at 02:55:53PM +0900, Masahiko Sawada wrote:\n> You need to add a newline to follow the limit line lengths so that the\n> code is readable in an 80-column window. Or please run pgindent.\n\nFor now I :set tw=80\n\n> 2.\n> I think that making initialization process of errcontext argument a\n> function is good. But maybe we can merge these two functions into one.\n\nThanks, this is better, and I used that.\n\n> init_error_context_heap and init_error_context_index actually don't\n> only initialize the callback arguments but also push the vacuum\n> errcallback, in spite of the function name having 'init'. Also I think\n> it might be better to only initialize the callback arguments in this\n> function and to set errcallback by caller, rather than to wrap pushing\n> errcallback by a function.\n\nHowever I think it's important not to repeat this 4 times:\n errcallback->callback = vacuum_error_callback;\n errcallback->arg = errcbarg;\n errcallback->previous = error_context_stack;\n error_context_stack = errcallback;\n\nSo I kept the first 3 of those in the function and copied only assignment to\nthe global. That helps makes the heap scan function clear, which assigns to it\ntwice.\n\nBTW, for testing, I'm able to consistently hit the \"vacuuming block\" case like\nthis:\n\nSET statement_timeout=0; DROP TABLE t; CREATE TABLE t(i int); CREATE INDEX ON t(i); INSERT INTO t SELECT generate_series(1,99999); UPDATE t SET i=i-1; SET statement_timeout=111; SET vacuum_cost_delay=3; SET vacuum_cost_page_dirty=0; SET vacuum_cost_page_hit=11; SET vacuum_cost_limit=33; SET statement_timeout=3333; VACUUM VERBOSE t;\n\nThanks for re-reviewing.\n\n-- \nJustin",
"msg_date": "Thu, 13 Feb 2020 17:52:54 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 8:11 AM Andres Freund <andres@anarazel.de> wrote:\n> FWIW, I think we should just flat out delete all this logic, and replace\n> it with a few explicit PrefetchBuffer() calls. Just by chance I\n> literally just now sped up a VACUUM by more than a factor of 10, by\n> manually prefetching buffers. At least the linux kernel readahead logic\n> doesn't deal well with reading and writing to different locations in the\n> same file, and that's what the ringbuffer pretty invariably leads to for\n> workloads that aren't cached.\n\nInteresting. Andrew Gierth made a similar observation on FreeBSD, and\nshowed that by patching his kernel to track sequential writes and\nsequential reads separately he could improve performance, and I\nreproduced the same speedup in a patch of my own based on his\ndescription (that, erm, I've lost). It's not only VACUUM, it's\nanything that is writing to a lot of sequential blocks, since the\nwriteback trails along behind by some distance (maybe a ring buffer,\nmaybe all of shared buffers, whatever). The OS sees you flipping back\nand forth between single block reads and writes and thinks it's\nrandom. I didn't investigate this much but it seemed that ZFS was\nsomehow smart enough to understand what was happening at some level\nbut other filesystems were not.\n\n\n",
"msg_date": "Fri, 14 Feb 2020 14:40:26 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, 14 Feb 2020 at 08:52, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n\nThank you for updating the patch.\n\n> On Thu, Feb 13, 2020 at 02:55:53PM +0900, Masahiko Sawada wrote:\n> > You need to add a newline to follow the limit line lengths so that the\n> > code is readable in an 80-column window. Or please run pgindent.\n>\n> For now I :set tw=80\n>\n> > 2.\n> > I think that making initialization process of errcontext argument a\n> > function is good. But maybe we can merge these two functions into one.\n>\n> Thanks, this is better, and I used that.\n>\n> > init_error_context_heap and init_error_context_index actually don't\n> > only initialize the callback arguments but also push the vacuum\n> > errcallback, in spite of the function name having 'init'. Also I think\n> > it might be better to only initialize the callback arguments in this\n> > function and to set errcallback by caller, rather than to wrap pushing\n> > errcallback by a function.\n>\n> However I think it's important not to repeat this 4 times:\n> errcallback->callback = vacuum_error_callback;\n> errcallback->arg = errcbarg;\n> errcallback->previous = error_context_stack;\n> error_context_stack = errcallback;\n>\n> So I kept the first 3 of those in the function and copied only assignment to\n> the global. That helps makes the heap scan function clear, which assigns to it\n> twice.\n\nOkay. Here is the review comments for v18 patch:\n\n1.\n+/* Initialize error context for heap operations */\n+static void\n+init_error_context(ErrorContextCallback *errcallback,\nvacuum_error_callback_arg *errcbarg, Relation rel, int phase)\n\n* I think the function name is too generic. init_vacuum_error_callback\nor init_vacuum_errcallback is better.\n\n* The comment of this function is not accurate since this function is\nnot only for heap vacuum but also index vacuum. 
How about just\n\"Initialize vacuum error callback\"?\n\n2.\n+{\n+ switch (phase)\n+ {\n+ case PROGRESS_VACUUM_PHASE_SCAN_HEAP:\n+ case PROGRESS_VACUUM_PHASE_VACUUM_HEAP:\n+ errcbarg->relname = RelationGetRelationName(rel);\n+ errcbarg->indname = NULL; /* Not used for heap */\n+ break;\n+\n+ case PROGRESS_VACUUM_PHASE_VACUUM_INDEX:\n+ case PROGRESS_VACUUM_PHASE_INDEX_CLEANUP:\n+ /* indname is the index being processed,\nrelname is its relation */\n+ errcbarg->indname = RelationGetRelationName(rel);\n+ errcbarg->relname =\nget_rel_name(rel->rd_index->indexrelid);\n\n* I think it's easier to read the code if we set the relname and\nindname in the same order.\n\n* The comment I wrote in the previous mail seems better, because in\nthis function the reader might get confused that 'rel' is a relation\nor an index depending on the phase but that comment helps it.\n\n* rel->rd_index->indexrelid should be rel->rd_index->indrelid.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 14 Feb 2020 12:30:25 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, Feb 14, 2020 at 12:30:25PM +0900, Masahiko Sawada wrote:\n> * I think the function name is too generic. init_vacuum_error_callback\n> or init_vacuum_errcallback is better.\n\n> * The comment of this function is not accurate since this function is\n> not only for heap vacuum but also index vacuum. How about just\n> \"Initialize vacuum error callback\"?\n\n> * I think it's easier to read the code if we set the relname and\n> indname in the same order.\n\n> * The comment I wrote in the previous mail seems better, because in\n> this function the reader might get confused that 'rel' is a relation\n> or an index depending on the phase but that comment helps it.\n\nFixed these\n\n> * rel->rd_index->indexrelid should be rel->rd_index->indrelid.\n\nAck. I think that's been wrong since I first wrote it two weeks ago :(\nThe error is probably more obvious due to the switch statement you proposed.\n\nThanks for continued reviews.\n\n-- \nJustin",
"msg_date": "Fri, 14 Feb 2020 09:34:12 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sat, 15 Feb 2020 at 00:34, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Feb 14, 2020 at 12:30:25PM +0900, Masahiko Sawada wrote:\n> > * I think the function name is too generic. init_vacuum_error_callback\n> > or init_vacuum_errcallback is better.\n>\n> > * The comment of this function is not accurate since this function is\n> > not only for heap vacuum but also index vacuum. How about just\n> > \"Initialize vacuum error callback\"?\n>\n> > * I think it's easier to read the code if we set the relname and\n> > indname in the same order.\n>\n> > * The comment I wrote in the previous mail seems better, because in\n> > this function the reader might get confused that 'rel' is a relation\n> > or an index depending on the phase but that comment helps it.\n>\n> Fixed these\n>\n> > * rel->rd_index->indexrelid should be rel->rd_index->indrelid.\n>\n> Ack. I think that's been wrong since I first wrote it two weeks ago :(\n> The error is probably more obvious due to the switch statement you proposed.\n>\n> Thanks for continued reviews.\n\nThank you for updating the patch!\n\n1.\n+ /* Setup error traceback support for ereport() */\n+ init_vacuum_error_callback(&errcallback, &errcbarg, onerel,\nPROGRESS_VACUUM_PHASE_SCAN_HEAP);\n\n+ /*\n+ * Setup error traceback support for ereport()\n+ */\n+ init_vacuum_error_callback(&errcallback, &errcbarg, onerel,\nPROGRESS_VACUUM_PHASE_VACUUM_HEAP);\n\n+ /* Setup error traceback support for ereport() */\n+ init_vacuum_error_callback(&errcallback, &errcbarg, indrel,\nPROGRESS_VACUUM_PHASE_VACUUM_INDEX);\n\n+ /* Setup error traceback support for ereport() */\n+ init_vacuum_error_callback(&errcallback, &errcbarg, indrel,\nPROGRESS_VACUUM_PHASE_INDEX_CLEANUP);\n\n+/* Initialize vacuum error callback */\n+static void\n+init_vacuum_error_callback(ErrorContextCallback *errcallback,\nvacuum_error_callback_arg *errcbarg, Relation rel, int phase)\n\nThe above lines need a new line.\n\n2.\nstatic 
void\nlazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)\n{\n int tupindex;\n int npages;\n PGRUsage ru0;\n Buffer vmbuffer = InvalidBuffer;\n ErrorContextCallback errcallback;\n vacuum_error_callback_arg errcbarg;\n\n /* Report that we are now vacuuming the heap */\n pgstat_progress_update_param(PROGRESS_VACUUM_PHASE,\n PROGRESS_VACUUM_PHASE_VACUUM_HEAP);\n\n /*\n * Setup error traceback support for ereport()\n */\n init_vacuum_error_callback(&errcallback, &errcbarg, onerel,\nPROGRESS_VACUUM_PHASE_VACUUM_HEAP);\n error_context_stack = &errcallback;\n\n pg_rusage_init(&ru0);\n npages = 0;\n :\n\nIn lazy_vacuum_heap, we set the error context and then call\npg_rusage_init whereas lazy_vacuum_index and lazy_cleanup_index does\nthe opposite. And lazy_scan_heap also call pg_rusage first. I think\nlazy_vacuum_heap should follow them for consistency. That is, we can\nset error context after pages = 0.\n\n3.\nWe have 2 other phases: PROGRESS_VACUUM_PHASE_TRUNCATE and\nPROGRESS_VACUUM_PHASE_FINAL_CLEANUP. I think it's better to set the\nerror context in lazy_truncate_heap as well. What do you think?\n\nI'm not sure it's worth to set the error context for FINAL_CLENAUP but\nwe should add the case of FINAL_CLENAUP to vacuum_error_callback as\nno-op or explain it as a comment even if we don't.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 17 Feb 2020 10:47:47 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Mon, Feb 17, 2020 at 10:47:47AM +0900, Masahiko Sawada wrote:\n> Thank you for updating the patch!\n> \n> 1.\n> The above lines need a new line.\n\nDone, thanks.\n\n> 2.\n> In lazy_vacuum_heap, we set the error context and then call\n> pg_rusage_init whereas lazy_vacuum_index and lazy_cleanup_index does\n> the opposite. And lazy_scan_heap also call pg_rusage first. I think\n> lazy_vacuum_heap should follow them for consistency. That is, we can\n> set error context after pages = 0.\n\nRight. Maybe I did it the other way because the two uses of\nPROGRESS_VACUUM_PHASE_VACUUM_HEAP were right next to each other.\n\n> 3.\n> We have 2 other phases: PROGRESS_VACUUM_PHASE_TRUNCATE and\n> PROGRESS_VACUUM_PHASE_FINAL_CLEANUP. I think it's better to set the\n> error context in lazy_truncate_heap as well. What do you think?\n> \n> I'm not sure it's worth to set the error context for FINAL_CLENAUP but\n> we should add the case of FINAL_CLENAUP to vacuum_error_callback as\n> no-op or explain it as a comment even if we don't.\n\nI don't have strong feelings either way.\n\nI looked a bit at the truncation phase. It also truncates the FSM and VM\nforks, which could be misleading if the error was in one of those files and not\nthe main filenode.\n\nI'd have to find a way to test it... \n...and was pleasantly surprised to see that earlier phases don't choke if I\n(re)create a fake 4GB table like:\n\npostgres=# CREATE TABLE trunc(i int);\nCREATE TABLE\npostgres=# \\x\\t\nExpanded display is on.\nTuples only is on.\npostgres=# SELECT relfilenode FROM pg_class WHERE oid='trunc'::regclass;\nrelfilenode | 59068\n\npostgres=# \\! 
bash -xc 'truncate -s 1G ./pgsql13.dat111/base/12689/59068{,.{1..3}}'\n+ truncate -s 1G ./pgsql13.dat111/base/12689/59074 ./pgsql13.dat111/base/12689/59074.1 ./pgsql13.dat111/base/12689/59074.2 ./pgsql13.dat111/base/12689/59074.3\n\npostgres=# \\timing \nTiming is on.\npostgres=# SET client_min_messages=debug; SET statement_timeout='13s'; VACUUM VERBOSE trunc;\nINFO: vacuuming \"public.trunc\"\nINFO: \"trunc\": found 0 removable, 0 nonremovable row versions in 524288 out of 524288 pages\nDETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 2098\nThere were 0 unused item identifiers.\nSkipped 0 pages due to buffer pins, 0 frozen pages.\n524288 pages are entirely empty.\nCPU: user: 5.00 s, system: 1.50 s, elapsed: 6.52 s.\nERROR: canceling statement due to statement timeout\nCONTEXT: while truncating relation \"public.trunc\" to 0 blocks\n\nThe callback surrounding RelationTruncate() seems hard to hit unless you add\nCHECK_FOR_INTERRUPTS(); the same was true for index cleanup.\n\nThe truncation uses a prefetch, which is more likely to hit any lowlevel error,\nso I added callback there, too.\n\nBTW, for the index cases, I didn't like repeating the namespace here, but WDYT ?\n|CONTEXT: while vacuuming index \"public.t_i_idx3\" of relation \"t\"\n\nThanks for rerere-reviewing.\n\n-- \nJustin",
"msg_date": "Sun, 16 Feb 2020 21:57:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Mon, 17 Feb 2020 at 12:57, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Feb 17, 2020 at 10:47:47AM +0900, Masahiko Sawada wrote:\n> > Thank you for updating the patch!\n> >\n\nThank you for updating the patch.\n\n> > 1.\n> > The above lines need a new line.\n>\n> Done, thanks.\n>\n> > 2.\n> > In lazy_vacuum_heap, we set the error context and then call\n> > pg_rusage_init whereas lazy_vacuum_index and lazy_cleanup_index does\n> > the opposite. And lazy_scan_heap also call pg_rusage first. I think\n> > lazy_vacuum_heap should follow them for consistency. That is, we can\n> > set error context after pages = 0.\n>\n> Right. Maybe I did it the other way because the two uses of\n> PROGRESS_VACUUM_PHASE_VACUUM_HEAP were right next to each other.\n>\n> > 3.\n> > We have 2 other phases: PROGRESS_VACUUM_PHASE_TRUNCATE and\n> > PROGRESS_VACUUM_PHASE_FINAL_CLEANUP. I think it's better to set the\n> > error context in lazy_truncate_heap as well. What do you think?\n> >\n> > I'm not sure it's worth to set the error context for FINAL_CLENAUP but\n> > we should add the case of FINAL_CLENAUP to vacuum_error_callback as\n> > no-op or explain it as a comment even if we don't.\n>\n> I don't have strong feelings either way.\n>\n> I looked a bit at the truncation phase. It also truncates the FSM and VM\n> forks, which could be misleading if the error was in one of those files and not\n> the main filenode.\n>\n> I'd have to find a way to test it...\n> ...and was pleasantly surprised to see that earlier phases don't choke if I\n> (re)create a fake 4GB table like:\n>\n> postgres=# CREATE TABLE trunc(i int);\n> CREATE TABLE\n> postgres=# \\x\\t\n> Expanded display is on.\n> Tuples only is on.\n> postgres=# SELECT relfilenode FROM pg_class WHERE oid='trunc'::regclass;\n> relfilenode | 59068\n>\n> postgres=# \\! 
bash -xc 'truncate -s 1G ./pgsql13.dat111/base/12689/59068{,.{1..3}}'\n> + truncate -s 1G ./pgsql13.dat111/base/12689/59074 ./pgsql13.dat111/base/12689/59074.1 ./pgsql13.dat111/base/12689/59074.2 ./pgsql13.dat111/base/12689/59074.3\n>\n> postgres=# \\timing\n> Timing is on.\n> postgres=# SET client_min_messages=debug; SET statement_timeout='13s'; VACUUM VERBOSE trunc;\n> INFO: vacuuming \"public.trunc\"\n> INFO: \"trunc\": found 0 removable, 0 nonremovable row versions in 524288 out of 524288 pages\n> DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 2098\n> There were 0 unused item identifiers.\n> Skipped 0 pages due to buffer pins, 0 frozen pages.\n> 524288 pages are entirely empty.\n> CPU: user: 5.00 s, system: 1.50 s, elapsed: 6.52 s.\n> ERROR: canceling statement due to statement timeout\n> CONTEXT: while truncating relation \"public.trunc\" to 0 blocks\n>\n\nYeah lazy_scan_heap deals with all dummy files as new empty pages.\n\n> The callback surrounding RelationTruncate() seems hard to hit unless you add\n> CHECK_FOR_INTERRUPTS(); the same was true for index cleanup.\n>\n> The truncation uses a prefetch, which is more likely to hit any lowlevel error,\n> so I added callback there, too.\n>\n> BTW, for the index cases, I didn't like repeating the namespace here, but WDYT ?\n> |CONTEXT: while vacuuming index \"public.t_i_idx3\" of relation \"t\"\n\nThe current message looks good to me because we cannot have a table\nand its index in the different schema.\n\n1.\n pg_rusage_init(&ru0);\n npages = 0;\n\n /*\n * Setup error traceback support for ereport()\n */\n init_vacuum_error_callback(&errcallback, &errcbarg, onerel,\n PROGRESS_VACUUM_PHASE_VACUUM_HEAP);\n error_context_stack = &errcallback;\n\n tupindex = 0;\n\nOops it seems to me that it's better to set error context after\ntupindex = 0. 
Sorry for my bad.\n\nAnd the above comment can be written in a single line as other same comments.\n\n2.\n@@ -2568,6 +2643,12 @@ count_nondeletable_pages(Relation onerel,\nLVRelStats *vacrelstats)\n BlockNumber blkno;\n BlockNumber prefetchedUntil;\n instr_time starttime;\n+ ErrorContextCallback errcallback;\n+ vacuum_error_callback_arg errcbarg;\n+\n+ /* Setup error traceback support for ereport() */\n+ init_vacuum_error_callback(&errcallback, &errcbarg, onerel,\n+ PROGRESS_VACUUM_PHASE_TRUNCATE);\n\nHmm I don't think it's a good idea to have count_nondeletable_pages\nset the error context of PHASE_TRUNCATE. Because the patch sets the\nerror context during RelationTruncate that actually truncates the heap\nbut count_nondeletable_pages doesn't truncate it. I think setting the\nerror context only during RelationTruncate would be a good start. We\ncan hear other opinions from other hackers. Some hackers may want to\nset the error context for whole lazy_truncate_heap.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 17 Feb 2020 14:18:11 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Mon, Feb 17, 2020 at 02:18:11PM +0900, Masahiko Sawada wrote:\n> Oops it seems to me that it's better to set error context after\n> tupindex = 0. Sorry for my bad.\n\nI take your point but did it differently - see what you think\n\n> And the above comment can be written in a single line as other same comments.\n\nThanks :)\n\n> Hmm I don't think it's a good idea to have count_nondeletable_pages\n> set the error context of PHASE_TRUNCATE.\n\nI think if we don't do it there then we shouldn't bother handling\nPHASE_TRUNCATE at all, since that's what's likely to hit filesystem or other\nlowlevel errors, before lazy_truncate_heap() hits them.\n\n> Because the patch sets the\n> error context during RelationTruncate that actually truncates the heap\n> but count_nondeletable_pages doesn't truncate it.\n\nI would say that ReadBuffer called by the prefetch in\ncount_nondeletable_pages() is called during the course of truncation, the same\nas ReadBuffer called during the course of vacuuming can be attributed to\nvacuuming.\n\n> I think setting the error context only during RelationTruncate would be a\n> good start. We can hear other opinions from other hackers. Some hackers may\n> want to set the error context for whole lazy_truncate_heap.\n\nI avoided doing that since it has several \"return\" statements, each of which\nwould need to \"Pop the error context stack\", which is at risk of being\nforgotten and left unpopped by anyone who adds or changes flow control.\n\nAlso, I just added this to the TRUNCATE case, even though that should never\nhappen: if (BlockNumberIsValid(cbarg->blkno))...\n\n-- \nJustin",
"msg_date": "Mon, 17 Feb 2020 00:14:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
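The hazard raised above — a function with several return statements leaving a stale entry on `error_context_stack` if one path forgets to pop — can be shown with a toy version of the callback stack. The types below mimic the server's `ErrorContextCallback` but are a self-contained sketch, and `truncate_like_function()` is hypothetical; the point is only that every exit path must funnel through the single pop:

```c
#include <stddef.h>

/* Toy version of PostgreSQL's error-context callback stack. */
typedef struct ErrorContextCallback
{
	struct ErrorContextCallback *previous;
	void		(*callback) (void *arg);
	void	   *arg;
} ErrorContextCallback;

static ErrorContextCallback *error_context_stack = NULL;

static void
dummy_callback(void *arg)
{
	(void) arg;
}

/*
 * The safe shape: exactly one push and one pop, with early exits routed
 * through a single label so no return path can skip the pop.
 */
static void
truncate_like_function(int bail_out_early)
{
	ErrorContextCallback cb;

	cb.callback = dummy_callback;
	cb.arg = NULL;
	cb.previous = error_context_stack;
	error_context_stack = &cb;			/* push */

	if (bail_out_early)
		goto done;						/* would be a bare "return" in the risky shape */

	/* ... truncation work would happen here ... */

done:
	error_context_stack = cb.previous;	/* pop, reached on every path */
}
```

With a bare `return` instead of the `goto`, the stack would keep pointing at `cb` after the function's frame is gone — exactly the kind of dangling entry a later-added code path could reintroduce.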
{
"msg_contents": "On Mon, 17 Feb 2020 at 15:14, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Feb 17, 2020 at 02:18:11PM +0900, Masahiko Sawada wrote:\n> > Oops it seems to me that it's better to set error context after\n> > tupindex = 0. Sorry for my bad.\n>\n> I take your point but did it differently - see what you think\n>\n> > And the above comment can be written in a single line as other same comments.\n>\n> Thanks :)\n>\n> > Hmm I don't think it's a good idea to have count_nondeletable_pages\n> > set the error context of PHASE_TRUNCATE.\n>\n> I think if we don't do it there then we shouldn't bother handling\n> PHASE_TRUNCATE at all, since that's what's likely to hit filesystem or other\n> lowlevel errors, before lazy_truncate_heap() hits them.\n>\n> > Because the patch sets the\n> > error context during RelationTruncate that actually truncates the heap\n> > but count_nondeletable_pages doesn't truncate it.\n>\n> I would say that ReadBuffer called by the prefetch in\n> count_nondeletable_pages() is called during the course of truncation, the same\n> as ReadBuffer called during the course of vacuuming can be attributed to\n> vacuuming.\n\nWhy do we want to include only count_nondeletable_pages in spite of\nthat there are also several places where we may wait: waiting for\nlock, get the number of blocks etc. User may cancel vacuum during them\nbut user will not be able to know that vacuum is in truncation phase.\nIf we want to set the error callback during operation that actually\ndoesn't truncate heap like count_nondeletable_pages we should set it\nfor whole lazy_truncate_heap. Otherwise I think we should set it for\nonly RelationTruncate.\n\n>\n> > I think setting the error context only during RelationTruncate would be a\n> > good start. We can hear other opinions from other hackers. 
Some hackers may\n> > want to set the error context for whole lazy_truncate_heap.\n>\n> I avoided doing that since it has several \"return\" statements, each of which\n> would need to \"Pop the error context stack\", which is at risk of being\n> forgotten and left unpopped by anyone who adds or changes flow control.\n\nI imagined that we can add some goto and pop the error callback there.\nBut since it might make the code bad I suggested to set the error\ncallback for only RelationTruncate as the first step\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 18 Feb 2020 18:18:16 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "Rebased on top of 007491979461ff10d487e1da9bcc87f2fd834f26\n\nAlso, I was thinking that lazy_scan_heap doesn't needs to do this:\n\n+ /* Pop the error context stack while calling vacuum */\n+ error_context_stack = errcallback.previous;\n...\n+ /* Set the error context while continuing heap scan */\n+ error_context_stack = &errcallback;\n\nIt seems to me that's not actually necessary, since lazy_vacuum_heap will just\n*push* a context handler onto the stack, and then pop it back off. We don't\nneed to pop our context beforehand. We also vacuum the FSM, and one might say\nthat we shouldn't report \"...while scanning block number...\" if it was\n\"vacuuming FSM\" instead of \"scanning heap\", to which I would reply that either:\nvacuuming FSM could be considered a part of scanning heap?? Or, maybe we\nshould add an additional callback for that, which is only not very nice since\nwe'd need to add a PROGRESS enum for which we don't actually report PROGRESS\n(or stop using that enum).\n\nI tested using variations on this that works as expected, that context is\ncorrect during vacuum while scanning and after vacuum while scanning:\n\ntemplate1=# SET statement_timeout=0; SET maintenance_work_mem='1MB'; DROP TABLE tt; CREATE UNLOGGED TABLE tt(i int); INSERT INTO tt SELECT generate_series(1,399999); CREATE INDEX ON tt(i); UPDATE tt SET i=i-1; SET statement_timeout=1222; VACUUM VERBOSE tt;",
"msg_date": "Wed, 19 Feb 2020 14:38:21 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "This is, by far, the most complex error context callback we've tried to\nwrite ... Easy stuff first:\n\nIn the error context function itself, you don't need the _() around the\nstrings: errcontext() is marked as a gettext trigger and it does the\ntranslation itself, so the manually added _() is just cruft.\n\nWhen reporting index names, make sure to attach the namespace to the\ntable, not to the index. Example:\n\n case PROGRESS_VACUUM_PHASE_INDEX_CLEANUP:\n- errcontext(_(\"while cleaning up index \\\"%s.%s\\\" of relation \\\"%s\\\"\"), \n- cbarg->relnamespace, cbarg->indname, cbarg->relname); \n+ errcontext(\"while cleaning up index \\\"%s\\\" of relation \\\"%s.%s\\\"\", \n+ cbarg->indname, cbarg->relnamespace, cbarg->relname); \n\nI think it would be worthwhile to have the \"truncate wait\" phase as a\nseparate thing from the truncate itself, since it requires acquiring a\npossibly taken lock. This suggests that using the progress enum is not\na 100% solution ... or maybe it suggests that the progress enum too\nneeds to report the truncate-wait phase separately. (I like the latter\nmyself, actually.)\n\nOn 2020-Feb-19, Justin Pryzby wrote:\n\n> Also, I was thinking that lazy_scan_heap doesn't needs to do this:\n> \n> + /* Pop the error context stack while calling vacuum */\n> + error_context_stack = errcallback.previous;\n> ...\n> + /* Set the error context while continuing heap scan */\n> + error_context_stack = &errcallback;\n> \n> It seems to me that's not actually necessary, since lazy_vacuum_heap will just\n> *push* a context handler onto the stack, and then pop it back off.\n\nSo if you don't pop before pushing, you'll end up with two context\nlines, right?\n\nI find that arrangement a bit confusing. 
I think it would make sense to\ninitialize the context callback just *once* for a vacuum run, and from\nthat point onwards, just update the errcbarg struct to match what\nyou're currently doing -- not continually pop/push error callback stack\nentries. See below ...\n\n(This means you need to pass the \"cbarg\" as new argument to some of the\ncalled functions, so that they can update it.)\n\nAnother point is that this patch seems to be leaking memory each time\nyou set relation/index/namespace name, since you never free those and\nthey are changed over and over.\n\nIn init_vacuum_error_callback() you don't need the \"switch(phase)\" bit;\ninstead, test rel->rd_rel->relkind, and if it's RELKIND_INDEX then you\nput the relname as indexname, otherwise set it to NULL (after freeing\nthe previous value, if there's one). Note that with this, you only need\nto set the relation name (table name) in the first call! IOW you should\nsplit init_vacuum_error_callback() in two functions: one \"init\" to call\nat start of vacuum, where you set relnamespace and relname; the other\nfunction is update_vacuum_error_callback() (or you find a better name\nfor that) and it sets the phase, and optionally the block number and\nindex name (these last two get reset to InvalidBlkNum/ NULL if not\npassed by caller). I'm not really sure what this means for the parallel\nindex vacuuming stuff; probably you'll need a special case for that: the\nparallel children will need to \"init\" on their own, right?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 20 Feb 2020 14:02:36 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
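The init-once/update-many arrangement suggested above might look like the following in isolation. This is a sketch, not the committed API: the struct, `init_cbarg()`, and `update_cbarg()` are hypothetical names, fixed-size arrays stand in for the suggested no-palloc approach, and portable `snprintf` is used where the server would use `strlcpy`:

```c
#include <stdio.h>
#include <string.h>

typedef enum
{
	PHASE_SCAN_HEAP,
	PHASE_VACUUM_INDEX
} Phase;

typedef struct
{
	Phase		phase;
	char		relname[64];	/* fixed arrays: copy in place, nothing to free */
	char		indname[64];
	unsigned	blkno;
} ErrCbArg;

/* "init": called once per vacuum run; the table name is set here only. */
static void
init_cbarg(ErrCbArg *cb, const char *relname)
{
	memset(cb, 0, sizeof(*cb));
	snprintf(cb->relname, sizeof(cb->relname), "%s", relname);
}

/*
 * "update": called at each phase change.  The index name and block number
 * are reset when the caller doesn't supply them, so a stale value from a
 * previous phase can never leak into the context line.
 */
static void
update_cbarg(ErrCbArg *cb, Phase phase, const char *indname, unsigned blkno)
{
	cb->phase = phase;
	cb->blkno = blkno;
	if (indname)
		snprintf(cb->indname, sizeof(cb->indname), "%s", indname);
	else
		cb->indname[0] = '\0';
}
```

Since the callback stays installed for the whole run, only the argument struct changes, which sidesteps both the double-context-line problem and the repeated palloc/free traffic discussed above.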
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Another point is that this patch seems to be leaking memory each time\n> you set relation/index/namespace name, since you never free those and\n> they are changed over and over.\n\nOne other point is that this code seems to be trying to ensure that\nthe error context callback itself won't need to touch the catalog cache or\nrelcache, which is an important safety feature ... but it's failing at\nthat goal, because RelationGetRelationName() is going to hand back a\npointer to a string in the relcache. You need another pstrdup for that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 20 Feb 2020 13:10:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Feb 20, 2020 at 02:02:36PM -0300, Alvaro Herrera wrote:\n> On 2020-Feb-19, Justin Pryzby wrote:\n> \n> > Also, I was thinking that lazy_scan_heap doesn't needs to do this:\n> > \n> > + /* Pop the error context stack while calling vacuum */\n> > + error_context_stack = errcallback.previous;\n> > ...\n> > + /* Set the error context while continuing heap scan */\n> > + error_context_stack = &errcallback;\n> > \n> > It seems to me that's not actually necessary, since lazy_vacuum_heap will just\n> > *push* a context handler onto the stack, and then pop it back off.\n> \n> So if you don't pop before pushing, you'll end up with two context\n> lines, right?\n\nHm, looks like you're right, but that's not what I intended (and I didn't hit\nthat in my test).\n\n> I think it would make sense to\n> initialize the context callback just *once* for a vacuum run, and from\n> that point onwards, just update the errcbarg struct to match what\n> you're currently doing -- not continually pop/push error callback stack\n> entries. See below ...\n\nOriginally, the patch only supported \"scanning heap\", and set the callback\nstrictly, to avoid having callback installed when calling other functions (like\nvacuuming heap/indexes).\n\nThen incrementally added callbacks in increasing number of places. We only\nneed one errcontext. And possibly you're right that the callback could always\nbe in place (?). But what about things like vacuuming FSM ? I think we'd need\nanother \"phase\" for that (or else invent a PHASE_IGNORE to do nothing). Would\nVACUUM_FSM be added to progress reporting, too? We're also talking about new\nphase for TRUNCATE_PREFETCH and TRUNCATE_WAIT.\n\nRegarding the cbarg, at one point I took a suggestion from Andres to use the\nLVRelStats struct. I got rid of that since I didn't like sharing \"blkno\"\nbetween heap scanning and heap vacuuming, and needs to be reset when switching\nback to scanning heap. I experimented now going back to that now. 
The only\nutility is in having an single allocation of relname/space.\n\n-- \nJustin",
"msg_date": "Thu, 27 Feb 2020 15:08:13 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On 2020-Feb-27, Justin Pryzby wrote:\n\n> Originally, the patch only supported \"scanning heap\", and set the callback\n> strictly, to avoid having callback installed when calling other functions (like\n> vacuuming heap/indexes).\n> \n> Then incrementally added callbacks in increasing number of places. We only\n> need one errcontext. And possibly you're right that the callback could always\n> be in place (?). But what about things like vacuuming FSM ? I think we'd need\n> another \"phase\" for that (or else invent a PHASE_IGNORE to do nothing). Would\n> VACUUM_FSM be added to progress reporting, too? We're also talking about new\n> phase for TRUNCATE_PREFETCH and TRUNCATE_WAIT.\n\nI think we should use a separate enum. It's simple enough, and there's\nno reason to use the same enum for two different things if it seems to\ncomplicate matters.\n\n> Regarding the cbarg, at one point I took a suggestion from Andres to use the\n> LVRelStats struct. I got rid of that since I didn't like sharing \"blkno\"\n> between heap scanning and heap vacuuming, and needs to be reset when switching\n> back to scanning heap. I experimented now going back to that now. The only\n> utility is in having an single allocation of relname/space.\n\nI'm unsure about reusing that struct. Not saying don't do it, just ...\nunsure. It possibly has other responsibilities.\n\nI don't think there's a reason to keep 0002 separate.\n\nRegarding this,\n\n> +\t\tcase PROGRESS_VACUUM_PHASE_VACUUM_HEAP:\n> +\t\t\tif (BlockNumberIsValid(cbarg->blkno))\n> +\t\t\t\terrcontext(\"while vacuuming block %u of relation \\\"%s.%s\\\"\",\n> +\t\t\t\t\t\tcbarg->blkno, cbarg->relnamespace, cbarg->relname);\n> +\t\t\tbreak;\n\nI think you should still call errcontext() when blkno is invalid. In\nfact, just remove the \"if\" line altogether and let it show whatever\nvalue is there. It should work okay. 
We don't expect the value to be\ninvalid anyway.\n\nMaybe it would make sense to make the LVRelStats struct members be char\narrays rather than pointers. Then you memcpy() or strlcpy() them\ninstead of palloc/free.\n\nPlease don't cuddle your braces.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 27 Feb 2020 21:09:42 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, 21 Feb 2020 at 02:02, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> This is, by far, the most complex error context callback we've tried to\n> write ... Easy stuff first:\n>\n> In the error context function itself, you don't need the _() around the\n> strings: errcontext() is marked as a gettext trigger and it does the\n> translation itself, so the manually added _() is just cruft.\n>\n> When reporting index names, make sure to attach the namespace to the\n> table, not to the index. Example:\n>\n> case PROGRESS_VACUUM_PHASE_INDEX_CLEANUP:\n> - errcontext(_(\"while cleaning up index \\\"%s.%s\\\" of relation \\\"%s\\\"\"),\n> - cbarg->relnamespace, cbarg->indname, cbarg->relname);\n> + errcontext(\"while cleaning up index \\\"%s\\\" of relation \\\"%s.%s\\\"\",\n> + cbarg->indname, cbarg->relnamespace, cbarg->relname);\n>\n> I think it would be worthwhile to have the \"truncate wait\" phase as a\n> separate thing from the truncate itself, since it requires acquiring a\n> possibly taken lock. This suggests that using the progress enum is not\n> a 100% solution ... or maybe it suggests that the progress enum too\n> needs to report the truncate-wait phase separately. (I like the latter\n> myself, actually.)\n>\n> On 2020-Feb-19, Justin Pryzby wrote:\n>\n> > Also, I was thinking that lazy_scan_heap doesn't needs to do this:\n> >\n> > + /* Pop the error context stack while calling vacuum */\n> > + error_context_stack = errcallback.previous;\n> > ...\n> > + /* Set the error context while continuing heap scan */\n> > + error_context_stack = &errcallback;\n> >\n> > It seems to me that's not actually necessary, since lazy_vacuum_heap will just\n> > *push* a context handler onto the stack, and then pop it back off.\n>\n> So if you don't pop before pushing, you'll end up with two context\n> lines, right?\n>\n> I find that arrangement a bit confusing. 
I think it would make sense to\n> initialize the context callback just *once* for a vacuum run, and from\n> that point onwards, just update the errcbarg struct to match what\n> you're currently doing -- not continually pop/push error callback stack\n> entries. See below ...\n\nI was concerned about fsm vacuum; vacuum error context might show heap\nscan while actually doing fsm vacuum. But perhaps we can update\ncallback args for that. That would be helpful for user to distinguish\nthat the problem seems to be either in heap vacuum or in fsm vacuum.\n\n>\n> (This means you need to pass the \"cbarg\" as new argument to some of the\n> called functions, so that they can update it.)\n>\n> Another point is that this patch seems to be leaking memory each time\n> you set relation/index/namespace name, since you never free those and\n> they are changed over and over.\n>\n> In init_vacuum_error_callback() you don't need the \"switch(phase)\" bit;\n> instead, test rel->rd_rel->relkind, and if it's RELKIND_INDEX then you\n> put the relname as indexname, otherwise set it to NULL (after freeing\n> the previous value, if there's one). Note that with this, you only need\n> to set the relation name (table name) in the first call! IOW you should\n> split init_vacuum_error_callback() in two functions: one \"init\" to call\n> at start of vacuum, where you set relnamespace and relname; the other\n> function is update_vacuum_error_callback() (or you find a better name\n> for that) and it sets the phase, and optionally the block number and\n> index name (these last two get reset to InvalidBlkNum/ NULL if not\n> passed by caller). I'm not really sure what this means for the parallel\n> index vacuuming stuff; probably you'll need a special case for that: the\n> parallel children will need to \"init\" on their own, right?\n\nRight. In that case, I think parallel vacuum worker needs to init the\ncallback args at parallel_vacuum_main(). 
Other functions that parallel\nvacuum worker could call are also called by the leader process.\n\nRegards,\n\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Mar 2020 22:05:42 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Feb 27, 2020 at 09:09:42PM -0300, Alvaro Herrera wrote:\n> > +\t\tcase PROGRESS_VACUUM_PHASE_VACUUM_HEAP:\n> > +\t\t\tif (BlockNumberIsValid(cbarg->blkno))\n> > +\t\t\t\terrcontext(\"while vacuuming block %u of relation \\\"%s.%s\\\"\",\n> > +\t\t\t\t\t\tcbarg->blkno, cbarg->relnamespace, cbarg->relname);\n> > +\t\t\tbreak;\n> \n> I think you should still call errcontext() when blkno is invalid.\n\nIn my experience while testing, the conditional avoids lots of CONTEXT noise\nfrom interrupted autovacuum, at least. I couldn't easily reproduce it with the\ncurrent patch, though, maybe due to less pushing and popping.\n\n> Maybe it would make sense to make the LVRelStats struct members be char\n> arrays rather than pointers. Then you memcpy() or strlcpy() them\n> instead of palloc/free.\n\nI had done that in the v15 patch, to allow passing it to parallel workers.\nBut I don't think it's really needed.\n\nOn Tue, Mar 03, 2020 at 10:05:42PM +0900, Masahiko Sawada wrote:\n> I was concerned about fsm vacuum; vacuum error context might show heap\n> scan while actually doing fsm vacuum. But perhaps we can update\n> callback args for that. That would be helpful for user to distinguish\n> that the problem seems to be either in heap vacuum or in fsm vacuum.\n\nDone in the attached. But I think non-error reporting of additional progress\nphases is out of scope for this patch.\n\n> On Fri, 21 Feb 2020 at 02:02, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > parallel children will need to \"init\" on their own, right?\n> Right. In that case, I think parallel vacuum worker needs to init the\n> callback args at parallel_vacuum_main(). Other functions that parallel\n> vacuum worker could call are also called by the leader process.\n\nIn the previous patch, I added this to vacuum_one_index. But I noticed that\nsometimes reported multiple CONTEXT lines (while vacuuming..while scanning),\nwhich isn't intended. 
I had worked around that by setting ->previous=NULL, but\nyour way in parallel main() seems better.\n\n-- \nJustin",
"msg_date": "Tue, 3 Mar 2020 13:32:05 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On 2020-Mar-03, Justin Pryzby wrote:\n\n> On Thu, Feb 27, 2020 at 09:09:42PM -0300, Alvaro Herrera wrote:\n> > > +\t\tcase PROGRESS_VACUUM_PHASE_VACUUM_HEAP:\n> > > +\t\t\tif (BlockNumberIsValid(cbarg->blkno))\n> > > +\t\t\t\terrcontext(\"while vacuuming block %u of relation \\\"%s.%s\\\"\",\n> > > +\t\t\t\t\t\tcbarg->blkno, cbarg->relnamespace, cbarg->relname);\n> > > +\t\t\tbreak;\n> > \n> > I think you should still call errcontext() when blkno is invalid.\n> \n> In my experience while testing, the conditional avoids lots of CONTEXT noise\n> from interrupted autovacuum, at least. I couldn't easily reproduce it with the\n> current patch, though, maybe due to less pushing and popping.\n\nI think you're saying that the code had the bug that too many lines were\nreported because of excessive stack pushes, and you worked around it by\nmaking the errcontext() be conditional; and that now the bug is fixed by\navoiding the push/pop games -- which explains why you can no longer\nreproduce it. I don't see why you want to keep the no-longer-needed\nworkaround.\n\n\nYour use of the progress-report enum now has two warts -- the \"-1\"\nvalue, and this one,\n\n> +#define PROGRESS_VACUUM_PHASE_VACUUM_FSM\t\t7 /* For error reporting only */\n\nI'd rather you define a new enum, in lazyvacuum.c.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Mar 2020 16:49:00 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Wed, 4 Mar 2020 at 04:32, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Feb 27, 2020 at 09:09:42PM -0300, Alvaro Herrera wrote:\n> > > + case PROGRESS_VACUUM_PHASE_VACUUM_HEAP:\n> > > + if (BlockNumberIsValid(cbarg->blkno))\n> > > + errcontext(\"while vacuuming block %u of relation \\\"%s.%s\\\"\",\n> > > + cbarg->blkno, cbarg->relnamespace, cbarg->relname);\n> > > + break;\n> >\n> > I think you should still call errcontext() when blkno is invalid.\n>\n> In my experience while testing, the conditional avoids lots of CONTEXT noise\n> from interrupted autovacuum, at least. I couldn't easily reproduce it with the\n> current patch, though, maybe due to less pushing and popping.\n>\n> > Maybe it would make sense to make the LVRelStats struct members be char\n> > arrays rather than pointers. Then you memcpy() or strlcpy() them\n> > instead of palloc/free.\n>\n> I had done that in the v15 patch, to allow passing it to parallel workers.\n> But I don't think it's really needed.\n>\n> On Tue, Mar 03, 2020 at 10:05:42PM +0900, Masahiko Sawada wrote:\n> > I was concerned about fsm vacuum; vacuum error context might show heap\n> > scan while actually doing fsm vacuum. But perhaps we can update\n> > callback args for that. That would be helpful for user to distinguish\n> > that the problem seems to be either in heap vacuum or in fsm vacuum.\n>\n> Done in the attached. But I think non-error reporting of additional progress\n> phases is out of scope for this patch.\n\nThank you for updating the patch. But we have two more places where we\ndo fsm vacuum.\n\n /*\n * Periodically do incremental FSM vacuuming to make newly-freed\n * space visible on upper FSM pages. 
Note: although we've cleaned\n * the current block, we haven't yet updated its FSM entry (that\n * happens further down), so passing end == blkno is correct.\n */\n if (blkno - next_fsm_block_to_vacuum >= VACUUM_FSM_EVERY_PAGES)\n {\n FreeSpaceMapVacuumRange(onerel, next_fsm_block_to_vacuum,\n blkno);\n next_fsm_block_to_vacuum = blkno;\nand\n\n /*\n * Vacuum the remainder of the Free Space Map. We must do this whether or\n * not there were indexes.\n */\n if (blkno > next_fsm_block_to_vacuum)\n FreeSpaceMapVacuumRange(onerel, next_fsm_block_to_vacuum, blkno);\n\n\n---\n static void vacuum_one_index(Relation indrel, IndexBulkDeleteResult **stats,\n LVShared *lvshared, LVSharedIndStats\n*shared_indstats,\n- LVDeadTuples *dead_tuples);\n+ LVDeadTuples *dead_tuples, LVRelStats\n*vacrelstats);\n static void lazy_vacuum_index(Relation indrel, IndexBulkDeleteResult **stats,\n- LVDeadTuples *dead_tuples, double reltuples);\n+ LVDeadTuples *dead_tuples, double\nreltuples, LVRelStats *vacrelstats);\n\nThese functions have LVDeadTuples and LVRelStats but LVDeadTuples can\nbe referred by LVRelStats. If we want to use LVRelStats as callback\nargument, we can remove function arguments that can be referred by\nLVRelStats.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\n\n\n\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Mar 2020 16:21:06 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Wed, Mar 04, 2020 at 04:21:06PM +0900, Masahiko Sawada wrote:\n> Thank you for updating the patch. But we have two more places where we\n> do fsm vacuum.\n\nOops, thanks.\n\nI realized that vacuum_page is called not only from lazy_vacuum_heap, but also\ndirectly from lazy_scan_heap, which failed to update errcbarg. So I changed to\nupdate errcbarg in vacuum_page.\n\nWhat about these other calls ? I think granularity of individual function\ncalls requires a debugger, but is it fine issue if errors here are attributed\nto (say) \"scanning heap\" ?\n\nGetRecordedFreeSpace\nheap_*_freeze_tuple\nheap_page_prune\nHeapTupleSatisfiesVacuum\nLockBufferForCleanup\nMarkBufferDirty\nPage*AllVisible\nPageGetHeapFreeSpace\nRecordPageWithFreeSpace\nvisibilitymap_*\nVM_ALL_FROZEN\n\n> These functions have LVDeadTuples and LVRelStats but LVDeadTuples can\n> be referred by LVRelStats. If we want to use LVRelStats as callback\n> argument, we can remove function arguments that can be referred by\n> LVRelStats.\n\nThat doesn't work easily with parallel vacuum, which passes not\nvacrelstats->dead_tuples, but a dead_tuples pulled out of shm_toc.\n\nBut it was easy enough to remove \"reltuples\".\n\n-- \nJustin",
"msg_date": "Wed, 4 Mar 2020 15:51:59 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Mar 03, 2020 at 04:49:00PM -0300, Alvaro Herrera wrote:\n> On 2020-Mar-03, Justin Pryzby wrote:\n> > On Thu, Feb 27, 2020 at 09:09:42PM -0300, Alvaro Herrera wrote:\n> > > > +\t\tcase PROGRESS_VACUUM_PHASE_VACUUM_HEAP:\n> > > > +\t\t\tif (BlockNumberIsValid(cbarg->blkno))\n> > > > +\t\t\t\terrcontext(\"while vacuuming block %u of relation \\\"%s.%s\\\"\",\n> > > \n> > > I think you should still call errcontext() when blkno is invalid.\n> > \n> > In my experience while testing, the conditional avoids lots of CONTEXT noise\n> > from interrupted autovacuum, at least. I couldn't easily reproduce it with the\n> > current patch, though, maybe due to less pushing and popping.\n> \n> I think you're saying that the code had the bug that too many lines were\n> reported because of excessive stack pushes, and you worked around it by\n> making the errcontext() be conditional; and that now the bug is fixed by\n> avoiding the push/pop games -- which explains why you can no longer\n> reproduce it. I don't see why you want to keep the no-longer-needed\n> workaround.\n\nNo - the issue I observed from autovacuum (\"while scanning block number\n4294967295\") was unrelated to showing multiple context lines (that issue I only\nsaw in the v22 patch, and was related to vacuum_one_index being used by both\nleader and parallel workers).\n\nThe locations showing a block number first set vacrelstats->blkno to\nInvalidBlockNumber, and then later update the vacrelstats->blkno from a loop\nvariable. I think today's v24 patch makes it harder to hit the window where\nit's set to InvalidBlockNumber, by moving VACUUM_HEAP context into\nvacuum_page(), but I don't think we should rely on an absence of\nCHECK_FOR_INTERRUPTS() to avoid misleading noise context. \n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 4 Mar 2020 15:53:38 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Mar 5, 2020 at 3:22 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Mar 04, 2020 at 04:21:06PM +0900, Masahiko Sawada wrote:\n> > Thank you for updating the patch. But we have two more places where we\n> > do fsm vacuum.\n>\n> Oops, thanks.\n>\n> I realized that vacuum_page is called not only from lazy_vacuum_heap, but also\n> directly from lazy_scan_heap, which failed to update errcbarg. So I changed to\n> update errcbarg in vacuum_page.\n>\n> What about these other calls ? I think granularity of individual function\n> calls requires a debugger, but is it fine issue if errors here are attributed\n> to (say) \"scanning heap\" ?\n>\n> GetRecordedFreeSpace\n> heap_*_freeze_tuple\n> heap_page_prune\n> HeapTupleSatisfiesVacuum\n> LockBufferForCleanup\n> MarkBufferDirty\n> Page*AllVisible\n> PageGetHeapFreeSpace\n> RecordPageWithFreeSpace\n> visibilitymap_*\n> VM_ALL_FROZEN\n>\n\nI think we can keep granularity the same as we have for progress\nupdate functionality which means \"scanning heap\" is fine. 
On similar\nlines, it is not clear whether it is a good idea to keep a phase like\nVACUUM_ERRCB_PHASE_VACUUM_FSM as it has added additional updates in\nmultiple places in the code.\n\nFew other comments:\n1.\n+ /* Init vacrelstats for use as error callback by parallel worker: */\n+ vacrelstats.relnamespace = get_namespace_name(RelationGetNamespace(onerel));\n\nIt looks a bit odd that the comment is ended with semicolon (:), is\nthere any reason for same?\n\n2.\n+ /* Setup error traceback support for ereport() */\n+ update_vacuum_error_cbarg(vacrelstats, VACUUM_ERRCB_PHASE_SCAN_HEAP,\n+ InvalidBlockNumber, NULL);\n+ errcallback.callback = vacuum_error_callback;\n+ errcallback.arg = vacrelstats;\n+ errcallback.previous = error_context_stack;\n+ error_context_stack = &errcallback;\n..\n..\n+ /* Init vacrelstats for use as error callback by parallel worker: */\n+ vacrelstats.relnamespace = get_namespace_name(RelationGetNamespace(onerel));\n+ vacrelstats.relname = pstrdup(RelationGetRelationName(onerel));\n+ vacrelstats.indname = NULL;\n+ vacrelstats.phase = VACUUM_ERRCB_PHASE_UNKNOWN; /* Not yet processing */\n+\n+ /* Setup error traceback support for ereport() */\n+ errcallback.callback = vacuum_error_callback;\n+ errcallback.arg = &vacrelstats;\n+ errcallback.previous = error_context_stack;\n+ error_context_stack = &errcallback;\n+\n\nI think the code can be bit simplified if we have a function\nsetup_vacuum_error_ctx which takes necessary parameters and fill the\nrequired vacrelstats params, setup errcallback. Then we can use\nupdate_vacuum_error_cbarg at required places.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 Mar 2020 11:44:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On 2020-Mar-16, Amit Kapila wrote:\n\n> 2.\n> + /* Setup error traceback support for ereport() */\n> + update_vacuum_error_cbarg(vacrelstats, VACUUM_ERRCB_PHASE_SCAN_HEAP,\n> + InvalidBlockNumber, NULL);\n> + errcallback.callback = vacuum_error_callback;\n> + errcallback.arg = vacrelstats;\n> + errcallback.previous = error_context_stack;\n> + error_context_stack = &errcallback;\n> ..\n> ..\n> + /* Init vacrelstats for use as error callback by parallel worker: */\n> + vacrelstats.relnamespace = get_namespace_name(RelationGetNamespace(onerel));\n> + vacrelstats.relname = pstrdup(RelationGetRelationName(onerel));\n> + vacrelstats.indname = NULL;\n> + vacrelstats.phase = VACUUM_ERRCB_PHASE_UNKNOWN; /* Not yet processing */\n> +\n> + /* Setup error traceback support for ereport() */\n> + errcallback.callback = vacuum_error_callback;\n> + errcallback.arg = &vacrelstats;\n> + errcallback.previous = error_context_stack;\n> + error_context_stack = &errcallback;\n> +\n> \n> I think the code can be bit simplified if we have a function\n> setup_vacuum_error_ctx which takes necessary parameters and fill the\n> required vacrelstats params, setup errcallback. Then we can use\n> update_vacuum_error_cbarg at required places.\n\nHeh, he had that and I took it away -- it looked unnatural. I thought\nchanging error_context_stack inside such a function, then resetting it\nback to \"previous\" outside the function, was too leaky an abstraction.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 16 Mar 2020 11:17:33 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 7:47 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Mar-16, Amit Kapila wrote:\n>\n> > 2.\n> > + /* Setup error traceback support for ereport() */\n> > + update_vacuum_error_cbarg(vacrelstats, VACUUM_ERRCB_PHASE_SCAN_HEAP,\n> > + InvalidBlockNumber, NULL);\n> > + errcallback.callback = vacuum_error_callback;\n> > + errcallback.arg = vacrelstats;\n> > + errcallback.previous = error_context_stack;\n> > + error_context_stack = &errcallback;\n> > ..\n> > ..\n> > + /* Init vacrelstats for use as error callback by parallel worker: */\n> > + vacrelstats.relnamespace = get_namespace_name(RelationGetNamespace(onerel));\n> > + vacrelstats.relname = pstrdup(RelationGetRelationName(onerel));\n> > + vacrelstats.indname = NULL;\n> > + vacrelstats.phase = VACUUM_ERRCB_PHASE_UNKNOWN; /* Not yet processing */\n> > +\n> > + /* Setup error traceback support for ereport() */\n> > + errcallback.callback = vacuum_error_callback;\n> > + errcallback.arg = &vacrelstats;\n> > + errcallback.previous = error_context_stack;\n> > + error_context_stack = &errcallback;\n> > +\n> >\n> > I think the code can be bit simplified if we have a function\n> > setup_vacuum_error_ctx which takes necessary parameters and fill the\n> > required vacrelstats params, setup errcallback. Then we can use\n> > update_vacuum_error_cbarg at required places.\n>\n> Heh, he had that and I took it away -- it looked unnatural. I thought\n> changing error_context_stack inside such a function, then resetting it\n> back to \"previous\" outside the function, was too leaky an abstraction.\n>\n\nWe could have something like setup_parser_errposition_callback and\ncancel_parser_errposition_callback which might look a bit better. 
I\nthought to avoid having similar code at different places and it might\nlook a bit cleaner especially because we are adding code to an already\nlarge function like lazy_scan_heap(), but if you don't like the idea,\nthen we can leave it as it is.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Mar 2020 09:15:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Mar 03, 2020 at 10:05:42PM +0900, Masahiko Sawada wrote:\n> I was concerned about fsm vacuum; vacuum error context might show heap\n> scan while actually doing fsm vacuum. But perhaps we can update\n> callback args for that. That would be helpful for user to distinguish\n> that the problem seems to be either in heap vacuum or in fsm vacuum.\n\nOn Tue, Mar 03, 2020 at 04:49:00PM -0300, Alvaro Herrera wrote:\n> Your use of the progress-report enum now has two warts -- the \"-1\"\n> value, and this one,\n> \n> > +#define PROGRESS_VACUUM_PHASE_VACUUM_FSM\t\t7 /* For error reporting only */\n> \n> I'd rather you define a new enum, in lazyvacuum.c.\n\nOn Mon, Mar 16, 2020 at 11:44:25AM +0530, Amit Kapila wrote:\n> > On Wed, Mar 04, 2020 at 04:21:06PM +0900, Masahiko Sawada wrote:\n> > > Thank you for updating the patch. But we have two more places where we\n> > > do fsm vacuum.\n> >\n> > Oops, thanks.\n...\n> it is not clear whether it is a good idea to keep a phase like\n> VACUUM_ERRCB_PHASE_VACUUM_FSM as it has added additional updates in\n> multiple places in the code.\n\nI think you're suggesting to rip out VACUUM_ERRCB_PHASE_VACUUM_FSM, and allow\nreporting any errors there with an error context like \"while scanning heap\".\n\nAn alternative in the three places using VACUUM_ERRCB_PHASE_VACUUM_FSM is to\nset:\n\n|phase = VACUUM_ERRCB_PHASE_UNKNOWN;\n\nto avoid reporting any error context until another phase is set.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 16 Mar 2020 22:51:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Mar 17, 2020 at 9:21 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Mar 03, 2020 at 10:05:42PM +0900, Masahiko Sawada wrote:\n> > I was concerned about fsm vacuum; vacuum error context might show heap\n> > scan while actually doing fsm vacuum. But perhaps we can update\n> > callback args for that. That would be helpful for user to distinguish\n> > that the problem seems to be either in heap vacuum or in fsm vacuum.\n>\n> On Tue, Mar 03, 2020 at 04:49:00PM -0300, Alvaro Herrera wrote:\n> > Your use of the progress-report enum now has two warts -- the \"-1\"\n> > value, and this one,\n> >\n> > > +#define PROGRESS_VACUUM_PHASE_VACUUM_FSM 7 /* For error reporting only */\n> >\n> > I'd rather you define a new enum, in lazyvacuum.c.\n>\n> On Mon, Mar 16, 2020 at 11:44:25AM +0530, Amit Kapila wrote:\n> > > On Wed, Mar 04, 2020 at 04:21:06PM +0900, Masahiko Sawada wrote:\n> > > > Thank you for updating the patch. But we have two more places where we\n> > > > do fsm vacuum.\n> > >\n> > > Oops, thanks.\n> ...\n> > it is not clear whether it is a good idea to keep a phase like\n> > VACUUM_ERRCB_PHASE_VACUUM_FSM as it has added additional updates in\n> > multiple places in the code.\n>\n> I think you're suggesting to rip out VACUUM_ERRCB_PHASE_VACUUM_FSM, and allow\n> reporting any errors there with an error context like \"while scanning heap\".\n>\n\nRight, because that is what we do for progress updates.\n\n> An alternative in the three places using VACUUM_ERRCB_PHASE_VACUUM_FSM is to\n> set:\n>\n> |phase = VACUUM_ERRCB_PHASE_UNKNOWN;\n>\n> to avoid reporting any error context until another phase is set.\n>\n\nRight, that is an alternative, but not sure if it is worth adding\nadditional code. 
I am trying to see if we can get this functionality\nwithout adding code at too many places primarily because the code in\nthis area is already complex, so adding more things can make it\ndifficult to understand.\n\nAnother minor point, don't we need to remove the error stack by doing\n\"error_context_stack = errcallback.previous;\" in parallel_vacuum_main?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Mar 2020 09:52:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Mar 17, 2020 at 9:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n>\n> Another minor point, don't we need to remove the error stack by doing\n> \"error_context_stack = errcallback.previous;\" in parallel_vacuum_main?\n>\n\nFew other comments:\n1. The error in lazy_vacuum_heap can either have phase\nVACUUM_ERRCB_PHASE_INDEX_* or VACUUM_ERRCB_PHASE_VACUUM_HEAP depending\non when it occurs. If it occurs the first time it enters that\nfunction before a call to lazy_vacuum_page, it will use phase\nVACUUM_ERRCB_PHASE_INDEX_*, otherwise, it would use\nVACUUM_ERRCB_PHASE_VACUUM_HEAP. The reason is lazy_vacuum_index or\nlazy_cleanup_index won't reset the phase after leaving that function.\n\n2. Also once we set phase as VACUUM_ERRCB_PHASE_VACUUM_HEAP via\nlazy_vacuum_page, it doesn't seem to be reset to\nVACUUM_ERRCB_PHASE_SCAN_HEAP even when we do scanning of the heap. I\nthink you need to set phase VACUUM_ERRCB_PHASE_SCAN_HEAP inside loop.\n\nI think we need to be a bit more careful in setting/resetting the\nphase information correctly so that it doesn't display the wrong info\nin the context in an error message.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Mar 2020 11:51:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Mar 17, 2020 at 11:51 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 17, 2020 at 9:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Another minor point, don't we need to remove the error stack by doing\n> > \"error_context_stack = errcallback.previous;\" in parallel_vacuum_main?\n> >\n>\n> Few other comments:\n> 1. The error in lazy_vacuum_heap can either have phase\n> VACUUM_ERRCB_PHASE_INDEX_* or VACUUM_ERRCB_PHASE_VACUUM_HEAP depending\n> on when it occurs. If it occurs the first time it enters that\n> function before a call to lazy_vacuum_page, it will use phase\n> VACUUM_ERRCB_PHASE_INDEX_*, otherwise, it would use\n> VACUUM_ERRCB_PHASE_VACUUM_HEAP. The reason is lazy_vacuum_index or\n> lazy_cleanup_index won't reset the phase after leaving that function.\n>\n> 2. Also once we set phase as VACUUM_ERRCB_PHASE_VACUUM_HEAP via\n> lazy_vacuum_page, it doesn't seem to be reset to\n> VACUUM_ERRCB_PHASE_SCAN_HEAP even when we do scanning of the heap. I\n> think you need to set phase VACUUM_ERRCB_PHASE_SCAN_HEAP inside loop.\n>\n> I think we need to be a bit more careful in setting/resetting the\n> phase information correctly so that it doesn't display the wrong info\n> in the context in an error message.\n>\n\nJustin, are you planning to work on the pending comments? If you\nwant, I can try to fix some of these. We have less time left for this\nCF, so we need to do things a bit quicker.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Mar 2020 08:20:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Mar 19, 2020 at 08:20:51AM +0530, Amit Kapila wrote:\n> On Tue, Mar 17, 2020 at 11:51 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Tue, Mar 17, 2020 at 9:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Another minor point, don't we need to remove the error stack by doing\n> > > \"error_context_stack = errcallback.previous;\" in parallel_vacuum_main?\n\nIt's a good idea, thanks.\n\n> > Few other comments:\n> > 1. The error in lazy_vacuum_heap can either have phase\n> > VACUUM_ERRCB_PHASE_INDEX_* or VACUUM_ERRCB_PHASE_VACUUM_HEAP depending\n> > on when it occurs. If it occurs the first time it enters that\n> > function before a call to lazy_vacuum_page, it will use phase\n> > VACUUM_ERRCB_PHASE_INDEX_*, otherwise, it would use\n> > VACUUM_ERRCB_PHASE_VACUUM_HEAP. The reason is lazy_vacuum_index or\n> > lazy_cleanup_index won't reset the phase after leaving that function.\n\nI think you mean that lazy_vacuum_heap() calls ReadBuffer itself, so needs to\nbe in phase VACUUM_HEAP even before it calls vacuum_page().\n\n> > 2. Also once we set phase as VACUUM_ERRCB_PHASE_VACUUM_HEAP via\n> > lazy_vacuum_page, it doesn't seem to be reset to\n> > VACUUM_ERRCB_PHASE_SCAN_HEAP even when we do scanning of the heap. I\n> > think you need to set phase VACUUM_ERRCB_PHASE_SCAN_HEAP inside loop.\n\nYou're right. PHASE_SCAN_HEAP was set, but only inside a conditional.\n\nBoth those issues are due to a change in the most recent patch. In the\nprevious patch, the PHASE_VACUUM_HEAP was set only by lazy_vacuum_heap(), and I\nmoved it recently to vacuum_page. But it needs to be copied, as you point out.\n\nThat's unfortunate due to a lack of symmetry: lazy_vacuum_page does its own\nprogress update, which suggests to me that it should also set its own error\ncallback. It'd be nicer if EITHER the calling functions did that (scan_heap()\nand vacuum_heap()) or if it was sufficient for the called function\n(vacuum_page()) to do it. 
\n\n> > I think we need to be a bit more careful in setting/resetting the\n> > phase information correctly so that it doesn't display the wrong info\n> > in the context in an error message.\n> \n> Justin, are you planning to work on the pending comments? If you\n> want, I can try to fix some of these. We have less time left for this\n> CF, so we need to do things a bit quicker.\n\nThanks for reminding.\n\n-- \nJustin",
"msg_date": "Wed, 18 Mar 2020 23:07:59 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Mar 19, 2020 at 9:38 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Mar 19, 2020 at 08:20:51AM +0530, Amit Kapila wrote:\n>\n> > > Few other comments:\n> > > 1. The error in lazy_vacuum_heap can either have phase\n> > > VACUUM_ERRCB_PHASE_INDEX_* or VACUUM_ERRCB_PHASE_VACUUM_HEAP depending\n> > > on when it occurs. If it occurs the first time it enters that\n> > > function before a call to lazy_vacuum_page, it will use phase\n> > > VACUUM_ERRCB_PHASE_INDEX_*, otherwise, it would use\n> > > VACUUM_ERRCB_PHASE_VACUUM_HEAP. The reason is lazy_vacuum_index or\n> > > lazy_cleanup_index won't reset the phase after leaving that function.\n>\n> I think you mean that lazy_vacuum_heap() calls ReadBuffer itself, so needs to\n> be in phase VACUUM_HEAP even before it calls vacuum_page().\n>\n\nRight.\n\n> > > 2. Also once we set phase as VACUUM_ERRCB_PHASE_VACUUM_HEAP via\n> > > lazy_vacuum_page, it doesn't seem to be reset to\n> > > VACUUM_ERRCB_PHASE_SCAN_HEAP even when we do scanning of the heap. I\n> > > think you need to set phase VACUUM_ERRCB_PHASE_SCAN_HEAP inside loop.\n>\n> You're right. PHASE_SCAN_HEAP was set, but only inside a conditional.\n>\n\nI think if we do it inside for loop, then we don't need to set it\nconditionally at multiple places. I have changed like that in the\nattached patch, see if that makes sense to you.\n\n> Both those issues are due to a change in the most recent patch. In the\n> previous patch, the PHASE_VACUUM_HEAP was set only by lazy_vacuum_heap(), and I\n> moved it recently to vacuum_page. But it needs to be copied, as you point out.\n>\n> That's unfortunate due to a lack of symmetry: lazy_vacuum_page does its own\n> progress update, which suggests to me that it should also set its own error\n> callback. 
It'd be nicer if EITHER the calling functions did that (scan_heap()\n> and vacuum_heap()) or if it was sufficient for the called function\n> (vacuum_page()) to do it.\n>\n\nRight, but adding in callers will spread at multiple places.\n\nI have made a few additional changes in the attached. (a) Removed\nVACUUM_ERRCB_PHASE_VACUUM_FSM as I think we have to add it at many\nplaces, you seem to have added for FreeSpaceMapVacuumRange() but not\nfor RecordPageWithFreeSpace(), (b) Reset the phase to\nVACUUM_ERRCB_PHASE_UNKNOWN after finishing the work for a particular\nphase, so that the new phase shouldn't continue in the callers.\n\nI have another idea to make (b) better. How about if a call to\nupdate_vacuum_error_cbarg returns information of old phase (blkno,\nphase, and indname) along with what it is doing now and then once the\nwork for the current phase is over it can reset it back with old phase\ninformation? This way the callee after finishing the new phase work\nwould be able to reset back to the old phase. This will work\nsomething similar to our MemoryContextSwitchTo.\n--\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 19 Mar 2020 15:18:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Mar 19, 2020 at 03:18:32PM +0530, Amit Kapila wrote:\n> > You're right. PHASE_SCAN_HEAP was set, but only inside a conditional.\n> \n> I think if we do it inside for loop, then we don't need to set it\n> conditionally at multiple places. I have changed like that in the\n> attached patch, see if that makes sense to you.\n\nYes, makes sense, and it's right near pgstat_progress_update_param, which is\nnice.\n\n> > Both those issues are due to a change in the most recent patch. In the\n> > previous patch, the PHASE_VACUUM_HEAP was set only by lazy_vacuum_heap(), and I\n> > moved it recently to vacuum_page. But it needs to be copied, as you point out.\n> >\n> > That's unfortunate due to a lack of symmetry: lazy_vacuum_page does its own\n> > progress update, which suggests to me that it should also set its own error\n> > callback. It'd be nicer if EITHER the calling functions did that (scan_heap()\n> > and vacuum_heap()) or if it was sufficient for the called function\n> > (vacuum_page()) to do it.\n> \n> Right, but adding in callers will spread at multiple places.\n> \n> I have made a few additional changes in the attached. (a) Removed\n> VACUUM_ERRCB_PHASE_VACUUM_FSM as I think we have to add it at many\n> places, you seem to have added for FreeSpaceMapVacuumRange() but not\n> for RecordPageWithFreeSpace(), (b) Reset the phase to\n> VACUUM_ERRCB_PHASE_UNKNOWN after finishing the work for a particular\n> phase, so that the new phase shouldn't continue in the callers.\n> \n> I have another idea to make (b) better. How about if a call to\n> update_vacuum_error_cbarg returns information of old phase (blkno,\n> phase, and indname) along with what it is doing now and then once the\n> work for the current phase is over it can reset it back with old phase\n> information? This way the callee after finishing the new phase work\n> would be able to reset back to the old phase. 
This will work\n> something similar to our MemoryContextSwitchTo.\n\nI was going to suggest that we could do that by passing in a pointer to a local\nvariable \"LVRelStats olderrcbarg\", like:\n| update_vacuum_error_cbarg(vacrelstats, VACUUM_ERRCB_PHASE_SCAN_HEAP,\n| blkno, NULL, &olderrcbarg);\n\nand then later call:\n|update_vacuum_error_cbarg(vacrelstats, olderrcbarg.phase,\n| olderrcbarg.blkno,\n| olderrcbarg.indname,\n| NULL);\n\nI implemented it in a separate patch, but it may be a bad idea, due to freeing\nindname. To exercise it, I tried to cause a crash by changing \"else if\n(errcbarg->indname)\" to \"if\" without else, but wasn't able to cause a crash,\nprobably just due to having a narrow timing window.\n\nAs written, we only pfree indname if we do actually \"reset\" the cbarg, which is\nin the two routines handling indexes. It's probably a good idea to pass the\nindname rather than the relation in any case.\n\nI rebased the rest of my patches on top of yours.\n\n-- \nJustin",
"msg_date": "Thu, 19 Mar 2020 15:29:31 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Mar 19, 2020 at 03:29:31PM -0500, Justin Pryzby wrote:\n> I was going to suggest that we could do that by passing in a pointer to a local\n> variable \"LVRelStats olderrcbarg\", like:\n> | update_vacuum_error_cbarg(vacrelstats, VACUUM_ERRCB_PHASE_SCAN_HEAP,\n> | blkno, NULL, &olderrcbarg);\n> \n> and then later call:\n> |update_vacuum_error_cbarg(vacrelstats, olderrcbarg.phase,\n> | olderrcbarg.blkno,\n> | olderrcbarg.indname,\n> | NULL);\n> \n> I implemented it in a separate patch, but it may be a bad idea, due to freeing\n> indname. To exercise it, I tried to cause a crash by changing \"else if\n> (errcbarg->indname)\" to \"if\" without else, but wasn't able to cause a crash,\n> probably just due to having a narrow timing window.\n\nI realized it was better for the caller to just assign the struct on its own.\n\nWhich gives me an excuse for resending patch, which is needed since I spent too\nmuch time testing this that I failed to update the tip commit for the new\nargument.\n\n> It's probably a good idea to pass the indname rather than the relation in any\n> case.\n\nI included that with 0001.\nI also fixed the argument name in the prototype (Relation rel vs indrel).\n\nAnd removed these, which were the whole motivation behind saving the values.\n|Set the error context while continuing heap scan\n\n-- \nJustin",
"msg_date": "Thu, 19 Mar 2020 19:29:09 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, Mar 20, 2020 at 5:59 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Mar 19, 2020 at 03:29:31PM -0500, Justin Pryzby wrote:\n> > I was going to suggest that we could do that by passing in a pointer to a local\n> > variable \"LVRelStats olderrcbarg\", like:\n> > | update_vacuum_error_cbarg(vacrelstats, VACUUM_ERRCB_PHASE_SCAN_HEAP,\n> > | blkno, NULL, &olderrcbarg);\n> >\n> > and then later call:\n> > |update_vacuum_error_cbarg(vacrelstats, olderrcbarg.phase,\n> > | olderrcbarg.blkno,\n> > | olderrcbarg.indname,\n> > | NULL);\n> >\n> > I implemented it in a separate patch, but it may be a bad idea, due to freeing\n> > indname. To exercise it, I tried to cause a crash by changing \"else if\n> > (errcbarg->indname)\" to \"if\" without else, but wasn't able to cause a crash,\n> > probably just due to having a narrow timing window.\n>\n> I realized it was better for the caller to just assign the struct on its own.\n>\n\nThat makes sense. I have a few more comments:\n\n1.\n+ VACUUM_ERRCB_PHASE_INDEX_CLEANUP,\n+} errcb_phase;\n\nWhy do you need a comma after the last element in the above enum?\n\n2.\n+ /* Setup error traceback support for ereport() */\n+ update_vacuum_error_cbarg(vacrelstats, VACUUM_ERRCB_PHASE_SCAN_HEAP,\n+ InvalidBlockNumber, NULL);\n+ errcallback.callback = vacuum_error_callback;\n+ errcallback.arg = vacrelstats;\n+ errcallback.previous = error_context_stack;\n+ error_context_stack = &errcallback;\n\nWhy do we need to call update_vacuum_error_cbarg at the above place\nafter we have added a new one inside for.. loop?\n\n3.\n+ * free_oldindex is true if the previous \"indname\" should be freed. 
It must be\n+ * false if the caller has copied the old LVRelSTats,\n\n/LVRelSTats/LVRelStats\n\n4.\n/* Clear the error traceback phase */\n update_vacuum_error_cbarg(vacrelstats,\n- VACUUM_ERRCB_PHASE_UNKNOWN, InvalidBlockNumber,\n- NULL);\n+ olderrcbarg.phase,\n+ olderrcbarg.blkno,\n+ olderrcbarg.indname,\n+ true);\n\nAt this and similar places, change the comment to something like:\n\"Reset the old phase information for error traceback\".\n\n5.\nSubject: [PATCH v28 3/5] Drop reltuples\n\n---\n src/backend/access/heap/vacuumlazy.c | 24 +++++++++++-------------\n 1 file changed, 11 insertions(+), 13 deletions(-)\n\nIs this patch directly related to the main patch (vacuum errcontext to\nshow block being processed) or is it an independent improvement of\ncode?\n\n6.\n[PATCH v28 4/5] add callback for truncation\n\n+ VACUUM_ERRCB_PHASE_TRUNCATE,\n+ VACUUM_ERRCB_PHASE_TRUNCATE_PREFETCH,\n\nDo we really need separate phases for truncate and truncate_prefetch?\nWe have only one phase for a progress update, similarly, I think\nhaving one phase for error reporting should be sufficient. It will\nalso reduce the number of places where we need to call\nupdate_vacuum_error_cbarg. I think we can set\nVACUUM_ERRCB_PHASE_TRUNCATE before count_nondeletable_pages and reset\nit at the place you are doing right now in the patch.\n\n7. Is there a reason to keep the truncate phase patch separate from\nthe main patch? If not, let's merge them.\n\n8. Can we think of some easy way to add tests for this patch?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Mar 2020 11:24:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, Mar 20, 2020 at 11:24:25AM +0530, Amit Kapila wrote:\n> On Fri, Mar 20, 2020 at 5:59 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> That makes sense. I have a few more comments:\n> \n> 1.\n> + VACUUM_ERRCB_PHASE_INDEX_CLEANUP,\n> +} errcb_phase;\n> \n> Why do you need a comma after the last element in the above enum?\n\nIt's not needed but a common convention to avoid needing a two-line patch in\norder to add a line at the end, like:\n\n- foo\n+ foo,\n+ bar\n\n> 2. update_vacuum_error_cbarg(vacrelstats, VACUUM_ERRCB_PHASE_SCAN_HEAP, InvalidBlockNumber, NULL);\n> \n> Why do we need to call update_vacuum_error_cbarg at the above place\n> after we have added a new one inside for.. loop?\n\nIf we're going to update the error_context_stack global point to our callback,\nwithout our vacrelstats arg, it'd better be initialized. I changed to do\nvacrelstats->phase = UNKNOWN after its allocation in heap_vacuum_rel().\nThat matches parallel_vacuum_main().\n\n> 4. At this and similar places, change the comment to something like:\n> \"Reset the old phase information for error traceback\".\n\nI did this:\n/* Revert back to the old phase information for error traceback */\n\n> 5. Subject: [PATCH v28 3/5] Drop reltuples\n> \n> Is this patch directly related to the main patch (vacuum errcontext to\n> show block being processed) or is it an independent improvement of\n> code?\n\nIt's a cleanup after implementing the new feature. I left it as a separate\npatch to make review easier of the essential patch and of the cleanup. \nSee here:\nhttps://www.postgresql.org/message-id/CA%2Bfd4k4JA3YkP6-HUqHOqu6cTGqqZUhBfsMqQ4WXkD0Y8uotUg%40mail.gmail.com\n\n> 6. [PATCH v28 4/5] add callback for truncation\n> \n> + VACUUM_ERRCB_PHASE_TRUNCATE,\n> + VACUUM_ERRCB_PHASE_TRUNCATE_PREFETCH,\n> \n> Do we really need separate phases for truncate and truncate_prefetch?\n\nThe context is that there was a request to add err context for (yet another)\nphase, TRUNCATE. 
But I insisted on adding it to prefetch, too, since it does\nReadBuffer. But there was an objection that the error might be misleading if\nit said \"while truncating\" but it was actually \"prefetching to truncate\".\n\n> 7. Is there a reason to keep the truncate phase patch separate from\n> the main patch? If not, let's merge them.\n\nThey were separate since it's the most-recently added part, and (as now)\nthere's still discussion about it.\n\n> 8. Can we think of some easy way to add tests for this patch?\n\nIs it possible to make a corrupted index that errors during scan in the\nregress tests?\n\nThanks for looking.\n\n-- \nJustin",
"msg_date": "Fri, 20 Mar 2020 01:51:20 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, Mar 20, 2020 at 12:21 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Mar 20, 2020 at 11:24:25AM +0530, Amit Kapila wrote:\n> > On Fri, Mar 20, 2020 at 5:59 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > That makes sense. I have a few more comments:\n> >\n> > 1.\n> > + VACUUM_ERRCB_PHASE_INDEX_CLEANUP,\n> > +} errcb_phase;\n> >\n> > Why do you need a comma after the last element in the above enum?\n>\n> It's not needed but a common convention to avoid needing a two-line patch in\n> order to add a line at the end, like:\n>\n> - foo\n> + foo,\n> + bar\n>\n\nI don't think this is required and we don't have this at other places,\nso I removed it. Apart from that, I made a few additional changes\n(a) moved the typedef to a different palace as it was looking odd\nin-between other struct defines, (b) renamed the enum ErrCbPhase as\nthat suits more to nearby other trypedefs (c) added/edited comments at\nfew places, (d) ran pgindent.\n\nSee, how the attached looks? I have written a commit message as well,\nsee if I have missed anyone is from the credit list?\n\n>\n> > 8. Can we think of some easy way to add tests for this patch?\n>\n> Is it possible to make an corrupted index which errors during scan during\n> regress tests ?\n>\n\nI don't think so.\n\nFor now, let's focus on the main patch. Once that is committed, we\ncan look into the other code rearrangement/cleanup patches.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 20 Mar 2020 16:58:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, Mar 20, 2020 at 04:58:08PM +0530, Amit Kapila wrote:\n> See, how the attached looks? I have written a commit message as well,\n> see if I have missed anyone is from the credit list?\n\nThanks for looking again.\n\nCouple tweaks:\n\n+/* Phases of vacuum during which an error can occur. */\n\nCan you say: \"during which we report error context\"\nOtherwise it sounds like we've somehow precluded errors from happening anywhere\nelse, which I don't think we can claim.\n\nIn the commit messsage:\n|The additional information displayed will be block number for errors\n|occurred while processing heap and index name for errors occurred\n|while processing the index.\n\n=> error occurring\n\n|This will help us in diagnosing the problems that occurred during a\n|vacuum. For ex. due to corruption if we get some error while vacuuming,\n\n=> problems that occur\n\nMaybe it should say that this will help both 1) admins who have corruption due\nto hardware (say); and, 2) developer's with corruption due to a bug.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 20 Mar 2020 09:53:59 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, Mar 20, 2020 at 8:24 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Mar 20, 2020 at 04:58:08PM +0530, Amit Kapila wrote:\n> > See, how the attached looks? I have written a commit message as well,\n> > see if I have missed anyone is from the credit list?\n>\n> Thanks for looking again.\n>\n> Couple tweaks:\n>\n\nI have addressed your comments in the attached patch. Today, while\ntesting error messages from various phases, I noticed that the patch\nfails to display error context if the error occurs during the truncate\nphase. The reason was that we had popped the error stack in\nlazy_scan_heap due to which it never calls the callback. I think we\nneed to set up callback at a higher level as is done in the attached\npatch. I have done the testing by inducing errors in various phases\nand it prints the required information. Let me know what you think of\nthe attached?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 21 Mar 2020 13:00:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sat, Mar 21, 2020 at 01:00:03PM +0530, Amit Kapila wrote:\n> I have addressed your comments in the attached patch. Today, while\n> testing error messages from various phases, I noticed that the patch\n> fails to display error context if the error occurs during the truncate\n> phase. The reason was that we had popped the error stack in\n> lazy_scan_heap due to which it never calls the callback. I think we\n> need to set up callback at a higher level as is done in the attached\n> patch. I have done the testing by inducing errors in various phases\n> and it prints the required information. Let me know what you think of\n> the attached?\n\nThanks. My tests with TRUNCATE were probably back when we had multiple\npush/pop cycles of local error callbacks.\n\nThis passes my tests.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 21 Mar 2020 03:03:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sat, 21 Mar 2020 at 16:30, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Mar 20, 2020 at 8:24 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Fri, Mar 20, 2020 at 04:58:08PM +0530, Amit Kapila wrote:\n> > > See, how the attached looks? I have written a commit message as well,\n> > > see if I have missed anyone is from the credit list?\n> >\n> > Thanks for looking again.\n> >\n> > Couple tweaks:\n> >\n>\n> I have addressed your comments in the attached patch. Today, while\n> testing error messages from various phases, I noticed that the patch\n> fails to display error context if the error occurs during the truncate\n> phase. The reason was that we had popped the error stack in\n> lazy_scan_heap due to which it never calls the callback. I think we\n> need to set up callback at a higher level as is done in the attached\n> patch. I have done the testing by inducing errors in various phases\n> and it prints the required information. Let me know what you think of\n> the attached?\n\nI've looked at the current version patch.\n\n+/* Phases of vacuum during which we report error context. */\n+typedef enum\n+{\n+ VACUUM_ERRCB_PHASE_UNKNOWN,\n+ VACUUM_ERRCB_PHASE_SCAN_HEAP,\n+ VACUUM_ERRCB_PHASE_VACUUM_INDEX,\n+ VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n+ VACUUM_ERRCB_PHASE_INDEX_CLEANUP,\n+ VACUUM_ERRCB_PHASE_TRUNCATE\n+} ErrCbPhase;\n\nI've already commented on earlier patch but I personally think we'd be\nbetter to report freespace map vacuum as a separate phase. The\nprogress report of vacuum command is used to know the progress but\nthis error context would be useful for diagnostic of failure such as\ndisk corruption. 
For the visibility map, I think the visibility map bits\nthat are processed during vacuum correspond exactly to the block\nnumbers, but since freespace map vacuum processes a range of blocks,\nI've sometimes had trouble identifying the cause of the problem.\nWhat do you think?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 23 Mar 2020 16:39:54 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Mon, Mar 23, 2020 at 04:39:54PM +0900, Masahiko Sawada wrote:\n> I've already commented on earlier patch but I personally think we'd be\n> better to report freespace map vacuum as a separate phase. The\n> progress report of vacuum command is used to know the progress but\n> this error context would be useful for diagnostic of failure such as\n> disk corruption. For visibility map, I think the visibility map bit\n> that are processed during vacuum is exactly corresponding to the block\n> number but since freespace map vacuum processes the range of blocks\n> I've sometimes had trouble with identifying the cause of the problem.\n\nYea, and it would be misleading if we reported \"while scanning block..of\nrelation\" if we actually failed while writing its FSM.\n\nMy previous patches did this:\n\n+ case VACUUM_ERRCB_PHASE_VACUUM_FSM: \n+ errcontext(\"while vacuuming free space map of relation \\\"%s.%s\\\"\", \n+ cbarg->relnamespace, cbarg->relname); \n+ break; \n\nAre you suggesting it should report the start (or end?) block number ?\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 23 Mar 2020 03:16:39 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Mon, Mar 23, 2020 at 1:46 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Mar 23, 2020 at 04:39:54PM +0900, Masahiko Sawada wrote:\n> > I've already commented on earlier patch but I personally think we'd be\n> > better to report freespace map vacuum as a separate phase. The\n> > progress report of vacuum command is used to know the progress but\n> > this error context would be useful for diagnostic of failure such as\n> > disk corruption. For visibility map, I think the visibility map bit\n> > that are processed during vacuum is exactly corresponding to the block\n> > number but since freespace map vacuum processes the range of blocks\n> > I've sometimes had trouble with identifying the cause of the problem.\n>\n\nWhat extra information we can print that can help? The main problem I\nsee is that we need to sprinkle errorcallback update function at many\nmore places. We can think of writing a wrapper function for FSM calls\nused in a vacuum, but I think those can be used only for vacuum.\n\n> Yea, and it would be misleading if we reported \"while scanning block..of\n> relation\" if we actually failed while writing its FSM.\n>\n> My previous patches did this:\n>\n> + case VACUUM_ERRCB_PHASE_VACUUM_FSM:\n> + errcontext(\"while vacuuming free space map of relation \\\"%s.%s\\\"\",\n> + cbarg->relnamespace, cbarg->relname);\n> + break;\n>\n\nIn what kind of errors will this help?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 Mar 2020 14:25:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sat, Mar 21, 2020 at 1:33 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sat, Mar 21, 2020 at 01:00:03PM +0530, Amit Kapila wrote:\n> > I have addressed your comments in the attached patch. Today, while\n> > testing error messages from various phases, I noticed that the patch\n> > fails to display error context if the error occurs during the truncate\n> > phase. The reason was that we had popped the error stack in\n> > lazy_scan_heap due to which it never calls the callback. I think we\n> > need to set up callback at a higher level as is done in the attached\n> > patch. I have done the testing by inducing errors in various phases\n> > and it prints the required information. Let me know what you think of\n> > the attached?\n>\n> Thanks. My tests with TRUNCATE were probably back when we had multiple\n> push/pop cycles of local error callbacks.\n>\n> This passes my tests.\n>\n\nToday, I have done some additional testing with parallel workers and\nit seems to display the appropriate errors. See below:\n\npostgres=# create table t1(c1 int, c2 char(500), c3 char(500));\nCREATE TABLE\npostgres=# insert into t1 values(generate_series(1,300000),'aaaa', 'bbbb');\nINSERT 0 300000\npostgres=# delete from t1 where c1 > 200000;\nDELETE 100000\npostgres=# vacuum t1;\nERROR: Error induced during index vacuum\nCONTEXT: while vacuuming index \"idx_t1_c3\" of relation \"public.t1\"\nparallel worker\nwhile vacuuming index \"idx_t1_c2\" of relation \"public.t1\"\n\nHere, you can see that the index names displayed in two messages are\ndifferent, basically, the leader backend got the error generated in\nworker when it was vacuuming the other index.\n\nI have used the attached patch to induce error.\n\nI think the patch is in good shape now and I am happy with it. 
We can\nthink of proceeding with this unless we want the further enhancement\nfor FSM which I am not sure is a good idea.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 23 Mar 2020 15:11:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Mon, Mar 23, 2020 at 02:25:14PM +0530, Amit Kapila wrote:\n> > Yea, and it would be misleading if we reported \"while scanning block..of\n> > relation\" if we actually failed while writing its FSM.\n> >\n> > My previous patches did this:\n> >\n> > + case VACUUM_ERRCB_PHASE_VACUUM_FSM:\n> > + errcontext(\"while vacuuming free space map of relation \\\"%s.%s\\\"\",\n> > + cbarg->relnamespace, cbarg->relname);\n> > + break;\n> >\n> \n> In what kind of errors will this help?\n\nIf there's an I/O error on an _fsm file, for one.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 23 Mar 2020 23:16:16 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 9:46 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Mar 23, 2020 at 02:25:14PM +0530, Amit Kapila wrote:\n> > > Yea, and it would be misleading if we reported \"while scanning block..of\n> > > relation\" if we actually failed while writing its FSM.\n> > >\n> > > My previous patches did this:\n> > >\n> > > + case VACUUM_ERRCB_PHASE_VACUUM_FSM:\n> > > + errcontext(\"while vacuuming free space map of relation \\\"%s.%s\\\"\",\n> > > + cbarg->relnamespace, cbarg->relname);\n> > > + break;\n> > >\n> >\n> > In what kind of errors will this help?\n>\n> If there's an I/O error on an _fsm file, for one.\n>\n\nIf there is a read or write failure, then we give error like below\nwhich already has required information.\nereport(ERROR,\n(errcode_for_file_access(),\nerrmsg(\"could not read block %u in file \\\"%s\\\": %m\",\nblocknum, FilePathName(v->mdfd_vfd))));\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Mar 2020 10:22:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, 24 Mar 2020 at 13:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 24, 2020 at 9:46 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Mon, Mar 23, 2020 at 02:25:14PM +0530, Amit Kapila wrote:\n> > > > Yea, and it would be misleading if we reported \"while scanning block..of\n> > > > relation\" if we actually failed while writing its FSM.\n> > > >\n> > > > My previous patches did this:\n> > > >\n> > > > + case VACUUM_ERRCB_PHASE_VACUUM_FSM:\n> > > > + errcontext(\"while vacuuming free space map of relation \\\"%s.%s\\\"\",\n> > > > + cbarg->relnamespace, cbarg->relname);\n> > > > + break;\n> > > >\n> > >\n> > > In what kind of errors will this help?\n> >\n> > If there's an I/O error on an _fsm file, for one.\n> >\n>\n> If there is a read or write failure, then we give error like below\n> which already has required information.\n> ereport(ERROR,\n> (errcode_for_file_access(),\n> errmsg(\"could not read block %u in file \\\"%s\\\": %m\",\n> blocknum, FilePathName(v->mdfd_vfd))));\n\nYeah, you're right. We, however, cannot see that the error happened\nwhile recording freespace or while vacuuming freespace map but perhaps\nwe can see the situation by seeing the error message in most cases. So\nI'm okay with the current set of phases.\n\nI'll also test the current version of patch today.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 24 Mar 2020 18:07:18 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 2:37 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 24 Mar 2020 at 13:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Mar 24, 2020 at 9:46 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > On Mon, Mar 23, 2020 at 02:25:14PM +0530, Amit Kapila wrote:\n> > > > > Yea, and it would be misleading if we reported \"while scanning block..of\n> > > > > relation\" if we actually failed while writing its FSM.\n> > > > >\n> > > > > My previous patches did this:\n> > > > >\n> > > > > + case VACUUM_ERRCB_PHASE_VACUUM_FSM:\n> > > > > + errcontext(\"while vacuuming free space map of relation \\\"%s.%s\\\"\",\n> > > > > + cbarg->relnamespace, cbarg->relname);\n> > > > > + break;\n> > > > >\n> > > >\n> > > > In what kind of errors will this help?\n> > >\n> > > If there's an I/O error on an _fsm file, for one.\n> > >\n> >\n> > If there is a read or write failure, then we give error like below\n> > which already has required information.\n> > ereport(ERROR,\n> > (errcode_for_file_access(),\n> > errmsg(\"could not read block %u in file \\\"%s\\\": %m\",\n> > blocknum, FilePathName(v->mdfd_vfd))));\n>\n> Yeah, you're right. We, however, cannot see that the error happened\n> while recording freespace or while vacuuming freespace map but perhaps\n> we can see the situation by seeing the error message in most cases. So\n> I'm okay with the current set of phases.\n>\n> I'll also test the current version of patch today.\n>\n\nokay, thanks.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Mar 2020 14:48:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, 24 Mar 2020 at 18:19, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 24, 2020 at 2:37 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 24 Mar 2020 at 13:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Mar 24, 2020 at 9:46 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > > On Mon, Mar 23, 2020 at 02:25:14PM +0530, Amit Kapila wrote:\n> > > > > > Yea, and it would be misleading if we reported \"while scanning block..of\n> > > > > > relation\" if we actually failed while writing its FSM.\n> > > > > >\n> > > > > > My previous patches did this:\n> > > > > >\n> > > > > > + case VACUUM_ERRCB_PHASE_VACUUM_FSM:\n> > > > > > + errcontext(\"while vacuuming free space map of relation \\\"%s.%s\\\"\",\n> > > > > > + cbarg->relnamespace, cbarg->relname);\n> > > > > > + break;\n> > > > > >\n> > > > >\n> > > > > In what kind of errors will this help?\n> > > >\n> > > > If there's an I/O error on an _fsm file, for one.\n> > > >\n> > >\n> > > If there is a read or write failure, then we give error like below\n> > > which already has required information.\n> > > ereport(ERROR,\n> > > (errcode_for_file_access(),\n> > > errmsg(\"could not read block %u in file \\\"%s\\\": %m\",\n> > > blocknum, FilePathName(v->mdfd_vfd))));\n> >\n> > Yeah, you're right. We, however, cannot see that the error happened\n> > while recording freespace or while vacuuming freespace map but perhaps\n> > we can see the situation by seeing the error message in most cases. 
So\n> > I'm okay with the current set of phases.\n> >\n> > I'll also test the current version of patch today.\n> >\n>\n> okay, thanks.\n\nI've read through the latest version patch (v31), here are my comments:\n\n1.\n+ /* Update error traceback information */\n+ olderrcbarg = *vacrelstats;\n+ update_vacuum_error_cbarg(vacrelstats,\n+ VACUUM_ERRCB_PHASE_TRUNCATE,\nnew_rel_pages, NULL,\n+ false);\n+\n /*\n * Scan backwards from the end to verify that the end pages actually\n * contain no tuples. This is *necessary*, not optional, because\n * other backends could have added tuples to these pages whilst we\n * were vacuuming.\n */\n new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\n\nWe need to set the error context after setting new_rel_pages.\n\n2.\n+ vacrelstats->relnamespace =\nget_namespace_name(RelationGetNamespace(onerel));\n+ vacrelstats->relname = pstrdup(RelationGetRelationName(onerel));\n\nI think we can pfree these two variables to avoid a memory leak during\nvacuum on multiple relations.\n\n3.\n+/* Phases of vacuum during which we report error context. */\n+typedef enum\n+{\n+ VACUUM_ERRCB_PHASE_UNKNOWN,\n+ VACUUM_ERRCB_PHASE_SCAN_HEAP,\n+ VACUUM_ERRCB_PHASE_VACUUM_INDEX,\n+ VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n+ VACUUM_ERRCB_PHASE_INDEX_CLEANUP,\n+ VACUUM_ERRCB_PHASE_TRUNCATE\n+} ErrCbPhase;\n\nComparing to the vacuum progress phases, there is not a phase for\nfinal cleanup which includes updating the relation stats. Is there any\nreason why we don't have that phase for ErrCbPhase?\n\nThe type name ErrCbPhase seems to be very generic name, how about\nVacErrCbPhase or VacuumErrCbPhase?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 24 Mar 2020 21:47:30 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 6:18 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n>\n> I've read through the latest version patch (v31), here are my comments:\n>\n> 1.\n> + /* Update error traceback information */\n> + olderrcbarg = *vacrelstats;\n> + update_vacuum_error_cbarg(vacrelstats,\n> + VACUUM_ERRCB_PHASE_TRUNCATE,\n> new_rel_pages, NULL,\n> + false);\n> +\n> /*\n> * Scan backwards from the end to verify that the end pages actually\n> * contain no tuples. This is *necessary*, not optional, because\n> * other backends could have added tuples to these pages whilst we\n> * were vacuuming.\n> */\n> new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\n>\n> We need to set the error context after setting new_rel_pages.\n>\n\nWe want to cover the errors raised in count_nondeletable_pages(). In\nan earlier version of the patch, we had TRUNCATE_PREFETCH phase which\nuse to cover those errors, but that was not good as we were\nsetting/resetting it multiple times and it was not clear such a\nseparate phase would add any value.\n\n> 2.\n> + vacrelstats->relnamespace =\n> get_namespace_name(RelationGetNamespace(onerel));\n> + vacrelstats->relname = pstrdup(RelationGetRelationName(onerel));\n>\n> I think we can pfree these two variables to avoid a memory leak during\n> vacuum on multiple relations.\n>\n\nYeah, I had also thought about it but I noticed that we are not\nfreeing for vacrelstats. Also, I think the memory is allocated in\nTopTransactionContext which should be freed via\nCommitTransactionCommand before vacuuming of the next relation, so not\nsure if there is much value in freeing those variables.\n\n> 3.\n> +/* Phases of vacuum during which we report error context. 
*/\n> +typedef enum\n> +{\n> + VACUUM_ERRCB_PHASE_UNKNOWN,\n> + VACUUM_ERRCB_PHASE_SCAN_HEAP,\n> + VACUUM_ERRCB_PHASE_VACUUM_INDEX,\n> + VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n> + VACUUM_ERRCB_PHASE_INDEX_CLEANUP,\n> + VACUUM_ERRCB_PHASE_TRUNCATE\n> +} ErrCbPhase;\n>\n> Comparing to the vacuum progress phases, there is not a phase for\n> final cleanup which includes updating the relation stats. Is there any\n> reason why we don't have that phase for ErrCbPhase?\n>\n\nI think we can add it if we want, but it was not clear to me if that is helpful.\n\n> The type name ErrCbPhase seems to be very generic name, how about\n> VacErrCbPhase or VacuumErrCbPhase?\n>\n\nIt sounds like a better name.\n\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Mar 2020 19:07:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 07:07:03PM +0530, Amit Kapila wrote:\n> On Tue, Mar 24, 2020 at 6:18 PM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n> > 1.\n> > + /* Update error traceback information */\n> > + olderrcbarg = *vacrelstats;\n> > + update_vacuum_error_cbarg(vacrelstats,\n> > + VACUUM_ERRCB_PHASE_TRUNCATE,\n> > new_rel_pages, NULL,\n> > + false);\n> > +\n> > /*\n> > * Scan backwards from the end to verify that the end pages actually\n> > * contain no tuples. This is *necessary*, not optional, because\n> > * other backends could have added tuples to these pages whilst we\n> > * were vacuuming.\n> > */\n> > new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\n> >\n> > We need to set the error context after setting new_rel_pages.\n> \n> We want to cover the errors raised in count_nondeletable_pages(). In\n> an earlier version of the patch, we had TRUNCATE_PREFETCH phase which\n> use to cover those errors, but that was not good as we were\n> setting/resetting it multiple times and it was not clear such a\n> separate phase would add any value.\n\nI insisted on covering count_nondeletable_pages since it calls ReadBuffer(),\nbut I think we need to at least set vacrelsats->blkno = new_rel_pages, since it\nmay be different, right ?\n\n> > 2.\n> > + vacrelstats->relnamespace =\n> > get_namespace_name(RelationGetNamespace(onerel));\n> > + vacrelstats->relname = pstrdup(RelationGetRelationName(onerel));\n> >\n> > I think we can pfree these two variables to avoid a memory leak during\n> > vacuum on multiple relations.\n> \n> Yeah, I had also thought about it but I noticed that we are not\n> freeing for vacrelstats. 
Also, I think the memory is allocated in\n> TopTransactionContext which should be freed via\n> CommitTransactionCommand before vacuuming of the next relation, so not\n> sure if there is much value in freeing those variables.\n\nOne small reason to free them is that (as Tom mentioned upthread) it's good to\nensure that those variables are their own allocation, and not depending on\nbeing able to access relcache or anything else during an unexpected error.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 24 Mar 2020 08:48:49 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 7:18 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Mar 24, 2020 at 07:07:03PM +0530, Amit Kapila wrote:\n> > On Tue, Mar 24, 2020 at 6:18 PM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n> > > 1.\n> > > + /* Update error traceback information */\n> > > + olderrcbarg = *vacrelstats;\n> > > + update_vacuum_error_cbarg(vacrelstats,\n> > > + VACUUM_ERRCB_PHASE_TRUNCATE,\n> > > new_rel_pages, NULL,\n> > > + false);\n> > > +\n> > > /*\n> > > * Scan backwards from the end to verify that the end pages actually\n> > > * contain no tuples. This is *necessary*, not optional, because\n> > > * other backends could have added tuples to these pages whilst we\n> > > * were vacuuming.\n> > > */\n> > > new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\n> > >\n> > > We need to set the error context after setting new_rel_pages.\n> >\n> > We want to cover the errors raised in count_nondeletable_pages(). In\n> > an earlier version of the patch, we had TRUNCATE_PREFETCH phase which\n> > use to cover those errors, but that was not good as we were\n> > setting/resetting it multiple times and it was not clear such a\n> > separate phase would add any value.\n>\n> I insisted on covering count_nondeletable_pages since it calls ReadBuffer(),\n> but I think we need to at least set vacrelsats->blkno = new_rel_pages, since it\n> may be different, right ?\n>\n\nyeah, that makes sense.\n\n> > > 2.\n> > > + vacrelstats->relnamespace =\n> > > get_namespace_name(RelationGetNamespace(onerel));\n> > > + vacrelstats->relname = pstrdup(RelationGetRelationName(onerel));\n> > >\n> > > I think we can pfree these two variables to avoid a memory leak during\n> > > vacuum on multiple relations.\n> >\n> > Yeah, I had also thought about it but I noticed that we are not\n> > freeing for vacrelstats. 
Also, I think the memory is allocated in\n> > TopTransactionContext which should be freed via\n> > CommitTransactionCommand before vacuuming of the next relation, so not\n> > sure if there is much value in freeing those variables.\n>\n> One small reason to free them is that (as Tom mentioned upthread) it's good to\n> ensure that those variables are their own allocation, and not depending on\n> being able to access relcache or anything else during an unexpected error.\n>\n\nThat is a good reason to allocate them separately but not for doing\nretail free especially when the caller of the function will free the\ncontext in which that is allocated. I think Sawada-San's concern was\nthat it will leak memory across the vacuum of multiple relations but\nthat is not the case here. Won't it look odd if we are freeing memory\nfor members of vacrelstats but not for vacrelstats itself?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Mar 2020 19:30:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
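Amit's point above — that these allocations live in TopTransactionContext, which is released wholesale by CommitTransactionCommand before the next relation is vacuumed — can be illustrated with a toy arena. This is a standalone sketch only, not PostgreSQL's actual MemoryContext/palloc machinery; it just shows why retail pfree() of short-lived members buys little when the whole context is freed at once:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for a memory context: every allocation is tracked on a
 * list, and destroying the context frees them all together. */
typedef struct Chunk
{
    struct Chunk *next;
    /* payload follows the header */
} Chunk;

typedef struct
{
    Chunk      *head;
} ToyContext;

static void *
ctx_alloc(ToyContext *ctx, size_t size)
{
    Chunk      *c = malloc(sizeof(Chunk) + size);

    c->next = ctx->head;
    ctx->head = c;
    return c + 1;               /* memory just past the header */
}

/* Rough analogue of pstrdup() into the context. */
static char *
ctx_strdup(ToyContext *ctx, const char *s)
{
    char       *copy = ctx_alloc(ctx, strlen(s) + 1);

    strcpy(copy, s);
    return copy;
}

/* Rough analogue of resetting/deleting the context at transaction end:
 * every allocation goes away, freed or not. */
static void
ctx_destroy(ToyContext *ctx)
{
    while (ctx->head)
    {
        Chunk      *next = ctx->head->next;

        free(ctx->head);
        ctx->head = next;
    }
}
```

The sketch also shows why copying the names into the context is still worthwhile, per Justin's note: the copies are self-contained allocations, usable from an error callback without touching the relcache.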
{
"msg_contents": "On Tue, 24 Mar 2020 at 22:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 24, 2020 at 6:18 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> >\n> > I've read through the latest version patch (v31), here are my comments:\n> >\n> > 1.\n> > + /* Update error traceback information */\n> > + olderrcbarg = *vacrelstats;\n> > + update_vacuum_error_cbarg(vacrelstats,\n> > + VACUUM_ERRCB_PHASE_TRUNCATE,\n> > new_rel_pages, NULL,\n> > + false);\n> > +\n> > /*\n> > * Scan backwards from the end to verify that the end pages actually\n> > * contain no tuples. This is *necessary*, not optional, because\n> > * other backends could have added tuples to these pages whilst we\n> > * were vacuuming.\n> > */\n> > new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\n> >\n> > We need to set the error context after setting new_rel_pages.\n> >\n>\n> We want to cover the errors raised in count_nondeletable_pages(). In\n> an earlier version of the patch, we had TRUNCATE_PREFETCH phase which\n> use to cover those errors, but that was not good as we were\n> setting/resetting it multiple times and it was not clear such a\n> separate phase would add any value.\n\nI got the point. But if we set the error context before that, I think\nwe need to change the error context message. 
The error context message\nof heap truncation phase is \"while truncating relation \\\"%s.%s\\\" to %u\nblocks\", but cbarg->blkno will be the number of blocks of the current\nrelation.\n\n case VACUUM_ERRCB_PHASE_TRUNCATE:\n if (BlockNumberIsValid(cbarg->blkno))\n errcontext(\"while truncating relation \\\"%s.%s\\\" to %u blocks\",\n cbarg->relnamespace, cbarg->relname, cbarg->blkno);\n break;\n\n>\n> > 2.\n> > + vacrelstats->relnamespace =\n> > get_namespace_name(RelationGetNamespace(onerel));\n> > + vacrelstats->relname = pstrdup(RelationGetRelationName(onerel));\n> >\n> > I think we can pfree these two variables to avoid a memory leak during\n> > vacuum on multiple relations.\n> >\n>\n> Yeah, I had also thought about it but I noticed that we are not\n> freeing for vacrelstats. Also, I think the memory is allocated in\n> TopTransactionContext which should be freed via\n> CommitTransactionCommand before vacuuming of the next relation, so not\n> sure if there is much value in freeing those variables.\n\nRight, thank you for explanation. I was concerned memory leak because\nrelation name and schema name could be large compared to vacrelstats\nbut I agree to free them at commit time.\n\n>\n> > 3.\n> > +/* Phases of vacuum during which we report error context. */\n> > +typedef enum\n> > +{\n> > + VACUUM_ERRCB_PHASE_UNKNOWN,\n> > + VACUUM_ERRCB_PHASE_SCAN_HEAP,\n> > + VACUUM_ERRCB_PHASE_VACUUM_INDEX,\n> > + VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n> > + VACUUM_ERRCB_PHASE_INDEX_CLEANUP,\n> > + VACUUM_ERRCB_PHASE_TRUNCATE\n> > +} ErrCbPhase;\n> >\n> > Comparing to the vacuum progress phases, there is not a phase for\n> > final cleanup which includes updating the relation stats. Is there any\n> > reason why we don't have that phase for ErrCbPhase?\n> >\n>\n> I think we can add it if we want, but it was not clear to me if that is helpful.\n\nYeah, me too. 
The most likely place where an error happens is\nvac_update_relstats but the error message seems to be enough.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 24 Mar 2020 23:20:25 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 09:47:30PM +0900, Masahiko Sawada wrote:\n> We need to set the error context after setting new_rel_pages.\n\nDone\n\n> The type name ErrCbPhase seems to be very generic name, how about\n> VacErrCbPhase or VacuumErrCbPhase?\n\nDone.\n\nThanks for your review comments.\n\n-- \nJustin",
"msg_date": "Tue, 24 Mar 2020 22:19:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 7:51 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 24 Mar 2020 at 22:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Mar 24, 2020 at 6:18 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > >\n> > > I've read through the latest version patch (v31), here are my comments:\n> > >\n> > > 1.\n> > > + /* Update error traceback information */\n> > > + olderrcbarg = *vacrelstats;\n> > > + update_vacuum_error_cbarg(vacrelstats,\n> > > + VACUUM_ERRCB_PHASE_TRUNCATE,\n> > > new_rel_pages, NULL,\n> > > + false);\n> > > +\n> > > /*\n> > > * Scan backwards from the end to verify that the end pages actually\n> > > * contain no tuples. This is *necessary*, not optional, because\n> > > * other backends could have added tuples to these pages whilst we\n> > > * were vacuuming.\n> > > */\n> > > new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\n> > >\n> > > We need to set the error context after setting new_rel_pages.\n> > >\n> >\n> > We want to cover the errors raised in count_nondeletable_pages(). In\n> > an earlier version of the patch, we had TRUNCATE_PREFETCH phase which\n> > use to cover those errors, but that was not good as we were\n> > setting/resetting it multiple times and it was not clear such a\n> > separate phase would add any value.\n>\n> I got the point. But if we set the error context before that, I think\n> we need to change the error context message. 
The error context message\n> of heap truncation phase is \"while truncating relation \\\"%s.%s\\\" to %u\n> blocks\", but cbarg->blkno will be the number of blocks of the current\n> relation.\n>\n> case VACUUM_ERRCB_PHASE_TRUNCATE:\n> if (BlockNumberIsValid(cbarg->blkno))\n> errcontext(\"while truncating relation \\\"%s.%s\\\" to %u blocks\",\n> cbarg->relnamespace, cbarg->relname, cbarg->blkno);\n> break;\n>\n\nDo you mean to say that actually we are just prefetching or reading\nthe pages in count_nondeletable_pages() but the message doesn't have\nany such indication? If not that, what problem do you see with the\nmessage? What is your suggestion?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Mar 2020 09:14:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 8:49 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Mar 24, 2020 at 09:47:30PM +0900, Masahiko Sawada wrote:\n> > We need to set the error context after setting new_rel_pages.\n>\n> Done\n>\n> > The type name ErrCbPhase seems to be very generic name, how about\n> > VacErrCbPhase or VacuumErrCbPhase?\n>\n> Done.\n>\n> Thanks for your review comments.\n>\n\n@@ -870,6 +904,12 @@ lazy_scan_heap(Relation onerel, VacuumParams\n*params, LVRelStats *vacrelstats,\n else\n skipping_blocks = false;\n\n+ /* Setup error traceback support for ereport() */\n+ errcallback.callback = vacuum_error_callback;\n+ errcallback.arg = vacrelstats;\n+ errcallback.previous = error_context_stack;\n+ error_context_stack = &errcallback;\n\nI think by mistake you have re-introduced this chunk of code. We\ndon't need this as we have done it in the caller.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Mar 2020 09:16:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
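The chunk Amit flagged would have pushed the same callback a second time. The discipline at issue — push the ErrorContextCallback once in the caller, restore the previous stack head on exit, and have the error machinery walk the whole stack — can be sketched standalone. The real ErrorContextCallback and error_context_stack live in PostgreSQL's elog.h/elog.c; the simplified names and the report_error() helper below are illustrative only:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified version of PostgreSQL's error-context-stack discipline. */
typedef struct ErrorContextCallback
{
    struct ErrorContextCallback *previous;
    void        (*callback) (void *arg);
    void       *arg;
} ErrorContextCallback;

static ErrorContextCallback *error_context_stack = NULL;

static int  lines_emitted = 0;

static void
vacuum_error_callback(void *arg)
{
    printf("CONTEXT: %s\n", (const char *) arg);
    lines_emitted++;
}

/* On error, every active frame contributes one context line; this is
 * how autovacuum's "automatic vacuum of table ..." line stacks on top
 * of the per-block line from the vacuum callback. */
static void
report_error(const char *msg)
{
    ErrorContextCallback *econtext;

    printf("ERROR: %s\n", msg);
    for (econtext = error_context_stack; econtext; econtext = econtext->previous)
        econtext->callback(econtext->arg);
}

static void
lazy_scan_heap_sketch(void)
{
    ErrorContextCallback errcallback;

    /* Push once, in one place only (the caller in the actual patch). */
    errcallback.callback = vacuum_error_callback;
    errcallback.arg = (void *) "while scanning block 7 of relation \"public.t\"";
    errcallback.previous = error_context_stack;
    error_context_stack = &errcallback;

    report_error("simulated failure");

    /* Pop before returning. */
    error_context_stack = errcallback.previous;
}
```

Registering the frame twice would make the same context line print twice for one error, which is why the duplicated setup had to go.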
{
"msg_contents": "On Wed, 25 Mar 2020 at 12:44, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 24, 2020 at 7:51 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 24 Mar 2020 at 22:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Mar 24, 2020 at 6:18 PM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > >\n> > > > I've read through the latest version patch (v31), here are my comments:\n> > > >\n> > > > 1.\n> > > > + /* Update error traceback information */\n> > > > + olderrcbarg = *vacrelstats;\n> > > > + update_vacuum_error_cbarg(vacrelstats,\n> > > > + VACUUM_ERRCB_PHASE_TRUNCATE,\n> > > > new_rel_pages, NULL,\n> > > > + false);\n> > > > +\n> > > > /*\n> > > > * Scan backwards from the end to verify that the end pages actually\n> > > > * contain no tuples. This is *necessary*, not optional, because\n> > > > * other backends could have added tuples to these pages whilst we\n> > > > * were vacuuming.\n> > > > */\n> > > > new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\n> > > >\n> > > > We need to set the error context after setting new_rel_pages.\n> > > >\n> > >\n> > > We want to cover the errors raised in count_nondeletable_pages(). In\n> > > an earlier version of the patch, we had TRUNCATE_PREFETCH phase which\n> > > use to cover those errors, but that was not good as we were\n> > > setting/resetting it multiple times and it was not clear such a\n> > > separate phase would add any value.\n> >\n> > I got the point. But if we set the error context before that, I think\n> > we need to change the error context message. 
The error context message\n> > of heap truncation phase is \"while truncating relation \\\"%s.%s\\\" to %u\n> > blocks\", but cbarg->blkno will be the number of blocks of the current\n> > relation.\n> >\n> > case VACUUM_ERRCB_PHASE_TRUNCATE:\n> > if (BlockNumberIsValid(cbarg->blkno))\n> > errcontext(\"while truncating relation \\\"%s.%s\\\" to %u blocks\",\n> > cbarg->relnamespace, cbarg->relname, cbarg->blkno);\n> > break;\n> >\n>\n> Do you mean to say that actually we are just prefetching or reading\n> the pages in count_nondeletable_pages() but the message doesn't have\n> any such indication? If not that, what problem do you see with the\n> message? What is your suggestion?\n\nI meant that with the patch, suppose that the table has 100 blocks and\nwe're truncating it to 50 blocks in RelationTruncate(), the error\ncontext message will be \"while truncating relation \"aaa.bbb\" to 100\nblocks\", which is not correct. I think it should be \"while truncating\nrelation \"aaa.bbb\" to 50 blocks\". We can know the relation can be\ntruncated to 50 blocks by the result of count_nondeletable_pages(). So\nif we update the arguments before it we will use the number of blocks\nof relation before truncation.\n\nMy suggestion is either that we change the error message to, for\nexample, \"while truncating relation \"aaa.bbb\" having 100 blocks\", or\nthat we change the patch so that we can use \"50 blocks\" in the error\ncontext message.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Mar 2020 13:34:43 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 01:34:43PM +0900, Masahiko Sawada wrote:\n> I meant that with the patch, suppose that the table has 100 blocks and\n> we're truncating it to 50 blocks in RelationTruncate(), the error\n> context message will be \"while truncating relation \"aaa.bbb\" to 100\n> blocks\", which is not correct.\n\n> I think it should be \"while truncating\n> relation \"aaa.bbb\" to 50 blocks\". We can know the relation can be\n> truncated to 50 blocks by the result of count_nondeletable_pages(). So\n> if we update the arguments before it we will use the number of blocks\n> of relation before truncation.\n\nHm, yea, at that point it's:\n|new_rel_pages = RelationGetNumberOfBlocks(onerel);\n..so we can do better.\n\n> My suggestion is either that we change the error message to, for\n> example, \"while truncating relation \"aaa.bbb\" having 100 blocks\", or\n> that we change the patch so that we can use \"50 blocks\" in the error\n> context message.\n\nWe could do:\n\n update_vacuum_error_cbarg(vacrelstats,\n\t\t\t\t VACUUM_ERRCB_PHASE_TRUNCATE,\n\t\t\t\t InvalidBlockNumber, NULL, false);\n\n new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\n vacrelstats->blkno = new_rel_pages;\n\n...\n\n case VACUUM_ERRCB_PHASE_TRUNCATE:\n if (BlockNumberIsValid(cbarg->blkno))\n errcontext(\"while truncating relation \\\"%s.%s\\\" to %u blocks\",\n cbarg->relnamespace, cbarg->relname, cbarg->blkno);\n else\n /* Error happened before/during count_nondeletable_pages() */\n errcontext(\"while truncating relation \\\"%s.%s\\\"\",\n cbarg->relnamespace, cbarg->relname);\n break;\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 24 Mar 2020 23:46:51 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
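Justin's two-branch suggestion — pass InvalidBlockNumber until count_nondeletable_pages() has produced a result, then branch on it in the callback — can be sketched as a standalone formatter. The types, the SCAN_HEAP wording, and format_context() itself are simplified stand-ins for the patch's actual errcontext() switch:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define InvalidBlockNumber ((unsigned) 0xFFFFFFFF)

typedef enum
{
    VACUUM_ERRCB_PHASE_UNKNOWN,
    VACUUM_ERRCB_PHASE_SCAN_HEAP,
    VACUUM_ERRCB_PHASE_TRUNCATE
} VacErrCbPhase;

typedef struct
{
    VacErrCbPhase phase;
    const char *relnamespace;
    const char *relname;
    unsigned    blkno;
} VacErrCbArg;

/* Mimics the errcontext() switch: one context line per phase, with the
 * truncate phase mentioning a block count only once it is known. */
static void
format_context(const VacErrCbArg *cbarg, char *buf, size_t len)
{
    switch (cbarg->phase)
    {
        case VACUUM_ERRCB_PHASE_SCAN_HEAP:
            snprintf(buf, len, "while scanning block %u of relation \"%s.%s\"",
                     cbarg->blkno, cbarg->relnamespace, cbarg->relname);
            break;
        case VACUUM_ERRCB_PHASE_TRUNCATE:
            if (cbarg->blkno != InvalidBlockNumber)
                snprintf(buf, len, "while truncating relation \"%s.%s\" to %u blocks",
                         cbarg->relnamespace, cbarg->relname, cbarg->blkno);
            else
                /* Error happened before/during count_nondeletable_pages() */
                snprintf(buf, len, "while truncating relation \"%s.%s\"",
                         cbarg->relnamespace, cbarg->relname);
            break;
        default:
            snprintf(buf, len, "while vacuuming relation \"%s.%s\"",
                     cbarg->relnamespace, cbarg->relname);
            break;
    }
}
```

With the sentinel, an error inside count_nondeletable_pages() can no longer claim the relation is being truncated "to 100 blocks" when 100 is merely its current size.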
{
"msg_contents": "On Wed, Mar 25, 2020 at 10:05 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 25 Mar 2020 at 12:44, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Mar 24, 2020 at 7:51 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > >\n> > > I got the point. But if we set the error context before that, I think\n> > > we need to change the error context message. The error context message\n> > > of heap truncation phase is \"while truncating relation \\\"%s.%s\\\" to %u\n> > > blocks\", but cbarg->blkno will be the number of blocks of the current\n> > > relation.\n> > >\n> > > case VACUUM_ERRCB_PHASE_TRUNCATE:\n> > > if (BlockNumberIsValid(cbarg->blkno))\n> > > errcontext(\"while truncating relation \\\"%s.%s\\\" to %u blocks\",\n> > > cbarg->relnamespace, cbarg->relname, cbarg->blkno);\n> > > break;\n> > >\n> >\n> > Do you mean to say that actually we are just prefetching or reading\n> > the pages in count_nondeletable_pages() but the message doesn't have\n> > any such indication? If not that, what problem do you see with the\n> > message? What is your suggestion?\n>\n> I meant that with the patch, suppose that the table has 100 blocks and\n> we're truncating it to 50 blocks in RelationTruncate(), the error\n> context message will be \"while truncating relation \"aaa.bbb\" to 100\n> blocks\", which is not correct. I think it should be \"while truncating\n> relation \"aaa.bbb\" to 50 blocks\". We can know the relation can be\n> truncated to 50 blocks by the result of count_nondeletable_pages(). So\n> if we update the arguments before it we will use the number of blocks\n> of relation before truncation.\n>\n\nWon't the latest patch by Justin will fix this as he has updated the\nblock count after count_nondeletable_pages? 
Apart from that, I feel\nthe first call to update_vacuum_error_cbarg in lazy_truncate_heap\nshould have input parameter as vacrelstats->nonempty_pages instead of\nnew_rel_pages to indicate the remaining pages after truncation?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Mar 2020 10:22:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 10:22:21AM +0530, Amit Kapila wrote:\n> On Wed, Mar 25, 2020 at 10:05 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Wed, 25 Mar 2020 at 12:44, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Mar 24, 2020 at 7:51 PM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > >\n> > > > I got the point. But if we set the error context before that, I think\n> > > > we need to change the error context message. The error context message\n> > > > of heap truncation phase is \"while truncating relation \\\"%s.%s\\\" to %u\n> > > > blocks\", but cbarg->blkno will be the number of blocks of the current\n> > > > relation.\n> > > >\n> > > > case VACUUM_ERRCB_PHASE_TRUNCATE:\n> > > > if (BlockNumberIsValid(cbarg->blkno))\n> > > > errcontext(\"while truncating relation \\\"%s.%s\\\" to %u blocks\",\n> > > > cbarg->relnamespace, cbarg->relname, cbarg->blkno);\n> > > > break;\n> > > >\n> > >\n> > > Do you mean to say that actually we are just prefetching or reading\n> > > the pages in count_nondeletable_pages() but the message doesn't have\n> > > any such indication? If not that, what problem do you see with the\n> > > message? What is your suggestion?\n> >\n> > I meant that with the patch, suppose that the table has 100 blocks and\n> > we're truncating it to 50 blocks in RelationTruncate(), the error\n> > context message will be \"while truncating relation \"aaa.bbb\" to 100\n> > blocks\", which is not correct. I think it should be \"while truncating\n> > relation \"aaa.bbb\" to 50 blocks\". We can know the relation can be\n> > truncated to 50 blocks by the result of count_nondeletable_pages(). So\n> > if we update the arguments before it we will use the number of blocks\n> > of relation before truncation.\n> >\n> \n> Won't the latest patch by Justin will fix this as he has updated the\n> block count after count_nondeletable_pages? 
Apart from that, I feel\n\nThe issue is if the error happens *during* count_nondeletable_pages().\nWe don't want it to say \"truncating relation to 100 blocks\".\n\n> the first call to update_vacuum_error_cbarg in lazy_truncate_heap\n> should have input parameter as vacrelstats->nonempty_pages instead of\n> new_rel_pages to indicate the remaining pages after truncation?\n\nYea, I think that addresses the issue.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 25 Mar 2020 00:08:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Wed, 25 Mar 2020 at 14:08, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Mar 25, 2020 at 10:22:21AM +0530, Amit Kapila wrote:\n> > On Wed, Mar 25, 2020 at 10:05 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Wed, 25 Mar 2020 at 12:44, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Mar 24, 2020 at 7:51 PM Masahiko Sawada\n> > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > >\n> > > > >\n> > > > > I got the point. But if we set the error context before that, I think\n> > > > > we need to change the error context message. The error context message\n> > > > > of heap truncation phase is \"while truncating relation \\\"%s.%s\\\" to %u\n> > > > > blocks\", but cbarg->blkno will be the number of blocks of the current\n> > > > > relation.\n> > > > >\n> > > > > case VACUUM_ERRCB_PHASE_TRUNCATE:\n> > > > > if (BlockNumberIsValid(cbarg->blkno))\n> > > > > errcontext(\"while truncating relation \\\"%s.%s\\\" to %u blocks\",\n> > > > > cbarg->relnamespace, cbarg->relname, cbarg->blkno);\n> > > > > break;\n> > > > >\n> > > >\n> > > > Do you mean to say that actually we are just prefetching or reading\n> > > > the pages in count_nondeletable_pages() but the message doesn't have\n> > > > any such indication? If not that, what problem do you see with the\n> > > > message? What is your suggestion?\n> > >\n> > > I meant that with the patch, suppose that the table has 100 blocks and\n> > > we're truncating it to 50 blocks in RelationTruncate(), the error\n> > > context message will be \"while truncating relation \"aaa.bbb\" to 100\n> > > blocks\", which is not correct. I think it should be \"while truncating\n> > > relation \"aaa.bbb\" to 50 blocks\". We can know the relation can be\n> > > truncated to 50 blocks by the result of count_nondeletable_pages(). 
So\n> > > if we update the arguments before it we will use the number of blocks\n> > > of relation before truncation.\n> > >\n> >\n> > Won't the latest patch by Justin will fix this as he has updated the\n> > block count after count_nondeletable_pages? Apart from that, I feel\n>\n> The issue is if the error happens *during* count_nondeletable_pages().\n> We don't want it to say \"truncating relation to 100 blocks\".\n\nRight.\n\n>\n> > the first call to update_vacuum_error_cbarg in lazy_truncate_heap\n> > should have input parameter as vacrelstats->nonempty_pages instead of\n> > new_rel_pages to indicate the remaining pages after truncation?\n>\n> Yea, I think that addresses the issue.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Mar 2020 14:16:35 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 09:16:46AM +0530, Amit Kapila wrote:\n> I think by mistake you have re-introduced this chunk of code. We\n> don't need this as we have done it in the caller.\n\nYes, sorry.\n\nI used too much of git-am and git-rebase to make sure I didn't lose your\nchanges and instead reintroduced them.\n\nOn Wed, Mar 25, 2020 at 02:16:35PM +0900, Masahiko Sawada wrote:\n> > > Won't the latest patch by Justin will fix this as he has updated the\n> > > block count after count_nondeletable_pages? Apart from that, I feel\n> >\n> > The issue is if the error happens *during* count_nondeletable_pages().\n> > We don't want it to say \"truncating relation to 100 blocks\".\n> \n> Right.\n> \n> > > the first call to update_vacuum_error_cbarg in lazy_truncate_heap\n> > > should have input parameter as vacrelstats->nonempty_pages instead of\n> > > new_rel_pages to indicate the remaining pages after truncation?\n> >\n> > Yea, I think that addresses the issue.\n\nAttached patch addressing these.\n\n-- \nJustin",
"msg_date": "Wed, 25 Mar 2020 05:12:29 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 3:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Attached patch addressing these.\n>\n\nThanks, you forgot to remove the below declaration which I have\nremoved in attached.\n\n@@ -724,20 +758,20 @@ lazy_scan_heap(Relation onerel, VacuumParams\n*params, LVRelStats *vacrelstats,\n PROGRESS_VACUUM_MAX_DEAD_TUPLES\n };\n int64 initprog_val[3];\n+ ErrorContextCallback errcallback;\n\nApart from this, I have ran pgindent and now I think it is in good\nshape. Do you have any other comments? Sawada-San, can you also\ncheck the attached patch and let me know if you have any additional\ncomments.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 25 Mar 2020 16:54:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 04:54:41PM +0530, Amit Kapila wrote:\n> On Wed, Mar 25, 2020 at 3:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > Attached patch addressing these.\n> >\n> \n> Thanks, you forgot to remove the below declaration which I have\n> removed in attached.\n\nYes I saw..\n\n> Apart from this, I have ran pgindent and now I think it is in good\n> shape. Do you have any other comments? Sawada-San, can you also\n\nI did just notice/remember while testing trucate that autovacuum does this:\n\nsrc/backend/postmaster/autovacuum.c: errcontext(\"automatic vacuum of table \\\"%s.%s.%s\\\"\",\n\nAnd that appears to be interacting correctly. For example if you add an\nelog(ERROR) and run UPDATE/DELETE, and wait autovacuum_naptime, then it shows\nboth contexts.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 25 Mar 2020 06:39:17 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Wed, 25 Mar 2020 at 20:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 25, 2020 at 3:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > Attached patch addressing these.\n> >\n>\n> Thanks, you forgot to remove the below declaration which I have\n> removed in attached.\n>\n> @@ -724,20 +758,20 @@ lazy_scan_heap(Relation onerel, VacuumParams\n> *params, LVRelStats *vacrelstats,\n> PROGRESS_VACUUM_MAX_DEAD_TUPLES\n> };\n> int64 initprog_val[3];\n> + ErrorContextCallback errcallback;\n>\n> Apart from this, I have ran pgindent and now I think it is in good\n> shape. Do you have any other comments? Sawada-San, can you also\n> check the attached patch and let me know if you have any additional\n> comments.\n>\n\nThank you for updating the patch! I have a question about the following code:\n\n+ /* Update error traceback information */\n+ olderrcbarg = *vacrelstats;\n+ update_vacuum_error_cbarg(vacrelstats, VACUUM_ERRCB_PHASE_TRUNCATE,\n+ vacrelstats->nonempty_pages, NULL, false);\n+\n /*\n * Scan backwards from the end to verify that the end pages actually\n * contain no tuples. This is *necessary*, not optional, because\n * other backends could have added tuples to these pages whilst we\n * were vacuuming.\n */\n new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\n+ vacrelstats->blkno = new_rel_pages;\n\n if (new_rel_pages >= old_rel_pages)\n {\n /* can't do anything after all */\n UnlockRelation(onerel, AccessExclusiveLock);\n return;\n }\n\n /*\n * Okay to truncate.\n */\n RelationTruncate(onerel, new_rel_pages);\n\n+ /* Revert back to the old phase information for error traceback */\n+ update_vacuum_error_cbarg(vacrelstats,\n+ olderrcbarg.phase,\n+ olderrcbarg.blkno,\n+ olderrcbarg.indname,\n+ true);\n\nvacrelstats->nonempty_pages is the last non-empty block while\nnew_rel_pages, the result of count_nondeletable_pages(), is the number\nof blocks that we can truncate to in this attempt. 
Therefore\nvacrelstats->nonempty_pages <= new_rel_pages. This means that we set a\nlower block number to arguments and then set a higher block number\nafter count_nondeletable_pages, and then revert it back to\nVACUUM_ERRCB_PHASE_SCAN_HEAP phase and the number of blocks of\nrelation before truncation, after RelationTruncate(). It can be\nrepeated until no more truncating can be done. Why do we need to\nrevert back to the scan heap phase? If we can use\nvacrelstats->nonempty_pages in the error context message as the\nremaining blocks after truncation I think we can update callback\narguments once at the beginning of lazy_truncate_heap() and don't\nrevert to the previous phase, and pop the error context after exiting.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Mar 2020 21:27:44 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
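The save/overwrite/restore dance Sawada is asking about amounts to a struct copy around the phase change. A minimal standalone sketch of the pattern (simplified types; the field and function names follow the patch but the bodies are illustrative):

```c
#include <assert.h>
#include <stddef.h>

typedef enum
{
    PHASE_UNKNOWN,
    PHASE_SCAN_HEAP,
    PHASE_TRUNCATE
} Phase;

typedef struct
{
    Phase       phase;
    unsigned    blkno;
    const char *indname;
} CbArg;

/* Counterpart of update_vacuum_error_cbarg(): overwrite the fields the
 * error callback reads. */
static void
update_cbarg(CbArg *arg, Phase phase, unsigned blkno, const char *indname)
{
    arg->phase = phase;
    arg->blkno = blkno;
    arg->indname = indname;
}

static void
truncate_sketch(CbArg *arg, unsigned nonempty_pages)
{
    CbArg       oldarg = *arg;  /* struct copy saves all fields at once */

    update_cbarg(arg, PHASE_TRUNCATE, nonempty_pages, NULL);

    /* ... count_nondeletable_pages() and RelationTruncate() go here ... */

    /* Revert to the caller's phase information on the way out. */
    update_cbarg(arg, oldarg.phase, oldarg.blkno, oldarg.indname);
}
```

Sawada's alternative — set the truncate phase once on entry to lazy_truncate_heap() and skip the restore — would drop the second update_cbarg() call; the restore only matters for phases that can be re-entered with different surrounding state.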
{
"msg_contents": "On Wed, Mar 25, 2020 at 09:27:44PM +0900, Masahiko Sawada wrote:\n> On Wed, 25 Mar 2020 at 20:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Mar 25, 2020 at 3:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > Attached patch addressing these.\n> > >\n> >\n> > Thanks, you forgot to remove the below declaration which I have\n> > removed in attached.\n> >\n> > @@ -724,20 +758,20 @@ lazy_scan_heap(Relation onerel, VacuumParams\n> > *params, LVRelStats *vacrelstats,\n> > PROGRESS_VACUUM_MAX_DEAD_TUPLES\n> > };\n> > int64 initprog_val[3];\n> > + ErrorContextCallback errcallback;\n> >\n> > Apart from this, I have ran pgindent and now I think it is in good\n> > shape. Do you have any other comments? Sawada-San, can you also\n> > check the attached patch and let me know if you have any additional\n> > comments.\n> >\n> \n> Thank you for updating the patch! I have a question about the following code:\n> \n> + /* Update error traceback information */\n> + olderrcbarg = *vacrelstats;\n> + update_vacuum_error_cbarg(vacrelstats, VACUUM_ERRCB_PHASE_TRUNCATE,\n> + vacrelstats->nonempty_pages, NULL, false);\n> +\n> /*\n> * Scan backwards from the end to verify that the end pages actually\n> * contain no tuples. 
This is *necessary*, not optional, because\n> * other backends could have added tuples to these pages whilst we\n> * were vacuuming.\n> */\n> new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\n> + vacrelstats->blkno = new_rel_pages;\n> \n> if (new_rel_pages >= old_rel_pages)\n> {\n> /* can't do anything after all */\n> UnlockRelation(onerel, AccessExclusiveLock);\n> return;\n> }\n> \n> /*\n> * Okay to truncate.\n> */\n> RelationTruncate(onerel, new_rel_pages);\n> \n> + /* Revert back to the old phase information for error traceback */\n> + update_vacuum_error_cbarg(vacrelstats,\n> + olderrcbarg.phase,\n> + olderrcbarg.blkno,\n> + olderrcbarg.indname,\n> + true);\n> \n> vacrelstats->nonempty_pages is the last non-empty block while\n> new_rel_pages, the result of count_nondeletable_pages(), is the number\n> of blocks that we can truncate to in this attempt. Therefore\n> vacrelstats->nonempty_pages <= new_rel_pages. This means that we set a\n> lower block number to arguments and then set a higher block number\n> after count_nondeletable_pages, and then revert it back to\n> VACUUM_ERRCB_PHASE_SCAN_HEAP phase and the number of blocks of\n> relation before truncation, after RelationTruncate(). It can be\n> repeated until no more truncating can be done. Why do we need to\n> revert back to the scan heap phase? If we can use\n> vacrelstats->nonempty_pages in the error context message as the\n> remaining blocks after truncation I think we can update callback\n> arguments once at the beginning of lazy_truncate_heap() and don't\n> revert to the previous phase, and pop the error context after exiting.\n\nPerhaps. 
We need to \"revert back\" for the vacuum phases, which can be called\nmultiple times, but we don't need to do that here.\n\nIn the future, if we decided to add something for final cleanup phase (say),\nit's fine (and maybe better) to exit truncate_heap() without resetting the\nargument, and we'd immediately set it to CLEANUP.\n\nI think the same thing applies to lazy_cleanup_index, too. It can be called\nfrom a parallel worker, but we never \"go back\" to a heap scan.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 25 Mar 2020 07:41:55 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 6:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Mar 25, 2020 at 09:27:44PM +0900, Masahiko Sawada wrote:\n> > On Wed, 25 Mar 2020 at 20:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Mar 25, 2020 at 3:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > > Attached patch addressing these.\n> > > >\n> > >\n> > > Thanks, you forgot to remove the below declaration which I have\n> > > removed in attached.\n> > >\n> > > @@ -724,20 +758,20 @@ lazy_scan_heap(Relation onerel, VacuumParams\n> > > *params, LVRelStats *vacrelstats,\n> > > PROGRESS_VACUUM_MAX_DEAD_TUPLES\n> > > };\n> > > int64 initprog_val[3];\n> > > + ErrorContextCallback errcallback;\n> > >\n> > > Apart from this, I have ran pgindent and now I think it is in good\n> > > shape. Do you have any other comments? Sawada-San, can you also\n> > > check the attached patch and let me know if you have any additional\n> > > comments.\n> > >\n> >\n> > Thank you for updating the patch! I have a question about the following code:\n> >\n> > + /* Update error traceback information */\n> > + olderrcbarg = *vacrelstats;\n> > + update_vacuum_error_cbarg(vacrelstats, VACUUM_ERRCB_PHASE_TRUNCATE,\n> > + vacrelstats->nonempty_pages, NULL, false);\n> > +\n> > /*\n> > * Scan backwards from the end to verify that the end pages actually\n> > * contain no tuples. 
This is *necessary*, not optional, because\n> > * other backends could have added tuples to these pages whilst we\n> > * were vacuuming.\n> > */\n> > new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\n> > + vacrelstats->blkno = new_rel_pages;\n> >\n> > if (new_rel_pages >= old_rel_pages)\n> > {\n> > /* can't do anything after all */\n> > UnlockRelation(onerel, AccessExclusiveLock);\n> > return;\n> > }\n> >\n> > /*\n> > * Okay to truncate.\n> > */\n> > RelationTruncate(onerel, new_rel_pages);\n> >\n> > + /* Revert back to the old phase information for error traceback */\n> > + update_vacuum_error_cbarg(vacrelstats,\n> > + olderrcbarg.phase,\n> > + olderrcbarg.blkno,\n> > + olderrcbarg.indname,\n> > + true);\n> >\n> > vacrelstats->nonempty_pages is the last non-empty block while\n> > new_rel_pages, the result of count_nondeletable_pages(), is the number\n> > of blocks that we can truncate to in this attempt. Therefore\n> > vacrelstats->nonempty_pages <= new_rel_pages. This means that we set a\n> > lower block number to arguments and then set a higher block number\n> > after count_nondeletable_pages, and then revert it back to\n> > VACUUM_ERRCB_PHASE_SCAN_HEAP phase and the number of blocks of\n> > relation before truncation, after RelationTruncate(). It can be\n> > repeated until no more truncating can be done. Why do we need to\n> > revert back to the scan heap phase? If we can use\n> > vacrelstats->nonempty_pages in the error context message as the\n> > remaining blocks after truncation I think we can update callback\n> > arguments once at the beginning of lazy_truncate_heap() and don't\n> > revert to the previous phase, and pop the error context after exiting.\n>\n> Perhaps. 
We need to \"revert back\" for the vacuum phases, which can be called\n> multiple times, but we don't need to do that here.\n>\n\nYeah, but I think it would be better if are consistent because we have\nno control what the caller of the function intends to do after\nfinishing the current phase. I think we can add some comments where\nwe set up the context (in heap_vacuum_rel) like below so that the idea\nis more clear.\n\n\"The idea is to set up an error context callback to display additional\ninformation with any error during vacuum. During different phases of\nvacuum (heap scan, heap vacuum, index vacuum, index clean up, heap\ntruncate), we update the error context callback to display appropriate\ninformation.\n\nNote that different phases of vacuum overlap with each other, so once\na particular phase is over, we need to revert back to the old phase to\nkeep the phase information up-to-date.\"\n\nWhat do you think?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Mar 2020 09:50:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 09:50:53AM +0530, Amit Kapila wrote:\n> > > after count_nondeletable_pages, and then revert it back to\n> > > VACUUM_ERRCB_PHASE_SCAN_HEAP phase and the number of blocks of\n> > > relation before truncation, after RelationTruncate(). It can be\n> > > repeated until no more truncating can be done. Why do we need to\n> > > revert back to the scan heap phase? If we can use\n> > > vacrelstats->nonempty_pages in the error context message as the\n> > > remaining blocks after truncation I think we can update callback\n> > > arguments once at the beginning of lazy_truncate_heap() and don't\n> > > revert to the previous phase, and pop the error context after exiting.\n> >\n> > Perhaps. We need to \"revert back\" for the vacuum phases, which can be called\n> > multiple times, but we don't need to do that here.\n> \n> Yeah, but I think it would be better if are consistent because we have\n> no control what the caller of the function intends to do after\n> finishing the current phase. I think we can add some comments where\n> we set up the context (in heap_vacuum_rel) like below so that the idea\n> is more clear.\n> \n> \"The idea is to set up an error context callback to display additional\n> information with any error during vacuum. During different phases of\n> vacuum (heap scan, heap vacuum, index vacuum, index clean up, heap\n> truncate), we update the error context callback to display appropriate\n> information.\n> \n> Note that different phases of vacuum overlap with each other, so once\n> a particular phase is over, we need to revert back to the old phase to\n> keep the phase information up-to-date.\"\n\nSeems fine. Rather than saying \"different phases\" I, would say:\n\"The index vacuum and heap vacuum phases may be called multiple times in the\nmiddle of the heap scan phase.\"\n\nBut actually I think the concern is not that we unnecessarily \"Revert back to\nthe old phase\" but that we do it in a *loop*. 
Which I agree doesn't make\nsense, to go back and forth between \"scanning heap\" and \"truncating\". So I\nthink we should either remove the \"revert back\", or otherwise put it\nafter/outside the \"while\" loop, and change the \"return\" paths to use \"break\".\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 25 Mar 2020 23:41:15 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 10:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Seems fine. Rather than saying \"different phases\" I, would say:\n> \"The index vacuum and heap vacuum phases may be called multiple times in the\n> middle of the heap scan phase.\"\n>\n\nOkay, I have slightly adjusted the wording as per your suggestion.\n\n> But actually I think the concern is not that we unnecessarily \"Revert back to\n> the old phase\" but that we do it in a *loop*. Which I agree doesn't make\n> sense, to go back and forth between \"scanning heap\" and \"truncating\".\n>\n\nFair point. I have moved the change to the truncate phase at the\ncaller of lazy_heap_truncate() which should address this concern.\nSawada-San, does this address your concern?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Mar 2020 12:03:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 12:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 26, 2020 at 10:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > Seems fine. Rather than saying \"different phases\" I, would say:\n> > \"The index vacuum and heap vacuum phases may be called multiple times in the\n> > middle of the heap scan phase.\"\n> >\n>\n> Okay, I have slightly adjusted the wording as per your suggestion.\n>\n> > But actually I think the concern is not that we unnecessarily \"Revert back to\n> > the old phase\" but that we do it in a *loop*. Which I agree doesn't make\n> > sense, to go back and forth between \"scanning heap\" and \"truncating\".\n> >\n>\n> Fair point. I have moved the change to the truncate phase at the\n> caller of lazy_heap_truncate() which should address this concern.\n> Sawada-San, does this address your concern?\n>\n\nForgot to attach the patch, doing now.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 26 Mar 2020 12:04:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, 26 Mar 2020 at 15:34, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 26, 2020 at 12:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Mar 26, 2020 at 10:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > Seems fine. Rather than saying \"different phases\" I, would say:\n> > > \"The index vacuum and heap vacuum phases may be called multiple times in the\n> > > middle of the heap scan phase.\"\n> > >\n> >\n> > Okay, I have slightly adjusted the wording as per your suggestion.\n> >\n> > > But actually I think the concern is not that we unnecessarily \"Revert back to\n> > > the old phase\" but that we do it in a *loop*. Which I agree doesn't make\n> > > sense, to go back and forth between \"scanning heap\" and \"truncating\".\n> > >\n> >\n> > Fair point. I have moved the change to the truncate phase at the\n> > caller of lazy_heap_truncate() which should address this concern.\n> > Sawada-San, does this address your concern?\n> >\n>\n> Forgot to attach the patch, doing now.\n\nThank you for updating the patch! The changes around\nlazy_truncate_heap() looks good to me.\n\nI have two comments;\n\n1.\n@@ -1844,9 +1914,15 @@ lazy_vacuum_page(Relation onerel, BlockNumber\nblkno, Buffer buffer,\n int uncnt = 0;\n TransactionId visibility_cutoff_xid;\n bool all_frozen;\n+ LVRelStats olderrcbarg;\n\n pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_VACUUMED, blkno);\n\n+ /* Update error traceback information */\n+ olderrcbarg = *vacrelstats;\n+ update_vacuum_error_cbarg(vacrelstats, VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n+ blkno, NULL, false);\n\nSince we update vacrelstats->blkno during in the loop in\nlazy_vacuum_heap() we unnecessarily update blkno twice to the same\nvalue. Also I think we don't need to revert back the callback\narguments in lazy_vacuum_page(). 
Perhaps we can either remove the\nchange of lazy_vacuum_page() or move the code updating\nvacrelstats->blkno to the beginning of lazy_vacuum_page(). I prefer\nthe latter.\n\n2.\n+/*\n+ * Update vacuum error callback for the current phase, block, and index.\n+ *\n+ * free_oldindname is true if the previous \"indname\" should be freed.\nIt must be\n+ * false if the caller has copied the old LVRelStats, to avoid keeping a\n+ * pointer to a freed allocation. In which case, the caller should call again\n+ * with free_oldindname as true to avoid a leak.\n+ */\n+static void\n+update_vacuum_error_cbarg(LVRelStats *errcbarg, int phase, BlockNumber blkno,\n+ char *indname, bool free_oldindname)\n\nI'm not sure why \"free_oldindname\" is necessary. Since we initialize\nvacrelstats->indname with NULL and revert the callback arguments at\nthe end of functions that needs update them, vacrelstats->indname is\nNULL at the beginning of lazy_vacuum_index() and lazy_cleanup_index().\nAnd we make a copy of index name in update_vacuum_error_cbarg(). So I\nthink we can pfree the old index name if errcbarg->indname is not NULL.\n\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 26 Mar 2020 20:56:54 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 08:56:54PM +0900, Masahiko Sawada wrote:\n> 1.\n> @@ -1844,9 +1914,15 @@ lazy_vacuum_page(Relation onerel, BlockNumber\n> blkno, Buffer buffer,\n> int uncnt = 0;\n> TransactionId visibility_cutoff_xid;\n> bool all_frozen;\n> + LVRelStats olderrcbarg;\n> \n> pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_VACUUMED, blkno);\n> \n> + /* Update error traceback information */\n> + olderrcbarg = *vacrelstats;\n> + update_vacuum_error_cbarg(vacrelstats, VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n> + blkno, NULL, false);\n> \n> Since we update vacrelstats->blkno during in the loop in\n> lazy_vacuum_heap() we unnecessarily update blkno twice to the same\n> value. Also I think we don't need to revert back the callback\n> arguments in lazy_vacuum_page(). Perhaps we can either remove the\n> change of lazy_vacuum_page() or move the code updating\n> vacrelstats->blkno to the beginning of lazy_vacuum_page(). I prefer\n> the latter.\n\nWe want the error callback to be in place during lazy_scan_heap, since it\ncalls ReadBufferExtended().\n\nWe can't remove the change in lazy_vacuum_page, since it's also called from\nlazy_scan_heap, if there are no indexes.\n\nWe want lazy_vacuum_page to \"revert back\" since we go from \"scanning heap\" to\n\"vacuuming heap\". lazy_vacuum_page was the motivation for saving and restoring\nthe called arguments, otherwise lazy_scan_heap() would have to clean up after\nthe function it called, which was unclean. Now, every function cleans up after\nitself.\n\nDoes that address your comment ?\n\n> +static void\n> +update_vacuum_error_cbarg(LVRelStats *errcbarg, int phase, BlockNumber blkno,\n> + char *indname, bool free_oldindname)\n> \n> I'm not sure why \"free_oldindname\" is necessary. 
Since we initialize\n> vacrelstats->indname with NULL and revert the callback arguments at\n> the end of functions that needs update them, vacrelstats->indname is\n> NULL at the beginning of lazy_vacuum_index() and lazy_cleanup_index().\n> And we make a copy of index name in update_vacuum_error_cbarg(). So I\n> think we can pfree the old index name if errcbarg->indname is not NULL.\n\nWe want to avoid doing this:\n olderrcbarg = *vacrelstats // saves a pointer\n update_vacuum_error_cbarg(... NULL); // frees the pointer and sets indname to NULL\n update_vacuum_error_cbarg(... olderrcbarg.oldindnam) // puts back the pointer, which has been freed\n // hit an error, and the callback accesses the pfreed pointer\n\nI think that's only an issue for lazy_vacuum_index().\n\nAnd I think you're right: we only save state when the calling function has a\nindname=NULL, so we never \"put back\" a non-NULL indname. We go from having a\nindname=NULL at lazy_scan_heap to not not-NULL at lazy_vacuum_index, and never\nthe other way around. So once we've \"reverted back\", 1) the pointer is null;\nand, 2) the callback function doesn't access it for the previous/reverted phase\nanyway.\n\nHm, I was just wondering what happens if an error happens *during*\nupdate_vacuum_error_cbarg(). It seems like if we set\nerrcbarg->phase=VACUUM_INDEX before setting errcbarg->indname=indname, then an\nerror would cause a crash. And if we pfree and set indname before phase, it'd\nbe a problem when going from an index phase to non-index phase. So maybe we\nhave to set errcbarg->phase=VACUUM_ERRCB_PHASE_UNKNOWN while in the function,\nand errcbarg->phase=phase last.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 26 Mar 2020 10:04:57 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 10:04:57AM -0500, Justin Pryzby wrote:\n> Does that address your comment ?\n\nI hope so.\n\n> > I'm not sure why \"free_oldindname\" is necessary. Since we initialize\n> > vacrelstats->indname with NULL and revert the callback arguments at\n> > the end of functions that needs update them, vacrelstats->indname is\n> > NULL at the beginning of lazy_vacuum_index() and lazy_cleanup_index().\n> > And we make a copy of index name in update_vacuum_error_cbarg(). So I\n> > think we can pfree the old index name if errcbarg->indname is not NULL.\n> \n> We want to avoid doing this:\n> olderrcbarg = *vacrelstats // saves a pointer\n> update_vacuum_error_cbarg(... NULL); // frees the pointer and sets indname to NULL\n> update_vacuum_error_cbarg(... olderrcbarg.oldindnam) // puts back the pointer, which has been freed\n> // hit an error, and the callback accesses the pfreed pointer\n> \n> I think that's only an issue for lazy_vacuum_index().\n> \n> And I think you're right: we only save state when the calling function has a\n> indname=NULL, so we never \"put back\" a non-NULL indname. We go from having a\n> indname=NULL at lazy_scan_heap to not not-NULL at lazy_vacuum_index, and never\n> the other way around. So once we've \"reverted back\", 1) the pointer is null;\n> and, 2) the callback function doesn't access it for the previous/reverted phase\n> anyway.\n\nI removed the free_oldindname argument.\n\n> Hm, I was just wondering what happens if an error happens *during*\n> update_vacuum_error_cbarg(). It seems like if we set\n> errcbarg->phase=VACUUM_INDEX before setting errcbarg->indname=indname, then an\n> error would cause a crash. And if we pfree and set indname before phase, it'd\n> be a problem when going from an index phase to non-index phase. 
So maybe we\n> have to set errcbarg->phase=VACUUM_ERRCB_PHASE_UNKNOWN while in the function,\n> and errcbarg->phase=phase last.\n\nAnd addressed that.\n\nAlso, I realized that lazy_cleanup_index has an early \"return\", so the \"Revert\nback\" was ineffective. We talked about how that wasn't needed, since we never\ngo back to a previous phase. Amit wanted to keep it there for consistency, but\nI'd prefer to put any extra effort into calling out the special treatment\nneeded/given to lazy_vacuum_heap/index, rather than making everything\n\"consistent\".\n\nAmit: I also moved the TRUNCATE_HEAP bit back to truncate_heap(), since 1) it's\nodd if we don't have anything in truncate_heap() about error reporting except\nfor \"vacrelstats->blkno = blkno\"; and, 2) it's nice to set the err callback arg\nright after pgstat_progress, and outside of any loop. In previous versions, it\nwas within the loop, because it closely wrapped RelationTruncate() and\ncount_nondeletable_pages() - a previous version used separate phases.\n\n-- \nJustin",
"msg_date": "Thu, 26 Mar 2020 17:17:52 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On 2020-Mar-26, Justin Pryzby wrote:\n\n> On Thu, Mar 26, 2020 at 10:04:57AM -0500, Justin Pryzby wrote:\n\n> > And I think you're right: we only save state when the calling function has a\n> > indname=NULL, so we never \"put back\" a non-NULL indname. We go from having a\n> > indname=NULL at lazy_scan_heap to not not-NULL at lazy_vacuum_index, and never\n> > the other way around.\n> \n> I removed the free_oldindname argument.\n\nHah, I was wondering about that free_oldindname business this morning as\nwell.\n\n> > ... So once we've \"reverted back\", 1) the pointer is null; and, 2)\n> > the callback function doesn't access it for the previous/reverted\n> > phase anyway.\n\nBTW I'm pretty sure this \"revert back\" phrasing is not good English --\nyou should just use \"revert\". Maybe get some native speaker's opinion\non it.\n\nAnd speaking of language, I find the particle \"cbarg\" rather very ugly,\nand it's *everywhere* -- function name, function argument, local\nvariable, enum values, enum name. It even spread to the typedefs.list\nfile! Is this a new virus??? Put some soap in it! Can't we use \"info\"\nor \"state\" or something similar, less infectious, instead?\n\nThanks\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 26 Mar 2020 19:49:51 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 07:49:51PM -0300, Alvaro Herrera wrote:\n> > > ... So once we've \"reverted back\", 1) the pointer is null; and, 2)\n> > > the callback function doesn't access it for the previous/reverted\n> > > phase anyway.\n> \n> BTW I'm pretty sure this \"revert back\" phrasing is not good English --\n> you should just use \"revert\". Maybe get some native speaker's opinion\n> on it.\n\nI'm a native speaker; \"revert back\" might be called redundant but I think it's\ncommon usage.\n\n> And speaking of language, I find the particle \"cbarg\" rather very ugly,\n> and it's *everywhere* -- function name, function argument, local\n> variable, enum values, enum name. It even spread to the typedefs.list\n> file! Is this a new virus??? Put some soap in it! Can't we use \"info\"\n> or \"state\" or something similar, less infectious, instead?\n\nI renamed it since it was kind of opaque looking. It's in all the same places,\nso equally infectious; but I hope you like it better.\n\nCheers,\n-- \nJustin",
"msg_date": "Thu, 26 Mar 2020 18:33:21 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On 2020-Mar-26, Justin Pryzby wrote:\n\n> On Thu, Mar 26, 2020 at 07:49:51PM -0300, Alvaro Herrera wrote:\n\n> > BTW I'm pretty sure this \"revert back\" phrasing is not good English --\n> > you should just use \"revert\". Maybe get some native speaker's opinion\n> > on it.\n> \n> I'm a native speaker; \"revert back\" might be called redundant but I think it's\n> common usage.\n\nOh, okay.\n\n> > And speaking of language, I find the particle \"cbarg\" rather very ugly,\n> \n> I renamed it since it was kind of opaque looking. It's in all the same places,\n> so equally infectious; but I hope you like it better.\n\nI like it much better, thanks :-)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 26 Mar 2020 21:41:21 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 3:47 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n>\n> > Hm, I was just wondering what happens if an error happens *during*\n> > update_vacuum_error_cbarg(). It seems like if we set\n> > errcbarg->phase=VACUUM_INDEX before setting errcbarg->indname=indname, then an\n> > error would cause a crash.\n> >\n\nCan't that be avoided if you check if cbarg->indname is non-null in\nvacuum_error_callback as we are already doing for\nVACUUM_ERRCB_PHASE_TRUNCATE?\n\n> > And if we pfree and set indname before phase, it'd\n> > be a problem when going from an index phase to non-index phase.\n\nHow is it possible that we move to the non-index phase without\nclearing indname as we always revert back the old phase information?\nI think it is possible only if we don't clear indname after the phase\nis over.\n\n> > So maybe we\n> > have to set errcbarg->phase=VACUUM_ERRCB_PHASE_UNKNOWN while in the function,\n> > and errcbarg->phase=phase last.\n\nI find that a bit ad-hoc, if possible, let's try to avoid it.\n\n>\n> And addressed that.\n>\n> Also, I realized that lazy_cleanup_index has an early \"return\", so the \"Revert\n> back\" was ineffective.\n>\n\nWe can call it immediately after index_vacuum_cleanup to avoid that.\n\n> We talked about how that wasn't needed, since we never\n> go back to a previous phase. Amit wanted to keep it there for consistency, but\n> I'd prefer to put any extra effort into calling out the special treatment\n> needed/given to lazy_vacuum_heap/index, rather than making everything\n> \"consistent\".\n>\n\nApart from being consistent, the point was it doesn't seem good that\nAPI being called to assume that there is nothing more the caller can\ndo. It might be problematic if we later want to enhance or add\nsomething to the caller.\n\n> Amit: I also moved the TRUNCATE_HEAP bit back to truncate_heap(),\n\nThere is no problem with it. 
We can do it either way and I have also\nconsidered it the way you have done but decide to keep in the caller\nbecause of the previous point I mentioned (not sure if it a good idea\nthat API being called can assume that there is nothing more the caller\ncan do after this).\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 Mar 2020 09:49:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 09:49:29AM +0530, Amit Kapila wrote:\n> On Fri, Mar 27, 2020 at 3:47 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > > Hm, I was just wondering what happens if an error happens *during*\n> > > update_vacuum_error_cbarg(). It seems like if we set\n> > > errcbarg->phase=VACUUM_INDEX before setting errcbarg->indname=indname, then an\n> > > error would cause a crash.\n> > >\n> \n> Can't that be avoided if you check if cbarg->indname is non-null in\n> vacuum_error_callback as we are already doing for\n> VACUUM_ERRCB_PHASE_TRUNCATE?\n> \n> > > And if we pfree and set indname before phase, it'd\n> > > be a problem when going from an index phase to non-index phase.\n> \n> How is it possible that we move to the non-index phase without\n> clearing indname as we always revert back the old phase information?\n\nThe crash scenario I'm trying to avoid would be like statement_timeout or other\nasynchronous event occurring between two non-atomic operations.\n\nI said that there's an issue no matter what order we set indname/phase;\nIf we wrote:\n|cbarg->indname = indname;\n|cbarg->phase = phase;\n..and hit a timeout (or similar) between setting indname=NULL but before\nsetting phase=VACUUM_INDEX, then we can crash due to null pointer.\n\nBut if we write:\n|cbarg->phase = phase;\n|if (cbarg->indname) {pfree(cbarg->indname);}\n|cbarg->indname = indname ? 
pstrdup(indname) : NULL;\n..then we can still crash if we timeout between freeing cbarg->indname and\nsetting it to null, due to accessing a pfreed allocation.\n\n> > > So maybe we\n> > > have to set errcbarg->phase=VACUUM_ERRCB_PHASE_UNKNOWN while in the function,\n> > > and errcbarg->phase=phase last.\n> \n> I find that a bit ad-hoc, if possible, let's try to avoid it.\n\nI think we can do what you're suggesting, if the callback checks if (cbarg->indname!=NULL).\n\nWe'd have to write:\n// Must set indname *before* updating phase, in case an error occurs before\n// phase is set, to avoid crashing if we're going from an index phase to a\n// non-index phase (which should not read indname). Must not free indname\n// until it's set to null.\nchar *tmp = cbarg->indname;\ncbarg->indname = indname ? pstrdup(indname) : NULL;\ncbarg->phase = phase;\nif (tmp){pfree(tmp);}\n\nDo you think that's better?\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 26 Mar 2020 23:44:24 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 11:44:24PM -0500, Justin Pryzby wrote:\n> On Fri, Mar 27, 2020 at 09:49:29AM +0530, Amit Kapila wrote:\n> > On Fri, Mar 27, 2020 at 3:47 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > > Hm, I was just wondering what happens if an error happens *during*\n> > > > update_vacuum_error_cbarg(). It seems like if we set\n> > > > errcbarg->phase=VACUUM_INDEX before setting errcbarg->indname=indname, then an\n> > > > error would cause a crash.\n> > > >\n> > \n> > Can't that be avoided if you check if cbarg->indname is non-null in\n> > vacuum_error_callback as we are already doing for\n> > VACUUM_ERRCB_PHASE_TRUNCATE?\n> > \n> > > > And if we pfree and set indname before phase, it'd\n> > > > be a problem when going from an index phase to non-index phase.\n> > \n> > How is it possible that we move to the non-index phase without\n> > clearing indname as we always revert back the old phase information?\n> \n> The crash scenario I'm trying to avoid would be like statement_timeout or other\n> asynchronous event occurring between two non-atomic operations.\n> \n> I said that there's an issue no matter what order we set indname/phase;\n> If we wrote:\n> |cbarg->indname = indname;\n> |cbarg->phase = phase;\n> ..and hit a timeout (or similar) between setting indname=NULL but before\n> setting phase=VACUUM_INDEX, then we can crash due to null pointer.\n> \n> But if we write:\n> |cbarg->phase = phase;\n> |if (cbarg->indname) {pfree(cbarg->indname);}\n> |cbarg->indname = indname ? 
pstrdup(indname) : NULL;\n> ..then we can still crash if we timeout between freeing cbarg->indname and\n> setting it to null, due to acccessing a pfreed allocation.\n\nIf \"phase\" is updated before \"indname\", I'm able to induce a synthetic crash\nlike this:\n\n+if (errinfo->phase==VACUUM_ERRCB_PHASE_VACUUM_INDEX && errinfo->indname==NULL) \n+{\n+kill(getpid(), SIGINT);\n+pg_sleep(1); // that's needed since signals are delivered asynchronously\n+}\n\nAnd another crash if we do this after pfree but before setting indname.\n\n+if (errinfo->phase==VACUUM_ERRCB_PHASE_VACUUM_INDEX && errinfo->indname!=NULL)\n+{\n+kill(getpid(), SIGINT);\n+pg_sleep(1);\n+}\n\nI'm not sure if those are possible outside of \"induced\" errors. Maybe the\nfunction is essentially atomic due to no CHECK_FOR_INTERRUPTS or similar?\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 27 Mar 2020 01:16:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 11:46 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Mar 26, 2020 at 11:44:24PM -0500, Justin Pryzby wrote:\n> > On Fri, Mar 27, 2020 at 09:49:29AM +0530, Amit Kapila wrote:\n> > > On Fri, Mar 27, 2020 at 3:47 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > > > Hm, I was just wondering what happens if an error happens *during*\n> > > > > update_vacuum_error_cbarg(). It seems like if we set\n> > > > > errcbarg->phase=VACUUM_INDEX before setting errcbarg->indname=indname, then an\n> > > > > error would cause a crash.\n> > > > >\n> > >\n> > > Can't that be avoided if you check if cbarg->indname is non-null in\n> > > vacuum_error_callback as we are already doing for\n> > > VACUUM_ERRCB_PHASE_TRUNCATE?\n> > >\n> > > > > And if we pfree and set indname before phase, it'd\n> > > > > be a problem when going from an index phase to non-index phase.\n> > >\n> > > How is it possible that we move to the non-index phase without\n> > > clearing indname as we always revert back the old phase information?\n> >\n> > The crash scenario I'm trying to avoid would be like statement_timeout or other\n> > asynchronous event occurring between two non-atomic operations.\n> >\n> > I said that there's an issue no matter what order we set indname/phase;\n> > If we wrote:\n> > |cbarg->indname = indname;\n> > |cbarg->phase = phase;\n> > ..and hit a timeout (or similar) between setting indname=NULL but before\n> > setting phase=VACUUM_INDEX, then we can crash due to null pointer.\n> >\n> > But if we write:\n> > |cbarg->phase = phase;\n> > |if (cbarg->indname) {pfree(cbarg->indname);}\n> > |cbarg->indname = indname ? 
pstrdup(indname) : NULL;\n> > ..then we can still crash if we timeout between freeing cbarg->indname and\n> > setting it to null, due to acccessing a pfreed allocation.\n>\n> If \"phase\" is updated before \"indname\", I'm able to induce a synthetic crash\n> like this:\n>\n> +if (errinfo->phase==VACUUM_ERRCB_PHASE_VACUUM_INDEX && errinfo->indname==NULL)\n> +{\n> +kill(getpid(), SIGINT);\n> +pg_sleep(1); // that's needed since signals are delivered asynchronously\n> +}\n>\n> And another crash if we do this after pfree but before setting indname.\n>\n> +if (errinfo->phase==VACUUM_ERRCB_PHASE_VACUUM_INDEX && errinfo->indname!=NULL)\n> +{\n> +kill(getpid(), SIGINT);\n> +pg_sleep(1);\n> +}\n>\n> I'm not sure if those are possible outside of \"induced\" errors. Maybe the\n> function is essentially atomic due to no CHECK_FOR_INTERRUPTS or similar?\n>\n\nYes, this is exactly the point. I think unless you have\nCHECK_FOR_INTERRUPTS in that function, the problems you are trying to\nthink won't happen.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 Mar 2020 11:50:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, 27 Mar 2020 at 07:17, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Mar 26, 2020 at 10:04:57AM -0500, Justin Pryzby wrote:\n> > Does that address your comment ?\n>\n> I hope so.\n\nThank you for updating the patch. I'm concerned a bit about overhead\nof frequently updating and reverting the callback arguments in\nlazy_vacuum_page(). We call that function every time when we vacuum a\npage, but if the table has an index, we actually don't need to update\nthe callback arguments in that function. But I hope it's negligible\nsince all operation will be performed on memory.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Mar 2020 16:58:52 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 1:29 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 27 Mar 2020 at 07:17, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Thu, Mar 26, 2020 at 10:04:57AM -0500, Justin Pryzby wrote:\n> > > Does that address your comment ?\n> >\n> > I hope so.\n>\n> Thank you for updating the patch. I'm concerned a bit about overhead\n> of frequently updating and reverting the callback arguments in\n> lazy_vacuum_page(). We call that function every time when we vacuum a\n> page, but if the table has an index, we actually don't need to update\n> the callback arguments in that function. But I hope it's negligible\n> since all operation will be performed on memory.\n>\n\nRight, it will be a few instructions. I think if there is any\noverhead of this, we can easily avoid that by (a) adding a check in\nupdate_vacuum_error_cbarg which tells if the phase is getting changed\nor not and if it is not changed, then return, (b) pass additional in\nlazy_vacuum_page() to indicate whether we need to change the phase,\n(c) just invoke update_vacuum_error_cbarg() in the caller. The\ncurrent way appears to be a bit neat than these options, so not sure\nif there is an advantage in changing it. Anyway, if we see any\nproblem with that it is trivial to change it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 Mar 2020 14:25:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 11:50:30AM +0530, Amit Kapila wrote:\n> > > The crash scenario I'm trying to avoid would be like statement_timeout or other\n> > > asynchronous event occurring between two non-atomic operations.\n> > >\n> > +if (errinfo->phase==VACUUM_ERRCB_PHASE_VACUUM_INDEX && errinfo->indname==NULL)\n> > +{\n> > +kill(getpid(), SIGINT);\n> > +pg_sleep(1); // that's needed since signals are delivered asynchronously\n> > +}\n> > I'm not sure if those are possible outside of \"induced\" errors. Maybe the\n> > function is essentially atomic due to no CHECK_FOR_INTERRUPTS or similar?\n> \n> Yes, this is exactly the point. I think unless you have\n> CHECK_FOR_INTERRUPTS in that function, the problems you are trying to\n> think won't happen.\n\nHm, but I caused a crash *without* adding CHECK_FOR_INTERRUPTS, just\nkill+sleep. The kill() could come from running pg_cancel_backend(). And the\nsleep() just encourages a context switch, which can happen at any time. I'm\nnot convinced that the function couldn't be interrupted by a signal.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 27 Mar 2020 14:04:29 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sat, Mar 28, 2020 at 12:34 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Mar 27, 2020 at 11:50:30AM +0530, Amit Kapila wrote:\n> > > > The crash scenario I'm trying to avoid would be like statement_timeout or other\n> > > > asynchronous event occurring between two non-atomic operations.\n> > > >\n> > > +if (errinfo->phase==VACUUM_ERRCB_PHASE_VACUUM_INDEX && errinfo->indname==NULL)\n> > > +{\n> > > +kill(getpid(), SIGINT);\n> > > +pg_sleep(1); // that's needed since signals are delivered asynchronously\n> > > +}\n> > > I'm not sure if those are possible outside of \"induced\" errors. Maybe the\n> > > function is essentially atomic due to no CHECK_FOR_INTERRUPTS or similar?\n> >\n> > Yes, this is exactly the point. I think unless you have\n> > CHECK_FOR_INTERRUPTS in that function, the problems you are trying to\n> > think won't happen.\n>\n> Hm, but I caused a crash *without* adding CHECK_FOR_INTERRUPTS, just\n> kill+sleep. The kill() could come from running pg_cancel_backend(). And the\n> sleep() just encourages a context switch, which can happen at any time.\n>\n\npg_sleep internally uses CHECK_FOR_INTERRUPTS() due to which it would\nhave accepted the signal sent via pg_cancel_backend(). Can you try\nyour scenario by temporarily removing CHECK_FOR_INTERRUPTS from\npg_sleep() or maybe better by using OS Sleep call?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 28 Mar 2020 06:28:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sat, Mar 28, 2020 at 06:28:38AM +0530, Amit Kapila wrote:\n> > Hm, but I caused a crash *without* adding CHECK_FOR_INTERRUPTS, just\n> > kill+sleep. The kill() could come from running pg_cancel_backend(). And the\n> > sleep() just encourages a context switch, which can happen at any time.\n> \n> pg_sleep internally uses CHECK_FOR_INTERRUPTS() due to which it would\n> have accepted the signal sent via pg_cancel_backend(). Can you try\n> your scenario by temporarily removing CHECK_FOR_INTERRUPTS from\n> pg_sleep() or maybe better by using OS Sleep call?\n\nAh, that explains it. Right, I'm not able to induce a crash with usleep().\n\nDo you want me to resend a patch without that change ? I feel like continuing\nto trade patches is more likely to introduce new errors or lose someone else's\nchanges than to make much progress. The patch has been through enough\niterations and it's very easy to miss an issue if I try to eyeball it.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 27 Mar 2020 20:16:32 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sat, Mar 28, 2020 at 6:46 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sat, Mar 28, 2020 at 06:28:38AM +0530, Amit Kapila wrote:\n> > > Hm, but I caused a crash *without* adding CHECK_FOR_INTERRUPTS, just\n> > > kill+sleep. The kill() could come from running pg_cancel_backend(). And the\n> > > sleep() just encourages a context switch, which can happen at any time.\n> >\n> > pg_sleep internally uses CHECK_FOR_INTERRUPTS() due to which it would\n> > have accepted the signal sent via pg_cancel_backend(). Can you try\n> > your scenario by temporarily removing CHECK_FOR_INTERRUPTS from\n> > pg_sleep() or maybe better by using OS Sleep call?\n>\n> Ah, that explains it. Right, I'm not able to induce a crash with usleep().\n>\n> Do you want me to resend a patch without that change ? I feel like continuing\n> to trade patches is more likely to introduce new errors or lose someone else's\n> changes than to make much progress. The patch has been through enough\n> iterations and it's very easy to miss an issue if I try to eyeball it.\n>\n\nI can do it but we have to agree on the other two points (a) I still\nfeel that switching to the truncate phase should be done at the place\nfrom where we are calling lazy_truncate_heap and (b)\nlazy_cleanup_index should switch back the error phase after calling\nindex_vacuum_cleanup. I have explained my reasoning for these points\na few emails back.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 28 Mar 2020 06:59:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sat, Mar 28, 2020 at 06:59:10AM +0530, Amit Kapila wrote:\n> On Sat, Mar 28, 2020 at 6:46 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Sat, Mar 28, 2020 at 06:28:38AM +0530, Amit Kapila wrote:\n> > > > Hm, but I caused a crash *without* adding CHECK_FOR_INTERRUPTS, just\n> > > > kill+sleep. The kill() could come from running pg_cancel_backend(). And the\n> > > > sleep() just encourages a context switch, which can happen at any time.\n> > >\n> > > pg_sleep internally uses CHECK_FOR_INTERRUPTS() due to which it would\n> > > have accepted the signal sent via pg_cancel_backend(). Can you try\n> > > your scenario by temporarily removing CHECK_FOR_INTERRUPTS from\n> > > pg_sleep() or maybe better by using OS Sleep call?\n> >\n> > Ah, that explains it. Right, I'm not able to induce a crash with usleep().\n> >\n> > Do you want me to resend a patch without that change ? I feel like continuing\n> > to trade patches is more likely to introduce new errors or lose someone else's\n> > changes than to make much progress. The patch has been through enough\n> > iterations and it's very easy to miss an issue if I try to eyeball it.\n> \n> I can do it but we have to agree on the other two points (a) I still\n> feel that switching to the truncate phase should be done at the place\n> from where we are calling lazy_truncate_heap and (b)\n> lazy_cleanup_index should switch back the error phase after calling\n> index_vacuum_cleanup. I have explained my reasoning for these points\n> a few emails back.\n\nI have no objection to either. It was intuitive to me to do it how I\noriginally wrote it but I'm not wedded to it.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 27 Mar 2020 20:34:34 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sat, Mar 28, 2020 at 7:04 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sat, Mar 28, 2020 at 06:59:10AM +0530, Amit Kapila wrote:\n> > On Sat, Mar 28, 2020 at 6:46 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > On Sat, Mar 28, 2020 at 06:28:38AM +0530, Amit Kapila wrote:\n> > > > > Hm, but I caused a crash *without* adding CHECK_FOR_INTERRUPTS, just\n> > > > > kill+sleep. The kill() could come from running pg_cancel_backend(). And the\n> > > > > sleep() just encourages a context switch, which can happen at any time.\n> > > >\n> > > > pg_sleep internally uses CHECK_FOR_INTERRUPTS() due to which it would\n> > > > have accepted the signal sent via pg_cancel_backend(). Can you try\n> > > > your scenario by temporarily removing CHECK_FOR_INTERRUPTS from\n> > > > pg_sleep() or maybe better by using OS Sleep call?\n> > >\n> > > Ah, that explains it. Right, I'm not able to induce a crash with usleep().\n> > >\n> > > Do you want me to resend a patch without that change ? I feel like continuing\n> > > to trade patches is more likely to introduce new errors or lose someone else's\n> > > changes than to make much progress. The patch has been through enough\n> > > iterations and it's very easy to miss an issue if I try to eyeball it.\n> >\n> > I can do it but we have to agree on the other two points (a) I still\n> > feel that switching to the truncate phase should be done at the place\n> > from where we are calling lazy_truncate_heap and (b)\n> > lazy_cleanup_index should switch back the error phase after calling\n> > index_vacuum_cleanup. I have explained my reasoning for these points\n> > a few emails back.\n>\n> I have no objection to either. It was intuitive to me to do it how I\n> originally wrote it but I'm not wedded to it.\n>\n\nPlease find attached the updated patch with all the changes discussed.\nLet me know if I have missed anything?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 28 Mar 2020 09:52:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sat, 28 Mar 2020 at 13:23, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Mar 28, 2020 at 7:04 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Sat, Mar 28, 2020 at 06:59:10AM +0530, Amit Kapila wrote:\n> > > On Sat, Mar 28, 2020 at 6:46 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > > On Sat, Mar 28, 2020 at 06:28:38AM +0530, Amit Kapila wrote:\n> > > > > > Hm, but I caused a crash *without* adding CHECK_FOR_INTERRUPTS, just\n> > > > > > kill+sleep. The kill() could come from running pg_cancel_backend(). And the\n> > > > > > sleep() just encourages a context switch, which can happen at any time.\n> > > > >\n> > > > > pg_sleep internally uses CHECK_FOR_INTERRUPTS() due to which it would\n> > > > > have accepted the signal sent via pg_cancel_backend(). Can you try\n> > > > > your scenario by temporarily removing CHECK_FOR_INTERRUPTS from\n> > > > > pg_sleep() or maybe better by using OS Sleep call?\n> > > >\n> > > > Ah, that explains it. Right, I'm not able to induce a crash with usleep().\n> > > >\n> > > > Do you want me to resend a patch without that change ? I feel like continuing\n> > > > to trade patches is more likely to introduce new errors or lose someone else's\n> > > > changes than to make much progress. The patch has been through enough\n> > > > iterations and it's very easy to miss an issue if I try to eyeball it.\n> > >\n> > > I can do it but we have to agree on the other two points (a) I still\n> > > feel that switching to the truncate phase should be done at the place\n> > > from where we are calling lazy_truncate_heap and (b)\n> > > lazy_cleanup_index should switch back the error phase after calling\n> > > index_vacuum_cleanup. I have explained my reasoning for these points\n> > > a few emails back.\n> >\n> > I have no objection to either. 
It was intuitive to me to do it how I\n> > originally wrote it but I'm not wedded to it.\n> >\n>\n> Please find attached the updated patch with all the changes discussed.\n> Let me know if I have missed anything?\n>\n\nThank you for updating the patch! Looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 29 Mar 2020 12:33:57 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sun, Mar 29, 2020 at 9:04 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Sat, 28 Mar 2020 at 13:23, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Please find attached the updated patch with all the changes discussed.\n> > Let me know if I have missed anything?\n> >\n>\n> Thank you for updating the patch! Looks good to me.\n>\n\nOkay, I will push this tomorrow.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 29 Mar 2020 11:35:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Sun, Mar 29, 2020 at 11:35 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Mar 29, 2020 at 9:04 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Sat, 28 Mar 2020 at 13:23, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Please find attached the updated patch with all the changes discussed.\n> > > Let me know if I have missed anything?\n> > >\n> >\n> > Thank you for updating the patch! Looks good to me.\n> >\n>\n> Okay, I will push this tomorrow.\n>\n\nPushed. I see one buildfarm failure [1] but that doesn't seem to be\nrelated to this patch.\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-30%2002%3A20%3A03\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 30 Mar 2020 08:42:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 5:03 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n\nNow that the main patch is committed, I have reviewed the other two patches.\n\nv37-0002-Drop-reltuples\n1.\n@@ -2289,11 +2289,10 @@ vacuum_one_index(Relation indrel,\nIndexBulkDeleteResult **stats,\n\n /* Do vacuum or cleanup of the index */\n if (lvshared->for_cleanup)\n- lazy_cleanup_index(indrel, stats, lvshared->reltuples,\n- lvshared->estimated_count, vacrelstats);\n+ lazy_cleanup_index(indrel, stats, vacrelstats);\n else\n lazy_vacuum_index(indrel, stats, dead_tuples,\n- lvshared->reltuples, vacrelstats);\n+ vacrelstats);\n\nI don't think the above change is correct. How will vacrelstats have\ncorrect values when vacuum_one_index is called via parallel workers\n(via parallel_vacuum_main)?\n\nThe v37-0003-Avoid-some-calls-to-RelationGetRelationName.patch looks\ngood to me. I have added the commit message in the patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 30 Mar 2020 14:31:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Mon, Mar 30, 2020 at 02:31:53PM +0530, Amit Kapila wrote:\n> Now that the main patch is committed, I have reviewed the other two patches.\n\nThanks for that\n\nOn Mon, Mar 30, 2020 at 02:31:53PM +0530, Amit Kapila wrote:\n> The v37-0003-Avoid-some-calls-to-RelationGetRelationName.patch looks\n> good to me. I have added the commit message in the patch.\n\nI realized the 0003 patch has an error in lazy_vacuum_index; it should be:\n\n- RelationGetRelationName(indrel),\n+ vacrelstats->indname,\n\nThat was maybe due to originally using a separate errinfo for each phase, with\none \"char *relname\" and no \"char *indrel\".\n\n> I don't think the above change is correct. How will vacrelstats have\n> correct values when vacuum_one_index is called via parallel workers\n> (via parallel_vacuum_main)?\n\nYou're right: parallel main's vacrelstats was added by this patchset and only\nthe error context fields were initialized. I fixed it up in the attached by\nalso setting vacrelstats->new_rel_tuples and old_live_tuples. It's not clear\nif this is worth it just to save an argument to two functions?\n\n-- \nJustin",
"msg_date": "Mon, 30 Mar 2020 11:26:16 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Mon, Mar 30, 2020 at 9:56 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Mar 30, 2020 at 02:31:53PM +0530, Amit Kapila wrote:\n> > The v37-0003-Avoid-some-calls-to-RelationGetRelationName.patch looks\n> > good to me. I have added the commit message in the patch.\n>\n> I realized the 0003 patch has an error in lazy_vacuum_index; it should be:\n>\n> - RelationGetRelationName(indrel),\n> + vacrelstats->indname,\n>\n\nHmm, it is like that in the patch I have sent yesterday. Are you\nreferring to the patch I have sent yesterday or some older version?\nOne thing I have noticed is that there is some saving by using\nvacrelstats->relnamespace as that avoids sys cache lookup. OTOH,\nusing vacrelstats->relname doesn't save much, but maybe for the sake\nof consistency, we can use it.\n\n> That was maybe due to originally using a separate errinfo for each phase, with\n> one \"char *relname\" and no \"char *indrel\".\n>\n> > I don't think the above change is correct. How will vacrelstats have\n> > correct values when vacuum_one_index is called via parallel workers\n> > (via parallel_vacuum_main)?\n>\n> You're right: parallel main's vacrelstats was added by this patchset and only\n> the error context fields were initialized. I fixed it up in the attached by\n> also setting vacrelstats->new_rel_tuples and old_live_tuples. It's not clear\n> if this is worth it just to save an argument to two functions?\n>\n\nRight, it is not clear to me whether that is an improvement, so I\nsuggest let's leave that patch for now.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 Mar 2020 07:50:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Mar 31, 2020 at 07:50:45AM +0530, Amit Kapila wrote:\n> On Mon, Mar 30, 2020 at 9:56 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Mon, Mar 30, 2020 at 02:31:53PM +0530, Amit Kapila wrote:\n> > > The v37-0003-Avoid-some-calls-to-RelationGetRelationName.patch looks\n> > > good to me. I have added the commit message in the patch.\n> >\n> > I realized the 0003 patch has an error in lazy_vacuum_index; it should be:\n> >\n> > - RelationGetRelationName(indrel),\n> > + vacrelstats->indname,\n> >\n> \n> Hmm, it is like that in the patch I have sent yesterday. Are you\n> referring to the patch I have sent yesterday or some older version?\n\nOh good. That was a recent fix I made, and I was afraid I'd never sent it, and\nnot sure if you'd used it. Looks like it was fixed since v36... As you can\nsee, I'm losing track of my branches. It will be nice to finally put this to\nrest.\n\n> One thing I have noticed is that there is some saving by using\n> vacrelstats->relnamespace as that avoids sys cache lookup. OTOH,\n> using vacrelstats->relname doesn't save much, but maybe for the sake\n> of consistency, we can use it.\n\nMostly I wrote that to avoid repeatedly calling functions/macro with long name.\nI consider it a minor cleanup. I think we should put them to use. The\nLVRelStats describes them as not being specifically for the error context.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 30 Mar 2020 22:23:08 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On Tue, Mar 31, 2020 at 8:53 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Mar 31, 2020 at 07:50:45AM +0530, Amit Kapila wrote:\n> > One thing I have noticed is that there is some saving by using\n> > vacrelstats->relnamespace as that avoids sys cache lookup. OTOH,\n> > using vacrelstats->relname doesn't save much, but maybe for the sake\n> > of consistency, we can use it.\n>\n> Mostly I wrote that to avoid repeatedly calling functions/macro with long name.\n> I consider it a minor cleanup. I think we should put them to use. The\n> LVRelStats describes them as not being specifically for the error context.\n>\n\nPushed. I think we are done here. The patch is marked as committed in\nCF. Thank you!\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 1 Apr 2020 07:54:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On 2020-04-01 07:54:45 +0530, Amit Kapila wrote:\n> Pushed. I think we are done here. The patch is marked as committed in\n> CF. Thank you!\n\nAwesome! Thanks for all your work on this, all. This'll make it a lot\neasier to debug errors during autovacuum.\n\n\n",
"msg_date": "Wed, 1 Apr 2020 12:11:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
},
{
"msg_contents": "On 2020-Apr-01, Andres Freund wrote:\n\n> On 2020-04-01 07:54:45 +0530, Amit Kapila wrote:\n> > Pushed. I think we are done here. The patch is marked as committed in\n> > CF. Thank you!\n> \n> Awesome! Thanks for all your work on this, all. This'll make it a lot\n> easier to debug errors during autovacuum.\n\nSeconded!\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 1 Apr 2020 16:31:53 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error context for vacuum to include block number"
}
] |
[
{
"msg_contents": "I noticed that the nbtree README has an obsolete reference to the\nformer design of get_actual_variable_range() in the \"Scans during\nRecovery\" section. It used to use a dirty snapshot, but doesn't\nanymore. These days, get_actual_variable_range() uses\nSnapshotNonVacuumable -- see commits 3ca930fc and d3751adc. I would\nlike to keep the README current.\n\nMy understanding is that we can trust RecentGlobalXmin to be something\nuseful and current during recovery, in general, so the selfuncs.c\nindex-only scan (which uses SnapshotNonVacuumable + RecentGlobalXmin)\ncan be trusted to work just as well as it would on the primary. Does\nthat sound correct?\n\nThe background here is that I plan on finishing off the work started\nby Simon's commit 3e4b7d87; I want to *completely* remove now-dead\ncode that was used for \"recovery pin scans\". 3e4b7d87 disabled these\n\"pin scans\" without removing them altogether, which just seems sloppy\nnow. There are quite a lot of comments that needlessly talk about this\npin scan mechanism in far removed places like nbtxlog.h. Also, we\nwaste a small amount of space in xl_btree_vacuum WAL records, since we\ndon't need to WAL-log lastBlockVacuumed (we also don't need to call\n_bt_delitems_vacuum() one last time in the case where we don't have\nanything to kill on the last block, just so the pin scan can happen --\nit won't ever happen).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 20 Nov 2019 13:43:05 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Why is get_actual_variable_range()'s use of SnapshotNonVacuumable\n safe during recovery?"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 1:43 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> My understanding is that we can trust RecentGlobalXmin to be something\n> useful and current during recovery, in general, so the selfuncs.c\n> index-only scan (which uses SnapshotNonVacuumable + RecentGlobalXmin)\n> can be trusted to work just as well as it would on the primary. Does\n> that sound correct?\n\nNobody wants to chime in on this?\n\nI would like to fix the nbtree README soon. It's kind of standing in\nthe way of my plan to finish off the work started by Simon's commit\n3e4b7d87, and remove the remaining remnants of nbtree VACUUM \"pin\nscans\". Apart from anything else, the current organisation of the code\nis contradictory.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 13 Dec 2019 16:48:39 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Why is get_actual_variable_range()'s use of SnapshotNonVacuumable\n safe during recovery?"
}
] |
[
{
"msg_contents": "Hi,\n\nAs noted by Amit Khandhekar yesterday[1], BufFileLoad() silently eats\npread()'s error and makes them indistinguishable from EOF.\n\nSome observations:\n\n1. BufFileRead() returns size_t (not ssize_t), so it's an\nfread()-style interface, not a read()-style interface that could use\n-1 to signal errors. Unlike fread(), it doesn't seem to have anything\ncorresponding to feof()/ferror()/clearerr() that lets the caller\ndistinguish between EOF and error, so our hash and tuplestore spill\ncode simply trusts that if there is a 0 size read where it expects a\ntuple boundary, it must be EOF.\n\n2. BufFileWrite() is the same, but just like fwrite(), a short write\nmust always mean error, so there is no problem here.\n\n3. The calling code assumes that unexpected short read or write sets\nerrno, which isn't the documented behaviour of fwrite() and fread(),\nso there we're doing something a bit different (which is fine, just\npointing it out; we're sort of somewhere between the <stdio.h> and\n<unistd.h> functions, in terms of error reporting).\n\nI think the choices are: (1) switch to ssize_t and return -1, (2) add\nat least one of BufFileEof(), BufFileError(), (3) have BufFileRead()\nraise errors with elog(). I lean towards (2), and I doubt we need\nBufFileClear() because the only thing we ever do in client code on\nerror is immediately burn the world down.\n\nIf we simply added an error flag to track if FileRead() had ever\nsignalled an error, we could change nodeHashjoin.c to do something\nalong these lines:\n\n- if (nread == 0) /* end of file */\n+ if (!BufFileError(file) && nread == 0) /* end of file */\n\n... and likewise for tuplestore.c:\n\n- if (nbytes != 0 || !eofOK)\n+ if (BufFileError(file) || (nbytes == 0 && !eofOK))\n ereport(ERROR,\n\nAbout the only advantage to the current design I can see if that you\ncan probably make your query finish faster by pulling out your temp\ntablespace USB stick at the right time. 
Or am I missing some\ncomplication?\n\n[1] https://www.postgresql.org/message-id/CAJ3gD9emnEys%3DR%2BT3OVt_87DuMpMfS4KvoRV6e%3DiSi5PT%2B9f3w%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 21 Nov 2019 10:50:54 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "BufFileRead() error signalling"
},
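Thomas's option (2) — keep the fread()-style size_t return, but add a ferror()-style companion — can be sketched in miniature. Everything below (MiniBufFile and its functions) is a hypothetical stand-in for illustration, not the actual buffile.c code:

```c
#include <assert.h>
#include <string.h>

/*
 * Hypothetical miniature of option (2): record low-level read failures
 * in a sticky flag so the caller can tell a genuine EOF (0 bytes, no
 * error) from a failed read (0 bytes, error flag set). Not the real
 * buffile.c API.
 */
typedef struct MiniBufFile
{
	const char *data;		/* backing "file" contents */
	size_t		len;		/* total length of data */
	size_t		pos;		/* current read position */
	int			read_error; /* sticky flag, like ferror() */
} MiniBufFile;

/* fread()-style read: returns bytes read; 0 on its own is ambiguous. */
static size_t
MiniBufFileRead(MiniBufFile *file, void *ptr, size_t size)
{
	size_t		avail = file->len - file->pos;

	if (file->read_error)
		return 0;			/* a real version would set the flag on pread() failure */
	if (size > avail)
		size = avail;
	memcpy(ptr, file->data + file->pos, size);
	file->pos += size;
	return size;
}

/* The missing ferror()-style companion proposed in the thread. */
static int
MiniBufFileError(const MiniBufFile *file)
{
	return file->read_error;
}
```

With an accessor like this in place, the nodeHashjoin.c test shown above becomes the proposed `if (!BufFileError(file) && nread == 0)` shape: zero bytes plus a clear error flag is the only combination that means true end-of-file.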
{
"msg_contents": "On Thu, Nov 21, 2019 at 10:50 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> As noted by Amit Khandhekar yesterday[1], BufFileLoad() silently eats\n\nErm, Khandekar, sorry for the extra h!\n\n\n",
"msg_date": "Thu, 21 Nov 2019 10:52:51 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> As noted by Amit Khandhekar yesterday[1], BufFileLoad() silently eats\n> pread()'s error and makes them indistinguishable from EOF.\n\nThat's definitely bad.\n\n> I think the choices are: (1) switch to ssize_t and return -1, (2) add\n> at least one of BufFileEof(), BufFileError(), (3) have BufFileRead()\n> raise errors with elog(). I lean towards (2), and I doubt we need\n> BufFileClear() because the only thing we ever do in client code on\n> error is immediately burn the world down.\n\nI'd vote for (3), I think. Making the callers responsible for error\nchecks just leaves us with a permanent hazard of errors-of-omission,\nand as you say, there's really no use-case where we'd be trying to\nrecover from the error.\n\nI think that the motivation for making the caller do it might've\nbeen an idea that the caller could provide a more useful error\nmessage, but I'm not real sure that that's true --- the caller\ndoesn't know the physical file's name, and it doesn't necessarily\nhave the right errno either.\n\nPossibly we could address any loss of usefulness by requiring callers\nto pass some sort of context identification to BufFileCreate?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Nov 2019 17:31:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
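Tom's preferred shape — choice (3), reporting from inside the read routine — removes the errors-of-omission hazard because a short return can then only mean end-of-file. A minimal sketch under that assumption, with `die()` standing in for `ereport(ERROR, ...)` and `buf_read_checked()` a hypothetical helper, not the real BufFileRead():

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for ereport(ERROR, ...): report and bail out. */
static void
die(const char *msg)
{
	fprintf(stderr, "ERROR: %s\n", msg);
	exit(1);
}

/*
 * Option (3) in miniature: the read routine itself raises on I/O error,
 * so callers never need to distinguish EOF from failure.
 */
static size_t
buf_read_checked(FILE *fp, void *ptr, size_t size)
{
	size_t		nread = fread(ptr, 1, size, fp);

	if (nread < size && ferror(fp))
		die("could not read temporary file");
	return nread;			/* a short count here is a genuine EOF */
}
```

Callers can then treat a zero return as end-of-file unconditionally, which is exactly what the hash and tuplestore spill code already assumes today.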
{
"msg_contents": "On Thu, Nov 21, 2019 at 11:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I think the choices are: (1) switch to ssize_t and return -1, (2) add\n> > at least one of BufFileEof(), BufFileError(), (3) have BufFileRead()\n> > raise errors with elog(). I lean towards (2), and I doubt we need\n> > BufFileClear() because the only thing we ever do in client code on\n> > error is immediately burn the world down.\n>\n> I'd vote for (3), I think. Making the callers responsible for error\n> checks just leaves us with a permanent hazard of errors-of-omission,\n> and as you say, there's really no use-case where we'd be trying to\n> recover from the error.\n\nOk. Here is a first attempt at that. I also noticed that some\ncallers of BufFileFlush() eat or disguise I/O errors too, so here's a\npatch for that, though I'm a little confused about the exact meaning\nof EOF from BufFileSeek().\n\n> I think that the motivation for making the caller do it might've\n> been an idea that the caller could provide a more useful error\n> message, but I'm not real sure that that's true --- the caller\n> doesn't know the physical file's name, and it doesn't necessarily\n> have the right errno either.\n\nYeah, the errno is undefined right now since we don't know if there\nwas an error.\n\n> Possibly we could address any loss of usefulness by requiring callers\n> to pass some sort of context identification to BufFileCreate?\n\nHmm. It's an idea. While thinking about the cohesion of this\nmodule's API, I thought it seemed pretty strange to have\nBufFileWrite() using a different style of error reporting, so here's\nan experimental 0003 patch to make it consistent. I realise that an\nAPI change might affect extensions, so I'm not sure if it's a good\nidea even for master (obviously it's not back-patchable). 
You could\nbe right that more context would be good at least in the case of\nENOSPC: knowing that (say) a hash join or a sort or CTE is implicated\nwould be helpful.",
"msg_date": "Sat, 30 Nov 2019 15:46:16 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "You are checking file->dirty twice, first before calling the function and within the function too. Same for the Assert. For example.\r\nsize_t\r\nBufFileRead(BufFile *file, void *ptr, size_t size)\r\n{ \r\n size_t nread = 0;\r\n size_t nthistime;\r\n if (file->dirty)\r\n { \r\n BufFileFlush(file);\r\n Assert(!file->dirty);\r\n }\r\nstatic void\r\n BufFileFlush(BufFile *file)\r\n {\r\n if (file->dirty)\r\n BufFileDumpBuffer(file);\r\n Assert(!file->dirty);\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Tue, 10 Dec 2019 13:06:19 +0000",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
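Ibrar's point is that the dirty test (and the Assert) can live in one place: since the flush routine already guards on the flag, its caller can invoke it unconditionally. A toy model of the redundancy, using hypothetical stand-in names rather than the real buffile.c routines:

```c
#include <assert.h>

/* Toy stand-ins for the buffile.c routines under review. */
typedef struct
{
	int			dirty;		/* unwritten data in the buffer? */
	int			flushes;	/* how many times we really flushed */
} MiniFile;

static void
mini_dump_buffer(MiniFile *file)
{
	file->flushes++;
	file->dirty = 0;
}

/* The flush itself guards on the flag, so callers don't have to. */
static void
mini_flush(MiniFile *file)
{
	if (file->dirty)
		mini_dump_buffer(file);
	assert(!file->dirty);
}

/* Caller side after the review fix: no duplicate "if (file->dirty)". */
static void
mini_read_prepare(MiniFile *file)
{
	mini_flush(file);
}
```

Calling the flush on an already-clean file is a no-op, so hoisting the test into the callee loses nothing and keeps the invariant check in a single place.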
{
"msg_contents": "On Wed, Dec 11, 2019 at 2:07 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> You are checking file->dirty twice, first before calling the function and within the function too. Same for the Assert. For example.\n\nTrue. Thanks for the review. Before I post a new version, any\nopinions on whether to back-patch, and whether to do 0003 in master\nonly, or at all?\n\n\n",
"msg_date": "Sat, 25 Jan 2020 17:11:24 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 11:12 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Dec 11, 2019 at 2:07 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> > You are checking file->dirty twice, first before calling the function and within the function too. Same for the Assert. For example.\n>\n> True. Thanks for the review. Before I post a new version, any\n> opinions on whether to back-patch, and whether to do 0003 in master\n> only, or at all?\n\nRather than answering your actual question, I'd like to complain about this:\n\n if (BufFileRead(file, ptr, BLCKSZ) != BLCKSZ)\n- elog(ERROR, \"could not read temporary file: %m\");\n+ elog(ERROR, \"could not read temporary file\");\n\nI recognize that your commit message explains this change by saying\nthat this code will now never be reached except as a result of a short\nread, but I don't think that will necessarily be clear to future\nreaders of those code, or people who get the error message. It seems\nlike if we're going to do do this, the error messages ought to be\nchanged to complain about a short read rather than an inability to\nread for unspecified reasons. However, I wonder why we don't make\nBufFileRead() throw all of the errors including complaining about\nshort reads. I would like to confess my undying (and probably\nunrequited) love for the following code from md.c:\n\n errmsg(\"could not read block\n%u in file \\\"%s\\\": read only %d of %d bytes\",\n\nNow that is an error message! I am not confused! 
I don't know why that\nhappened, but I sure know what happened!\n\nI think we should aim for that kind of style everywhere, so that\ncomplaints about reading and writing files are typically reported as\neither of these:\n\ncould not read file \"%s\": %m\ncould not read file \"%s\": read only %d of %d bytes\n\nThere is an existing precedent in twophase.c which works almost this way:\n\n if (r != stat.st_size)\n {\n if (r < 0)\n ereport(ERROR,\n (errcode_for_file_access(),\n errmsg(\"could not read file\n\\\"%s\\\": %m\", path)));\n else\n ereport(ERROR,\n (errmsg(\"could not read file\n\\\"%s\\\": read %d of %zu\",\n path, r,\n(Size) stat.st_size)));\n }\n\nI'd advocate for a couple more words in the latter message (\"only\" and\n\"bytes\") but it's still excellent.\n\nOK, now that I've waxed eloquent on that topic, let me have a go at\nyour actual questions. Regarding back-patching, I don't mind\nback-patching error handling patches like this, but I don't feel it's\nnecessary if we have no evidence that data is actually getting\ncorrupted as a result of the problem and the chances of it actually\nhappening seems remote. It's worth keeping in mind that changes to\nmessage strings will tend to degrade translatability unless the new\nstrings are copies of existing strings. Regarding 0003, it seems good\nto me to make BufFileRead() and BufFileWrite() consistent in terms of\nerror-handling behavior, so +1 for the concept (but I haven't reviewed\nthe code).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 27 Jan 2020 10:09:30 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
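The twophase.c pattern Robert quotes generalizes to any raw read: branch on a negative return (where errno is meaningful) versus a short count (where it is not). A sketch of that pattern under those assumptions — `read_or_die()` is an illustrative helper, not PostgreSQL code:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/*
 * Report read failures the way Robert advocates: say what happened.
 * On a failed syscall, errno tells the story; on a short read it does
 * not, so report the byte counts instead. Hypothetical helper.
 */
static void
read_or_die(int fd, void *buf, size_t expected, const char *path)
{
	ssize_t		r = read(fd, buf, expected);

	if (r < 0)
	{
		/* the syscall itself failed: errno (i.e. %m) applies */
		fprintf(stderr, "could not read file \"%s\": %s\n",
				path, strerror(errno));
		exit(1);
	}
	if ((size_t) r != expected)
	{
		/* short read: errno is meaningless, report counts instead */
		fprintf(stderr, "could not read file \"%s\": read only %zd of %zu bytes\n",
				path, r, expected);
		exit(1);
	}
}
```

The two branches never print a stale errno for a short read, which is the confusion the current `"could not read temporary file: %m"` callers run into.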
{
"msg_contents": "On Mon, Jan 27, 2020 at 10:09:30AM -0500, Robert Haas wrote:\n> I recognize that your commit message explains this change by saying\n> that this code will now never be reached except as a result of a short\n> read, but I don't think that will necessarily be clear to future\n> readers of those code, or people who get the error message. It seems\n> like if we're going to do do this, the error messages ought to be\n> changed to complain about a short read rather than an inability to\n> read for unspecified reasons. However, I wonder why we don't make\n> BufFileRead() throw all of the errors including complaining about\n> short reads. I would like to confess my undying (and probably\n> unrequited) love for the following code from md.c:\n> \n> errmsg(\"could not read block\n> %u in file \\\"%s\\\": read only %d of %d bytes\",\n> \n> Now that is an error message! I am not confused! I don't know why that\n> happened, but I sure know what happened!\n\nI was briefly looking at 0001, and count -1 from me for the\nformulation of the error messages used in those patches.\n\n> I think we should aim for that kind of style everywhere, so that\n> complaints about reading and writing files are typically reported as\n> either of these:\n> \n> could not read file \"%s\": %m\n> could not read file \"%s\": read only %d of %d bytes\n\nThat's actually not the best fit, because this does not take care of\nthe pluralization of the second message if you have only 1 byte to\nread ;)\n\nA second point to take into account is that the unification of error\nmessages makes for less translation work, which is always welcome.\nThose points have been discussed on this thread:\nhttps://www.postgresql.org/message-id/20180520000522.GB1603@paquier.xyz\n\nThe related commit is 811b6e3, and the pattern we agreed on for a\npartial read was:\n\"could not read file \\\"%s\\\": read %d of %zu\"\n\nThen, if the syscall had an error we'd fall down to that:\n\"could not read file \\\"%s\\\": 
%m\"\n--\nMichael",
"msg_date": "Tue, 28 Jan 2020 11:03:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On Mon, Jan 27, 2020 at 9:03 PM Michael Paquier <michael@paquier.xyz> wrote:\n> That's actually not the best fit, because this does not take care of\n> the pluralization of the second message if you have only 1 byte to\n> read ;)\n\nBut ... if you have only one byte to read, you cannot have a short read.\n\n> A second point to take into account is that the unification of error\n> messages makes for less translation work, which is always welcome.\n> Those points have been discussed on this thread:\n> https://www.postgresql.org/message-id/20180520000522.GB1603@paquier.xyz\n\nI quickly reread that thread and I don't see that there's any firm\nconsensus there in favor of \"read %d of %zu\" over \"read only %d of %zu\nbytes\". Now, if most people prefer the former, so be it, but I don't\nthink that's clear from that thread.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Jan 2020 15:51:54 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 03:51:54PM -0500, Robert Haas wrote:\n> I quickly reread that thread and I don't see that there's any firm\n> consensus there in favor of \"read %d of %zu\" over \"read only %d of %zu\n> bytes\". Now, if most people prefer the former, so be it, but I don't\n> think that's clear from that thread.\n\nThe argument of consistency falls in favor of the former on HEAD:\n$ git grep \"could not read\" | grep \"read %d of %zu\" | wc -l\n59\n$ git grep \"could not read\" | grep \"read only %d of %zu\" | wc -l\n0\n--\nMichael",
"msg_date": "Wed, 29 Jan 2020 15:26:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 1:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Jan 28, 2020 at 03:51:54PM -0500, Robert Haas wrote:\n> > I quickly reread that thread and I don't see that there's any firm\n> > consensus there in favor of \"read %d of %zu\" over \"read only %d of %zu\n> > bytes\". Now, if most people prefer the former, so be it, but I don't\n> > think that's clear from that thread.\n>\n> The argument of consistency falls in favor of the former on HEAD:\n> $ git grep \"could not read\" | grep \"read %d of %zu\" | wc -l\n> 59\n> $ git grep \"could not read\" | grep \"read only %d of %zu\" | wc -l\n> 0\n\nTrue. I didn't realize that 'read %d of %zu' was so widely used.\n\nYour grep misses one instance of 'read only %d of %d bytes' because\nyou grepped for %zu specifically, but that doesn't really change the\noverall picture.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 Jan 2020 10:01:31 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 10:01:31AM -0500, Robert Haas wrote:\n> Your grep misses one instance of 'read only %d of %d bytes' because\n> you grepped for %zu specifically, but that doesn't really change the\n> overall picture.\n\nYes, the one in pg_checksums.c. That could actually be changed with a\ncast to Size. (Note that there is a second one related to writes but\nthere is a precedent in md.c, and a similar one in rewriteheap.c..)\n\nSorry for the digression.\n--\nMichael",
"msg_date": "Thu, 30 Jan 2020 14:27:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On Mon, Jan 27, 2020 at 7:09 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Rather than answering your actual question, I'd like to complain about\n> this:\n>\n> if (BufFileRead(file, ptr, BLCKSZ) != BLCKSZ)\n> - elog(ERROR, \"could not read temporary file: %m\");\n> + elog(ERROR, \"could not read temporary file\");\n>\n> I recognize that your commit message explains this change by saying\n> that this code will now never be reached except as a result of a short\n> read, but I don't think that will necessarily be clear to future\n> readers of those code, or people who get the error message. It seems\n> like if we're going to do do this, the error messages ought to be\n> changed to complain about a short read rather than an inability to\n> read for unspecified reasons. However, I wonder why we don't make\n> BufFileRead() throw all of the errors including complaining about\n> short reads. I would like to confess my undying (and probably\n> unrequited) love for the following code from md.c:\n>\n> errmsg(\"could not read block\n> %u in file \\\"%s\\\": read only %d of %d bytes\",\n>\n>\nIt would be cool to have the block number in more cases in error\nmessages. For example, in sts_parallel_scan_next()\n\n/* Seek and load the chunk header. */\nif (BufFileSeekBlock(accessor->read_file, read_page) != 0)\n ereport(ERROR,\n (errcode_for_file_access(),\n errmsg(\"could not read from shared tuplestore temporary\nfile\"),\n errdetail_internal(\"Could not seek to next block.\")));\n\nI'm actually in favor of having the block number in this error\nmessage. I think it would be helpful for multi-batch parallel\nhashjoin. 
If a worker reading one SharedTuplestoreChunk encounters an\nerror and another worker on a different SharedTuplstoreChunk of the\nsame file does not, I would find it useful to know more about which\nblock it was on when the error was encountered.\n\n-- \nMelanie Plageman",
"msg_date": "Thu, 30 Jan 2020 12:38:22 -0800",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "Hi Thomas,\n\nOn 11/29/19 9:46 PM, Thomas Munro wrote:\n> \n> Ok. Here is a first attempt at that.\n\nIt's been a few CFs since this patch received an update, though there \nhas been plenty of discussion.\n\nPerhaps it would be best to mark it RwF until you have a chance to \nproduce an update patch?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 1 Apr 2020 11:43:45 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On 2020-Jan-29, Michael Paquier wrote:\n\n> On Tue, Jan 28, 2020 at 03:51:54PM -0500, Robert Haas wrote:\n> > I quickly reread that thread and I don't see that there's any firm\n> > consensus there in favor of \"read %d of %zu\" over \"read only %d of %zu\n> > bytes\". Now, if most people prefer the former, so be it, but I don't\n> > think that's clear from that thread.\n> \n> The argument of consistency falls in favor of the former on HEAD:\n> $ git grep \"could not read\" | grep \"read %d of %zu\" | wc -l\n> 59\n> $ git grep \"could not read\" | grep \"read only %d of %zu\" | wc -l\n> 0\n\nIn the discussion that led to 811b6e36a9e2 I did suggest to use \"read\nonly M of N\" instead, but there wasn't enough discussion on that fine\npoint so we settled on what you now call prevalent without a lot of\nsupport specifically on that. I guess it was enough of an improvement\nover what was there. But like Robert, I too prefer the wording that\nincludes \"only\" and \"bytes\" over the wording that doesn't.\n\nI'll let it be known that from a translator's point of view, it's a\nten-seconds job to update a fuzzy string from not including \"only\" and\n\"bytes\" to one that does. So let's not make that an argument for not\nchanging.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 May 2020 11:59:59 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On 2020-Jan-27, Robert Haas wrote:\n\n> OK, now that I've waxed eloquent on that topic, let me have a go at\n> your actual questions. Regarding back-patching, I don't mind\n> back-patching error handling patches like this, but I don't feel it's\n> necessary if we have no evidence that data is actually getting\n> corrupted as a result of the problem and the chances of it actually\n> happening seems remote.\n\nI do have evidence of postgres crashes because of a problem that could\nbe explained by this bug, so I +1 backpatching this to all supported\nbranches.\n\n(The problem I saw is a hash-join spilling data to temp tablespace,\nwhich fills up but somehow goes undetected, then when reading the data\nback it causes heap_fill_tuple to crash.)\n\nThomas, if you're no longer interested in seeing this done, please let\nme know and I can see to it.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 May 2020 12:16:07 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On Wed, May 27, 2020 at 12:16 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> I do have evidence of postgres crashes because of a problem that could\n> be explained by this bug, so I +1 backpatching this to all supported\n> branches.\n>\n> (The problem I saw is a hash-join spilling data to temp tablespace,\n> which fills up but somehow goes undetected, then when reading the data\n> back it causes heap_fill_tuple to crash.)\n\nFWIW, that seems like a plenty good enough reason for back-patching to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 27 May 2020 12:29:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On Thu, May 28, 2020 at 4:16 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Jan-27, Robert Haas wrote:\n> > OK, now that I've waxed eloquent on that topic, let me have a go at\n> > your actual questions. Regarding back-patching, I don't mind\n> > back-patching error handling patches like this, but I don't feel it's\n> > necessary if we have no evidence that data is actually getting\n> > corrupted as a result of the problem and the chances of it actually\n> > happening seems remote.\n>\n> I do have evidence of postgres crashes because of a problem that could\n> be explained by this bug, so I +1 backpatching this to all supported\n> branches.\n>\n> (The problem I saw is a hash-join spilling data to temp tablespace,\n> which fills up but somehow goes undetected, then when reading the data\n> back it causes heap_fill_tuple to crash.)\n\nOoh.\n\n> Thomas, if you're no longer interested in seeing this done, please let\n> me know and I can see to it.\n\nMy indecision on the back-patching question has been resolved by your\ncrash report and a search on codesearch.debian.org. BufFileRead() and\nBufFileWrite() aren't referenced by any of the extensions they\npackage, so by that standard it's OK to change this stuff in back\nbranches. I'll post a rebased a patch with Robert and Ibrar's changes\nfor last reviews later today.\n\n\n",
"msg_date": "Thu, 28 May 2020 09:58:43 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On 2020-May-28, Thomas Munro wrote:\n\n> My indecision on the back-patching question has been resolved by your\n> crash report and a search on codesearch.debian.org.\n\nGreat news!\n\n> BufFileRead() and BufFileWrite() aren't referenced by any of the\n> extensions they package, so by that standard it's OK to change this\n> stuff in back branches.\n\nThis makes me a bit uncomfortable. For example,\nhttps://inst.eecs.berkeley.edu/~cs186/fa03/hwk5/assign5.html (admittedly\na very old class) encourages students to use this API to create an\naggregate. It might not be the smartest thing in the world, but I'd\nprefer not to break such things if they exist proprietarily. Can we\nkeep the API unchanged in stable branches and just ereport the errors?\n\n> I'll post a rebased a patch with Robert and Ibrar's changes\n> for last reviews later today.\n\n... walks away wondering about BufFileSeekBlock's API ...\n\n(BufFileSeek seems harder to change, due to tuplestore.c)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 May 2020 18:58:04 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On Wed, May 27, 2020 at 11:59:59AM -0400, Alvaro Herrera wrote:\n> In the discussion that led to 811b6e36a9e2 I did suggest to use \"read\n> only M of N\" instead, but there wasn't enough discussion on that fine\n> point so we settled on what you now call prevalent without a lot of\n> support specifically on that. I guess it was enough of an improvement\n> over what was there. But like Robert, I too prefer the wording that\n> includes \"only\" and \"bytes\" over the wording that doesn't.\n> \n> I'll let it be known that from a translator's point of view, it's a\n> ten-seconds job to update a fuzzy string from not including \"only\" and\n> \"bytes\" to one that does. So let's not make that an argument for not\n> changing.\n\nUsing \"only\" would be fine by me, though I tend to prefer the existing\none. Now I think that we should avoid \"bytes\" to not have to worry\nabout pluralization of error messages. This has been a concern in the\npast (see e5d11b9 and the likes).\n--\nMichael",
"msg_date": "Thu, 28 May 2020 16:10:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On Thu, May 28, 2020 at 7:10 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, May 27, 2020 at 11:59:59AM -0400, Alvaro Herrera wrote:\n> > In the discussion that led to 811b6e36a9e2 I did suggest to use \"read\n> > only M of N\" instead, but there wasn't enough discussion on that fine\n> > point so we settled on what you now call prevalent without a lot of\n> > support specifically on that. I guess it was enough of an improvement\n> > over what was there. But like Robert, I too prefer the wording that\n> > includes \"only\" and \"bytes\" over the wording that doesn't.\n> >\n> > I'll let it be known that from a translator's point of view, it's a\n> > ten-seconds job to update a fuzzy string from not including \"only\" and\n> > \"bytes\" to one that does. So let's not make that an argument for not\n> > changing.\n>\n> Using \"only\" would be fine by me, though I tend to prefer the existing\n> one. Now I think that we should avoid \"bytes\" to not have to worry\n> about pluralization of error messages. This has been a concern in the\n> past (see e5d11b9 and the likes).\n\nSorry for the delay in producing a new patch. Here's one that tries\nto take into account the various feedback in this thread:\n\nIbrar said:\n> You are checking file->dirty twice, first before calling the function\n> and within the function too. Same for the Assert.\n\nFixed.\n\nRobert, Melanie, Michael and Alvaro put forward various arguments\nabout the form of \"short read\" and block seek error message. While\nremoving bogus cases of \"%m\", I changed them all to say \"read only %zu\nof %zu bytes\" in the same place. I removed the bogus \"%m\" from\nBufFileSeek() and BufFileSeekBlock() callers (its call to\nBufFileFlush() already throws). 
I added block numbers to the error\nmessages about failure to seek by block.\n\nFollowing existing practice, I made write failure assume ENOSPC if\nerrno is 0, so that %m would make sense (I am not sure what platform\nreports out-of-space that way, but there are certainly a lot of copies\nof that code in the tree so it seemed to be missing here).\n\nI didn't change BufFileWrite() to be void, to be friendly to existing\ncallers outside the tree (if there are any), though I removed all the\ncode that checks the return code. We can make it void later.\n\nFor the future: it feels a bit like we're missing a one line way to\nsay \"read this many bytes and error out if you run out\".",
"msg_date": "Fri, 5 Jun 2020 18:03:59 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On Fri, Jun 05, 2020 at 06:03:59PM +1200, Thomas Munro wrote:\n> I didn't change BufFileWrite() to be void, to be friendly to existing\n> callers outside the tree (if there are any), though I removed all the\n> code that checks the return code. We can make it void later.\n\nMissing one entry in AppendStringToManifest(). It sounds right to not\nchange the signature of the routine on back-branches to any ABI\nbreakages. It think that it could change on HEAD.\n\nAnyway, why are we sure that it is fine to not complain even if\nBufFileWrite() does a partial write? fwrite() is mentioned at the top\nof the thread, but why is that OK?\n\n> For the future: it feels a bit like we're missing a one line way to\n> say \"read this many bytes and error out if you run out\".\n\n- ereport(ERROR,\n- (errcode_for_file_access(),\n- errmsg(\"could not write block %ld of temporary file:\n- %m\",\n- blknum)));\n- }\n+ elog(ERROR, \"could not seek block %ld temporary file\", blknum);\n\nYou mean \"in temporary file\" in the new message, no?\n\n+ ereport(ERROR,\n+ (errcode_for_file_access(),\n+ errmsg(\"could not write to \\\"%s\\\" : %m\",\n+ FilePathName(thisfile))));\n\nNit: \"could not write [to] file \\\"%s\\\": %m\" is a more common error\nstring.\n--\nMichael",
"msg_date": "Fri, 5 Jun 2020 17:14:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On Fri, Jun 5, 2020 at 8:14 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Jun 05, 2020 at 06:03:59PM +1200, Thomas Munro wrote:\n> > I didn't change BufFileWrite() to be void, to be friendly to existing\n> > callers outside the tree (if there are any), though I removed all the\n> > code that checks the return code. We can make it void later.\n>\n> Missing one entry in AppendStringToManifest().\n\nFixed.\n\n> Anyway, why are we sure that it is fine to not complain even if\n> BufFileWrite() does a partial write? fwrite() is mentioned at the top\n> of the thread, but why is that OK?\n\nIt's not OK. If any system call fails, we'll now ereport()\nimmediately. Now there can't be unhandled or unreported errors, and\nit's no longer possible for the caller to confuse EOF with errors.\nHence the change in descriptions:\n\n- * Like fread() except we assume 1-byte element size.\n+ * Like fread() except we assume 1-byte element size and report I/O errors via\n+ * ereport().\n\n- * Like fwrite() except we assume 1-byte element size.\n+ * Like fwrite() except we assume 1-byte element size and report errors via\n+ * ereport().\n\nStepping back a bit, one of the problems here is that we tried to\nmodel the functions on <stdio.h> fread(), but we didn't provide the\ncompanion ferror() and feof() functions, and then we were a bit fuzzy\non when errno is set, even though the <stdio.h> functions don't\ndocument that. There were various ways forward, but the one that this\npatch follows is to switch to our regular error reporting system. The\nonly thing that really costs us is marginally more vague error\nmessages. 
Perhaps that could eventually be fixed by passing in some\nmore context into calls like BufFileCreateTemp(), for use in error\nmessages and perhaps also path names.\n\nDoes this make sense?\n\n> + elog(ERROR, \"could not seek block %ld temporary file\", blknum);\n>\n> You mean \"in temporary file\" in the new message, no?\n\nFixed.\n\n> + ereport(ERROR,\n> + (errcode_for_file_access(),\n> + errmsg(\"could not write to \\\"%s\\\" : %m\",\n> + FilePathName(thisfile))));\n>\n> Nit: \"could not write [to] file \\\"%s\\\": %m\" is a more common error\n> string.\n\nFixed.",
"msg_date": "Mon, 8 Jun 2020 17:50:44 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BufFileRead() error signalling"
},
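The fread()-modelled ambiguity Thomas describes can be illustrated with a small stand-alone C sketch (illustrative only, not PostgreSQL's BufFile code): a short return from fread() says nothing by itself, and only feof()/ferror() disambiguate a clean end-of-file from a real I/O error, which is the gap the patch closes by making BufFileRead() report errors via ereport() itself.

```c
#include <stdio.h>

/* Status of an exact-length read attempt. */
typedef enum { READ_OK, READ_EOF, READ_ERROR } ReadStatus;

/*
 * Read exactly 'len' bytes, distinguishing EOF from error.  Without
 * the feof()/ferror() calls, a caller seeing got < len cannot tell
 * the two cases apart -- the problem described in the thread.
 */
ReadStatus
read_exact(FILE *f, void *buf, size_t len)
{
	size_t		got = fread(buf, 1, len, f);

	if (got == len)
		return READ_OK;
	if (ferror(f))
		return READ_ERROR;		/* real I/O error: report it */
	return READ_EOF;			/* clean end-of-file, not an error */
}
```

A caller can then treat READ_ERROR as a hard failure and READ_EOF as a normal termination condition, which is exactly the distinction the old int-returning interface blurred.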
{
"msg_contents": "On 2020-Jun-08, Thomas Munro wrote:\n\n> Stepping back a bit, one of the problems here is that we tried to\n> model the functions on <stdio.h> fread(), but we didn't provide the\n> companion ferror() and feof() functions, and then we were a bit fuzzy\n> on when errno is set, even though the <stdio.h> functions don't\n> document that. There were various ways forward, but the one that this\n> patch follows is to switch to our regular error reporting system. The\n> only thing that really costs us is marginally more vague error\n> messages. Perhaps that could eventually be fixed by passing in some\n> more context into calls like BufFileCreateTemp(), for use in error\n> messages and perhaps also path names.\n\nI think using our standard \"exception\" mechanism makes sense. As for\nadditional context, I think usefulness of the error messages would be\nimproved by showing the file path (because then user knows which\nfilesystem/tablespace was full, for example), but IMO any additional\ncontext on top of that is of marginal additional benefit. If we really\ncared, we could have errcontext() callbacks in the sites of interest,\nbut that would be a separate patch and perhaps not backpatchable.\n\n> > + elog(ERROR, \"could not seek block %ld temporary file\", blknum);\n> >\n> > You mean \"in temporary file\" in the new message, no?\n> \n> Fixed.\n\nThe wording we use is \"could not seek TO block N\". (Or used to use,\nbefore switching to pread/pwrite in most places, it seems).\n\nThanks!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Jun 2020 10:49:00 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 2:49 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I think using our standard \"exception\" mechanism makes sense. As for\n> additional context, I think usefulness of the error messages would be\n> improved by showing the file path (because then user knows which\n> filesystem/tablespace was full, for example), but IMO any additional\n> context on top of that is of marginal additional benefit. If we really\n> cared, we could have errcontext() callbacks in the sites of interest,\n> but that would be a separate patch and perhaps not backpatchable.\n\nCool. It does show the path, so that'll tell you which file system is\nfull or broken.\n\nI thought a bit about the ENOSPC thing, and took that change out.\nSince commit 1173344e we handle write() returning a positive number\nless than the full size by predicting that a follow-up call to write()\nwould surely return ENOSPC, without the hassle of trying to write\nmore, or having a separate error message sans %m. But\nBufFileDumpBuffer() does try again, and only raises an error if the\nsystem call returns < 0 (well, it says <= 0, but 0 is impossible\naccording to POSIX, at least assuming you didn't try to write zero\nbytes, and we already exclude that). So ENOSPC-prediction is\nunnecessary here.\n\n> The wording we use is \"could not seek TO block N\". (Or used to use,\n> before switching to pread/pwrite in most places, it seems).\n\nFixed in a couple of places.",
"msg_date": "Tue, 9 Jun 2020 12:21:53 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BufFileRead() error signalling"
},
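The retry behaviour described for BufFileDumpBuffer() boils down to a loop of this shape (a simplified stand-alone sketch, not the actual PostgreSQL function): a positive return smaller than the request is not an error by itself, so the remainder is simply written again, and only a negative return is treated as a failure -- which is why no ENOSPC prediction is needed on that path.

```c
#include <errno.h>
#include <unistd.h>

/*
 * Write all of 'len' bytes to 'fd', retrying partial writes.
 * Returns 0 on success, -1 on error (with errno set by write()).
 */
static int
write_all(int fd, const char *buf, size_t len)
{
	while (len > 0)
	{
		ssize_t		ret = write(fd, buf, len);

		if (ret < 0)
		{
			if (errno == EINTR)
				continue;		/* interrupted: retry the same chunk */
			return -1;			/* real error: caller reports errno */
		}
		/* partial write: advance and keep going */
		buf += ret;
		len -= (size_t) ret;
	}
	return 0;
}
```

If the file system fills up mid-loop, a later write() call returns -1 with errno = ENOSPC and the caller raises the error then, rather than guessing ENOSPC from a short count.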
{
"msg_contents": "On Mon, Jun 08, 2020 at 05:50:44PM +1200, Thomas Munro wrote:\n> On Fri, Jun 5, 2020 at 8:14 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Anyway, why are we sure that it is fine to not complain even if\n>> BufFileWrite() does a partial write? fwrite() is mentioned at the\n>> top\n>> of the thread, but why is that OK?\n>\n> It's not OK. If any system call fails, we'll now ereport()\n> immediately. Now there can't be unhandled or unreported errors, and\n> it's no longer possible for the caller to confuse EOF with errors.\n> Hence the change in descriptions:\n\nOh, OK. I looked at that again this morning and I see your point now.\nI was wondering if it could be possible to have BufFileWrite() write\nless data than what is expected with errno=0. The code of HEAD would\nissue a confusing error message like \"could not write: Success\" in\nsuch a case, still it would fail on ERROR. And I thought that your\npatch would do a different thing and would cause this code path to not\nfail in such a case, but the point I missed on the first read of your\npatch is that BufFileWrite() is written in such a way that we would\nactually just keep looping until the amount of data to write is\nwritten, meaning that we should never see anymore the case where\nBufFileWrite() returns a size that does not match with the expected\nsize to write.\n\nOn Tue, Jun 09, 2020 at 12:21:53PM +1200, Thomas Munro wrote:\n> On Tue, Jun 9, 2020 at 2:49 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> I think using our standard \"exception\" mechanism makes sense. As for\n>> additional context, I think usefulness of the error messages would be\n>> improved by showing the file path (because then user knows which\n>> filesystem/tablespace was full, for example), but IMO any additional\n>> context on top of that is of marginal additional benefit. 
If we really\n>> cared, we could have errcontext() callbacks in the sites of interest,\n>> but that would be a separate patch and perhaps not backpatchable.\n> \n> Cool. It does show the path, so that'll tell you which file system is\n> full or broken.\n\nThere are some places in logtape.c, *tuplestore.c and gist where there\nis no file path. That would be nice to have, but that's not really\nthe problem of this patch.\n\n> I thought a bit about the ENOSPC thing, and took that change out.\n> Since commit 1173344e we handle write() returning a positive number\n> less than the full size by predicting that a follow-up call to write()\n> would surely return ENOSPC, without the hassle of trying to write\n> more, or having a separate error message sans %m. But\n> BufFileDumpBuffer() does try again, and only raises an error if the\n> system call returns < 0 (well, it says <= 0, but 0 is impossible\n> according to POSIX, at least assuming you didn't try to write zero\n> bytes, and we already exclude that). So ENOSPC-prediction is\n> unnecessary here.\n\n+1. Makes sense.\n\n>> The wording we use is \"could not seek TO block N\". (Or used to use,\n>> before switching to pread/pwrite in most places, it seems).\n> \n> Fixed in a couple of places.\n\nLooks fine to me. Are you planning to send an extra patch to switch\nBufFileWrite() to void for 14~?\n--\nMichael",
"msg_date": "Tue, 9 Jun 2020 11:32:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 2:32 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Looks fine to me. Are you planning to send an extra patch to switch\n> BufFileWrite() to void for 14~?\n\nThanks! Pushed. I went ahead and made it void in master only.\n\n\n",
"msg_date": "Tue, 16 Jun 2020 17:41:31 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BufFileRead() error signalling"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 05:41:31PM +1200, Thomas Munro wrote:\n> Thanks! Pushed. I went ahead and made it void in master only.\n\nThanks.\n--\nMichael",
"msg_date": "Tue, 16 Jun 2020 16:28:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BufFileRead() error signalling"
}
] |
[
{
"msg_contents": "Hello.\n\nI happened to find that the commit 71dcd74 added the function\n\"network_sortsupport\" with OID = 8190. Is it right? Otherwise we\nshould move it to, say, 4035.\n\n(I understand that OID [8000, 9999] are development-use.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 21 Nov 2019 10:44:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "an OID >= 8000 in master"
},
{
"msg_contents": "On Thu, Nov 21, 2019 at 10:44:30AM +0900, Kyotaro Horiguchi wrote:\n> I happened to find that the commit 71dcd74 added the function\n> \"network_sortsupport\" with OID = 8190. Is it right? Otherwise we\n> should move it to, say, 4035.\n> \n> (I understand that OID [8000, 9999] are development-use.)\n\nYep, agreed. This looks like an oversight. Peter?\n--\nMichael",
"msg_date": "Thu, 21 Nov 2019 11:06:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: an OID >= 8000 in master"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 5:44 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> I happened to find that the commit 71dcd74 added the function\n> \"network_sortsupport\" with OID = 8190. Is it right? Otherwise we\n> should move it to, say, 4035.\n>\n> (I understand that OID [8000, 9999] are development-use.)\n\nI committed this patch using an OID in that range intentionally.\nCommit a6417078 established a new project policy around OID\nassignment. It will be renumbered at the end of the release cycle.\n\nThe unused_oids script will now actively suggest that patch authors\nuse a particular random OID from this range, so this will probably be\nvery common soon.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 20 Nov 2019 18:07:12 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: an OID >= 8000 in master"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 6:07 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Yep, agreed. This looks like an oversight. Peter?\n\nIt's not an oversight. See the commit message of a6417078, and the\nadditions that were made to the RELEASE_CHANGES file.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 20 Nov 2019 18:10:09 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: an OID >= 8000 in master"
},
{
"msg_contents": "At Wed, 20 Nov 2019 18:10:09 -0800, Peter Geoghegan <pg@bowt.ie> wrote in \n> On Wed, Nov 20, 2019 at 6:07 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > Yep, agreed. This looks like an oversight. Peter?\n> \n> It's not an oversight. See the commit message of a6417078, and the\n> additions that were made to the RELEASE_CHANGES file.\n\nMmm...\n\na6417078:\n\n> * After feature freeze in each development cycle, run renumber_oids.pl\n> to move all such OIDs down to lower numbers, thus freeing the high OID\n> range for the next development cycle.\n\nI thought that commits don't use the development OIDs and thought that\nwe won't have conflict perfectly.\n\nSo, still any ongoing patch can stamp on another when it is committed\nby certain probability (even if it's rather low). And consecutive\nhigh-OID \"hole\"s are going to be shortened and decrease through a year.\n\n\nBy the way even if we work this way, developers tend to pick up low\nrange OIDs since it is printed at the beginning of the output. I think\nwe should hide the whole list of unused oids defaultly and just\nsuggest random one.\n\n$ ./unused_oids\nSuggested random unused OID: 8057 (133 consecutive OID(s) available starting here)\nIf you need more OIDs, try running this script again or unused_oids -v\n(for example) to show the complete list of unused OIDs.\n\n$ ./unused_oids\nSuggested random unused OID: 8182 (8 consecutive OID(s) available starting here)\nIf you need more OIDs, try running this script again or unused_oids -v\n(for example) to show the complete list of unused OIDs.\n$ ./unused_oids -v\n4 - 9\n210\n270 - 273\n...\n8191\n8193 - 9999\nPatches should use a more-or-less consecutive range of OIDs.\nBest practice is to start with a random choice in the range 8000-9999.\nSuggested random unused OID: 8342 (1658 consecutive OID(s) available starting here)\n\nThoughts?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 21 Nov 2019 13:33:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: an OID >= 8000 in master"
},
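The "N consecutive OID(s) available starting here" part of the proposed output could be computed along these lines (a hypothetical C sketch; the real unused_oids is a Perl script that scans the catalog headers, and the names here are invented for illustration):

```c
#include <stdbool.h>
#include <stddef.h>

#define DEV_OID_MIN 8000
#define DEV_OID_MAX 9999

/* Linear membership test over the list of already-assigned OIDs. */
static bool
oid_is_used(const int *used, size_t n, int oid)
{
	for (size_t i = 0; i < n; i++)
		if (used[i] == oid)
			return true;
	return false;
}

/*
 * Length of the run of free OIDs starting at 'start', capped at the
 * top of the 8000-9999 development range.  A tool would pick a random
 * 'start' and report this run length alongside the suggestion.
 */
static int
free_run_length(const int *used, size_t n, int start)
{
	int			len = 0;

	while (start + len <= DEV_OID_MAX &&
		   !oid_is_used(used, n, start + len))
		len++;
	return len;
}
```

This is only the counting half of the idea; whether the tool prints the full unused list by default or only on request is the policy question debated in the thread.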
{
"msg_contents": "On Wed, Nov 20, 2019 at 8:33 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> So, still any ongoing patch can stamp on another when it is committed\n> by certain probability (even if it's rather low)). And consecutive\n> high-OID \"hole\"s are going to be shortened and decrease throgh a year.\n\nRight.\n\n> By the way even if we work this way, developers tend to pick up low\n> range OIDs since it is printed at the beginning of the output. I think\n> we should hide the whole list of unused oids defaultly and just\n> suggest random one.\n\nIt is still within the discretion of committers to use the\nnon-reserved/development OID ranges directly. For example, a committer\nmay prefer to use an OID that is close to the OIDs already used for a\nset of related objects, if the related objects are already in a stable\nrelease. (I'm not sure that it's really worth doing that, but that's\nwhat the policy is.)\n\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 20 Nov 2019 20:44:18 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: an OID >= 8000 in master"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Wed, 20 Nov 2019 18:10:09 -0800, Peter Geoghegan <pg@bowt.ie> wrote in \n>> On Wed, Nov 20, 2019 at 6:07 PM Michael Paquier <michael@paquier.xyz> wrote:\n>>> Yep, agreed. This looks like an oversight. Peter?\n\n>> It's not an oversight. See the commit message of a6417078, and the\n>> additions that were made to the RELEASE_CHANGES file.\n\nYes, the idea is that picking random OIDs in the 8000-9999 range is\nless likely to cause conflicts between patches than our old habits.\n\n> I thought that commits don't use the development OIDs and thought that\n> we won't have conflict perfectly.\n\nI do not think there is any easy solution that guarantees that.\nWe could imagine having some sort of pre-registration mechanism,\nmaybe, but it seems like more trouble than benefit.\n\n> By the way even if we work this way, developers tend to pick up low\n> range OIDs since it is printed at the beginning of the output. I think\n> we should hide the whole list of unused oids defaultly and just\n> suggest random one.\n\n-1, that pretty much destroys the point of the unused_oids script.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Nov 2019 23:45:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: an OID >= 8000 in master"
},
{
"msg_contents": "\nAt Wed, 20 Nov 2019 20:44:18 -0800, Peter Geoghegan <pg@bowt.ie> wrote in \n> It is still within the discretion of committers to use the\n> non-reserved/development OID ranges directly. For example, a committer\n\nThat happens at feature freeze. Understood.\n\nAt Wed, 20 Nov 2019 23:45:21 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > I thought that commits don't use the development OIDs and thought that\n> > we won't have conflict perfectly.\n> \n> I do not think there is any easy solution that guarantees that.\n> We could imagine having some sort of pre-registration mechanism,\n> maybe, but it seems like more trouble than benefit.\n\nIf we don't intend what Peter pointed (arrangement of low-OIDs at\nfeature freeze), it can be done by moving OIDs to lower values at\ncommit. (I don't mean commiters should do that, it may be bothersome.)\n\n> > By the way even if we work this way, developers tend to pick up low\n> > range OIDs since it is printed at the beginning of the output. I think\n> > we should hide the whole list of unused oids defaultly and just\n> > suggest random one.\n> \n> -1, that pretty much destroys the point of the unused_oids script.\n\nIs the \"point\" is what the name suggests? The tool is, for developers,\na means of finding OIDs *usable for their project*. It doesn't seem\nappropriate to show OIDs that developers are supposed to refrain from\nusing. In my proposal the tool still shows all unused OIDs as the name\nsuggests when some option specified.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 21 Nov 2019 15:02:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: an OID >= 8000 in master"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Wed, 20 Nov 2019 23:45:21 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>> I do not think there is any easy solution that guarantees that.\n>> We could imagine having some sort of pre-registration mechanism,\n>> maybe, but it seems like more trouble than benefit.\n\n> If we don't intend what Peter pointed (arrangement of low-OIDs at\n> feature freeze), it can be done by moving OIDs to lower values at\n> commit. (I don't mean commiters should do that, it may be bothersome.)\n\nYes, that's exactly the point: when we discussed this new policy,\nit was agreed that making committers deal with the issue in each\ncommit was an unreasonable burden. Aside from just being more\nwork, there's the risk that two committers working on different\npatches concurrently would choose to map the development OIDs to\nthe same \"final\" OIDs. It seems better to deal with the problem\nonce at feature freeze.\n\nAnyway, we've only had this policy in place for a few months.\nI'm not eager to redesign it until we've had more experience.\n\n>>> By the way even if we work this way, developers tend to pick up low\n>>> range OIDs since it is printed at the beginning of the output. I think\n>>> we should hide the whole list of unused oids defaultly and just\n>>> suggest random one.\n\n>> -1, that pretty much destroys the point of the unused_oids script.\n\n> Is the \"point\" is what the name suggests? The tool is, for developers,\n> a means of finding OIDs *usable for their project*. It doesn't seem\n> appropriate to show OIDs that developers are supposed to refrain from\n> using. In my proposal the tool still shows all unused OIDs as the name\n> suggests when some option specified.\n\nThe existing output seems perfectly clear to me. What you propose\njust adds complication and reduces usefulness.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Nov 2019 10:35:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: an OID >= 8000 in master"
},
{
"msg_contents": "At Thu, 21 Nov 2019 10:35:25 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > If we don't intend what Peter pointed (arrangement of low-OIDs at\n> > feature freeze), it can be done by moving OIDs to lower values at\n> > commit. (I don't mean commiters should do that, it may be bothersome.)\n> \n> Yes, that's exactly the point: when we discussed this new policy,\n> it was agreed that making committers deal with the issue in each\n> commit was an unreasonable burden. Aside from just being more\n> work, there's the risk that two committers working on different\n> patches concurrently would choose to map the development OIDs to\n> the same \"final\" OIDs. It seems better to deal with the problem\n> once at feature freeze.\n> \n> Anyway, we've only had this policy in place for a few months.\n> I'm not eager to redesign it until we've had more experience.\n\nThanks for the explanation. I understood and agreed.\n\n> >>> By the way even if we work this way, developers tend to pick up low\n> >>> range OIDs since it is printed at the beginning of the output. I think\n> >>> we should hide the whole list of unused oids defaultly and just\n> >>> suggest random one.\n> \n> >> -1, that pretty much destroys the point of the unused_oids script.\n> \n> > Is the \"point\" is what the name suggests? The tool is, for developers,\n> > a means of finding OIDs *usable for their project*. It doesn't seem\n> > appropriate to show OIDs that developers are supposed to refrain from\n> > using. In my proposal the tool still shows all unused OIDs as the name\n> > suggests when some option specified.\n> \n> The existing output seems perfectly clear to me. What you propose\n> just adds complication and reduces usefulness.\n\nFor clarity, it's perfect also to me. I don't insist on the change\nsince no supporters come up.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 22 Nov 2019 13:47:39 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: an OID >= 8000 in master"
}
] |
[
{
"msg_contents": "Hi all,\n\nAfter working on dc816e58, I have noticed that what we are doing with\nattribute mappings is not that good. In a couple of code paths of the\nrewriter, the executor, and more particularly ALTER TABLE, when\nworking on the creation of inherited relations or partitions an\nattribute mapping gets used to make sure that the new cloned elements\n(indexes, fk, etc.) have correct definitions linked correctly from the\nparent to the child's attributes.\n\nSometimes things can go wrong, because the attribute array is just an\nAttrNumber pointer and it is up to the caller building the map to\nguess which length it has. Existing callers do that fine, but this\ncan lead to errors as recent history has proved.\n\nAttached is a patch to refactor all that which simply adds the\nattribute mapping length directly with the attribute list. The nice\neffect of the refactoring is that now callers willing to get attribute\nmaps don't need to think about which length it should have, and this\nallows to perform checks on the expected number of attributes in the\nmap particularly in the executor part. A couple of structures also\nhave their logic simplified.\n\nOn top of that, I have spotted two fishy attribute mapping calculations\nin addFkRecurseReferencing() when adding a foreign key for partitions\nwhen there are dropped columns and in CloneFkReferencing(). The\nmapping was using the number of attributes from the foreign key, which\ncan be lower than the mapping of the parent if there are dropped\ncolumns in-between. I am pretty sure that if some attributes of the\nparent are dropped (aka mapping set to 0 in the middle of its array\nthen we could finish with an incorrect attribute mapping, and I\nsuspect that this could lead to failures similar to what was fixed in\ndc816e58, but I have not spent much time yet into that part.\n\nI'll add this patch to the next CF for review. The patch compiles and\npasses all regression tests.\n\nThanks,\n--\nMichael",
"msg_date": "Thu, 21 Nov 2019 13:25:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Rework manipulation and structure of attribute mappings"
},
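The structure change Michael describes -- carrying the mapping length together with the attribute array so callers never have to guess it -- can be modelled with a minimal stand-alone sketch like this (field and function names are assumptions for illustration, not necessarily the committed definitions, and plain malloc stands in for palloc):

```c
#include <stdint.h>
#include <stdlib.h>

typedef int16_t AttrNumber;

/*
 * Map from one relation's attribute numbers to another's.
 * attnums[i] is the source attribute for destination attribute i + 1;
 * 0 marks a dropped column.  maplen travels with the array, so
 * consumers can sanity-check the expected number of attributes.
 */
typedef struct AttrMap
{
	AttrNumber *attnums;
	int			maplen;
} AttrMap;

/* Allocate a zero-initialized map of the given length. */
static AttrMap *
make_attrmap(int maplen)
{
	AttrMap    *map = calloc(1, sizeof(AttrMap));

	map->maplen = maplen;
	map->attnums = calloc((size_t) maplen, sizeof(AttrNumber));
	return map;
}

static void
free_attrmap(AttrMap *map)
{
	free(map->attnums);
	free(map);
}
```

With the length embedded, code that walks a map can assert `i < map->maplen` instead of trusting the caller to have sized a bare AttrNumber array correctly, which is precisely the class of bug behind dc816e58.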
{
"msg_contents": "Hi Michael,\n\nThanks for working on this. I guess this is a follow up to:\nhttps://www.postgresql.org/message-id/20191102052001.GB1614%40paquier.xyz\n\nOn Thu, Nov 21, 2019 at 1:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi all,\n>\n> After working on dc816e58, I have noticed that what we are doing with\n> attribute mappings is not that good. In a couple of code paths of the\n> rewriter, the executor, and more particularly ALTER TABLE, when\n> working on the creation of inherited relations or partitions an\n> attribute mapping gets used to make sure that the new cloned elements\n> (indexes, fk, etc.) have correct definitions linked correctly from the\n> parent to the child's attributes.\n>\n> Sometimes things can go wrong, because the attribute array is just an\n> AttrNumber pointer and it is up to the caller building the map to\n> guess which length it has. Existing callers do that fine, but this\n> can lead to errors as recent history has proved.\n>\n> Attached is a patch to refactor all that which simply adds the\n> attribute mapping length directly with the attribute list. The nice\n> effect of the refactoring is that now callers willing to get attribute\n> maps don't need to think about which length it should have, and this\n> allows to perform checks on the expected number of attributes in the\n> map particularly in the executor part. A couple of structures also\n> have their logic simplified.\n\nThe refactoring to use AttrMap looks good, though attmap.c as a\nseparate module contains too little functionality (just palloc and\npfree) to be a separate file, IMHO. If we are to build a separate\nmodule, why not move a bit more functionality into it from\ntupconvert.c. How about move all the code that actually creates the\nmap to attmap.c? 
The entry points would be all the\nconvert_tuples_by_name_map() and convert_tuples_by_name_map_if_req()\nfunctions that return AttrMap, rather than simply make_attrmap(int\nlen) which can be a static routine. Actually, we should also refactor\nconvert_tuples_by_position() to carve out the code that builds the\nAttrMap into a separate function and move it to attrmap.c.\n\nTo be honest, \"convert_tuples_\" part in those names might start\nsounding a bit outdated in the future, so we should really consider\ninventing a new interface map_attributes(TupleDesc indesc, TupleDesc\noutdesc), because most call sites that fetch the AttrMap directly\ndon't really use it for \"converting tuples\", but to convert\nexpressions or to map key arrays.\n\nAfter all the movement, tupconvert.c will only retain the\nfunctionality to build a TupleConversionMap (wrapping the AttrMap) and\nto convert HeapTuples, that is, execute_attr_map_tuple() and\nexecute_attr_map_slot(), which makes sense.\n\nThoughts?\n\n> On top of that, I have spotted two fishy attribute mapping calculations\n> in addFkRecurseReferencing() when adding a foreign key for partitions\n> when there are dropped columns and in CloneFkReferencing(). The\n> mapping was using the number of attributes from the foreign key, which\n> can be lower than the mapping of the parent if there are dropped\n> columns in-between. 
I am pretty sure that if some attributes of the\n> parent are dropped (aka mapping set to 0 in the middle of its array\n> then we could finish with an incorrect attribute mapping, and I\n> suspect that this could lead to failures similar to what was fixed in\n> dc816e58, but I have not spent much time yet into that part.\n\nActually, the patch can make addFkRecurseReferencing() crash, because\nthe fkattnum[] array doesn't really contain attmap->maplen elements:\n\n- for (int j = 0; j < numfks; j++)\n- mapped_fkattnum[j] = attmap[fkattnum[j] - 1];\n+ for (int j = 0; j < attmap->maplen; j++)\n+ mapped_fkattnum[j] = attmap->attnums[fkattnum[j] - 1];\n\nYou failed to notice that j is really used as index into fkattnum[],\nnot the map array returned by convert_tuples_by_name(). So, I think\nthe original coding is fine here.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 22 Nov 2019 14:21:41 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework manipulation and structure of attribute mappings"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 02:21:41PM +0900, Amit Langote wrote:\n> Thanks for working on this. I guess this is a follow up to:\n> https://www.postgresql.org/message-id/20191102052001.GB1614%40paquier.xyz\n\nExactly. I got that in my mind for a couple of days, so I gave it a\nshot and the result was kind of nice. And here we are.\n\n> On Thu, Nov 21, 2019 at 1:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n> The refactoring to use AttrMap looks good, though attmap.c as a\n> separate module contains too little functionality (just palloc and\n> pfree) to be a separate file, IMHO. If we are to build a separate\n> module, why not move a bit more functionality into it from\n> tupconvert.c. How about move all the code that actually creates the\n> map to attmap.c? The entry points would be all the\n> convert_tuples_by_name_map() and convert_tuples_by_name_map_if_req()\n> functions that return AttrMap, rather than simply make_attrmap(int\n> len) which can be a static routine.\n\nYeah, the current part is a little bit shy about that. Moving\nconvert_tuples_by_name_map() and the second one to attmap.c makes\nsense.\n\n> Actually, we should also refactor\n> convert_tuples_by_position() to carve out the code that builds the\n> AttrMap into a separate function and move it to attmap.c.\n\nNot sure how to name that. One logic uses a match based on the\nattribute name, and the other uses the type. So the one based on the\nattribute name should be something like build_attrmap_by_name() and\nthe second attrmap_build_by_position()? We could use a better\nconvention like AttrMapBuildByPosition for example. Any suggestions\nof names are welcome. 
Please note that I still have a commit fest to \nrun and finish, so I'll unlikely come back to that before December.\nLet's continue with the tuning of the function names though.\n\n> To be honest, \"convert_tuples_\" part in those names might start\n> sounding a bit outdated in the future, so we should really consider\n> inventing a new interface map_attributes(TupleDesc indesc, TupleDesc\n> outdesc), because most call sites that fetch the AttrMap directly\n> don't really use it for \"converting tuples\", but to convert\n> expressions or to map key arrays.\n>\n> After all the movement, tupconvert.c will only retain the\n> functionality to build a TupleConversionMap (wrapping the AttrMap) and\n> to convert HeapTuples, that is, execute_attr_map_tuple() and\n> execute_attr_map_slot(), which makes sense.\n\nAgreed. Let's design that carefully.\n\n> Actually, the patch can make addFkRecurseReferencing() crash, because\n> the fkattnum[] array doesn't really contain attmap->maplen elements:\n> \n> - for (int j = 0; j < numfks; j++)\n> - mapped_fkattnum[j] = attmap[fkattnum[j] - 1];\n> + for (int j = 0; j < attmap->maplen; j++)\n> + mapped_fkattnum[j] = attmap->attnums[fkattnum[j] - 1];\n> \n> You failed to notice that j is really used as index into fkattnum[],\n> not the map array returned by convert_tuples_by_name(). So, I think\n> the original coding is fine here.\n\nOuch, yes. The regression tests did not complain on this one. It\nmeans that we could improve the coverage. The second, though... I\nneed to check it more closely.\n--\nMichael",
"msg_date": "Fri, 22 Nov 2019 16:57:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Rework manipulation and structure of attribute mappings"
},
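The duplicated tail of the two `_if_req()` builders mentioned above amounts to an identity check: if the freshly built map sends every attribute to itself, no conversion is required and NULL can be returned instead. A sketch of such a shared helper (the helper's name and the AttrMap layout here are assumptions, not the committed code):

```c
#include <stdbool.h>
#include <stdint.h>

typedef int16_t AttrNumber;

typedef struct AttrMap
{
	AttrNumber *attnums;
	int			maplen;
} AttrMap;

/*
 * True when the map sends every attribute to itself, i.e. the tuple
 * descriptors already line up and no conversion is actually needed.
 * The *_if_req() builder variants can then free the map and return
 * NULL, instead of each open-coding this loop.
 */
static bool
attrmap_is_identity(const AttrMap *map)
{
	for (int i = 0; i < map->maplen; i++)
	{
		if (map->attnums[i] != (AttrNumber) (i + 1))
			return false;
	}
	return true;
}
```

Factoring this into one static routine in attmap.c keeps the by-name and by-position builders symmetric, which is the simplification discussed in the message above.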
{
"msg_contents": "On Fri, Nov 22, 2019 at 4:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Nov 22, 2019 at 02:21:41PM +0900, Amit Langote wrote:\n> > Actually, we should also refactor\n> > convert_tuples_by_position() to carve out the code that builds the\n> > AttrMap into a separate function and move it to attmap.c.\n>\n> Not sure how to name that. One logic uses a match based on the\n> attribute name, and the other uses the type. So the one based on the\n> attribute name should be something like build_attrmap_by_name() and\n> the second attrmap_build_by_position()? We could use a better\n> convention like AttrMapBuildByPosition for example. Any suggestions\n> of names are welcome.\n\nActually, I was just suggesting that we create a new function\nconvert_tuples_by_position_map() and put the logic that compares the\ntwo TupleDescs to create the AttrMap in it, just like\nconvert_tuples_by_name_map(). Now you could say that there would be\nno point in having such a function, because no caller currently wants\nto use such a map (that is, without the accompanying\nTupleConversionMap), but maybe there will be in the future. Though\nirrespective of that consideration, I guess you'd agree to group\nsimilar code in a single source file.\n\nRegarding coming up with any new name, having a word in the name that\ndistinguishes the aspect of mapping by attribute name vs. type\n(position) should suffice. We can always do the renaming in a\nseparate patch.\n\n> Please note that I still have a commit fest to\n> run and finish, so I'll unlikely come back to that before December.\n> Let's continue with the tuning of the function names though.\n\nAs it's mainly just moving around code, I gave it a shot; patch\nattached (applies on top of yours). I haven't invented any new names\nyet, but let's keep discussing that as you say.\n\nThanks,\nAmit",
"msg_date": "Mon, 25 Nov 2019 17:55:50 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework manipulation and structure of attribute mappings"
},
{
"msg_contents": "On Mon, Nov 25, 2019 at 05:55:50PM +0900, Amit Langote wrote:\n> Actually, I was just suggesting that we create a new function\n> convert_tuples_by_position_map() and put the logic that compares the\n> two TupleDescs to create the AttrMap in it, just like\n> convert_tuples_by_name_map(). Now you could say that there would be\n> no point in having such a function, because no caller currently wants\n> to use such a map (that is, without the accompanying\n> TupleConversionMap), but maybe there will be in the future. Though\n> irrespective of that consideration, I guess you'd agree to group\n> similar code in a single source file.\n\nHmm. I would rather keep the attribute map generation and the tuple\nconversion part, the latter depending on the former, into two\ndifferent files. That's what I did in the v2 attached.\n\n> As it's mainly just moving around code, I gave it a shot; patch\n> attached (applies on top of yours). I haven't invented any new names\n> yet, but let's keep discussing that as you say.\n\nI see. That saved me some time, thanks. It is not really intuitive\nto name routines about tuple conversion from tupconvert.c to\nattrmap.c, so I'd think about renaming those routines as well, like\nbuild_attrmap_by_name/position instead. That's more consistent with\nthe rest I added.\n\nAnother thing is that we have duplicated code at the end of\nbuild_attrmap_by_name_if_req and build_attrmap_by_position, which I\nthink would be better refactored as a static function part of\nattmap.c. This way the if_req() flavor gets much simpler.\n\nI have also fixed the issue with the FK mapping in\naddFkRecurseReferencing() you reported previously.\n\nWhat do you think about that? I would like to think that we are\ngetting at something rather committable here, though I feel that I\nneed to review more the comments around the new files and if we could\ndocument better AttrMap and its properties.\n--\nMichael",
"msg_date": "Wed, 4 Dec 2019 17:03:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Rework manipulation and structure of attribute mappings"
},
{
"msg_contents": "Thanks for the updated patch.\n\nOn Wed, Dec 4, 2019 at 5:03 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Nov 25, 2019 at 05:55:50PM +0900, Amit Langote wrote:\n> > Actually, I was just suggesting that we create a new function\n> > convert_tuples_by_position_map() and put the logic that compares the\n> > two TupleDescs to create the AttrMap in it, just like\n> > convert_tuples_by_name_map(). Now you could say that there would be\n> > no point in having such a function, because no caller currently wants\n> > to use such a map (that is, without the accompanying\n> > TupleConversionMap), but maybe there will be in the future. Though\n> > irrespective of that consideration, I guess you'd agree to group\n> > similar code in a single source file.\n>\n> Hmm. I would rather keep the attribute map generation and the tuple\n> conversion part, the latter depending on the former, into two\n> different files. That's what I did in the v2 attached.\n\nCheck.\n\n> > As it's mainly just moving around code, I gave it a shot; patch\n> > attached (applies on top of yours). I haven't invented any new names\n> > yet, but let's keep discussing that as you say.\n>\n> I see. That saved me some time, thanks. It is not really intuitive\n> to name routines about tuple conversion from tupconvert.c to\n> attrmap.c, so I'd think about renaming those routines as well, like\n> build_attrmap_by_name/position instead. That's more consistent with\n> the rest I added.\n\nSorry I don't understand this. Do you mean we should rename the\nroutines left behind in tupconvert.c? For example,\nconvert_tuples_by_name() doesn't really \"convert\" tuples, only builds\na map needed to do so. Maybe build_tuple_conversion_map_by_name()\nwould be a more suitable name.\n\n> Another thing is that we have duplicated code at the end of\n> build_attrmap_by_name_if_req and build_attrmap_by_position, which I\n> think would be better refactored as a static function part of\n> attmap.c. This way the if_req() flavor gets much simpler.\n\nNeat.\n\n> I have also fixed the issue with the FK mapping in\n> addFkRecurseReferencing() you reported previously.\n\nCheck.\n\n> What do you think about that? I would like to think that we are\n> getting at something rather committable here, though I feel that I\n> need to review more the comments around the new files and if we could\n> document better AttrMap and its properties.\n\nRegarding that, comment on a comment added by the patch:\n\n+ * Attribute mapping structure\n+ *\n+ * An attribute mapping tracks the relationship of a child relation and\n+ * its parent for inheritance and partitions. This is used mainly for\n+ * cloned object creations (indexes, foreign keys, etc.) when creating\n+ * an inherited child relation, and for runtime-execution attribute\n+ * mapping.\n+ *\n+ * Dropped attributes are marked with 0 and the length of the map is set\n+ * to be the number of attributes of the parent, which takes into account\n+ * its dropped attributes.\n\nMaybe we don't need to repeat here what AttrMap is used for (it's\nalready written in attmap.c), only write what it is and why it's\nneeded in the first place. Maybe like this:\n\n/*\n * Attribute mapping structure\n *\n * This maps attribute numbers between a pair of relations, designated 'input'\n * and 'output' (most typically inheritance parent and child relations), whose\n * common columns have different attribute numbers. Such difference may arise\n * due to the columns being ordered differently in the two relations or the\n * two relations having dropped columns at different positions.\n *\n * 'maplen' is set to the number of attributes of the 'output' relation,\n * taking into account any of its dropped attributes, with the corresponding\n * elements of the 'attnums' array set to zero.\n */\n\nThanks,\nAmit",
"msg_date": "Fri, 6 Dec 2019 18:03:12 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework manipulation and structure of attribute mappings"
},
{
"msg_contents": "On Fri, Dec 06, 2019 at 06:03:12PM +0900, Amit Langote wrote:\n> On Wed, Dec 4, 2019 at 5:03 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> I see. That saved me some time, thanks. It is not really intuitive\n>> to name routines about tuple conversion from tupconvert.c to\n>> attrmap.c, so I'd think about renaming those routines as well, like\n>> build_attrmap_by_name/position instead. That's more consistent with\n>> the rest I added.\n> \n> Sorry I don't understand this. Do you mean we should rename the\n> routines left behind in tupconvert.c? For example,\n> convert_tuples_by_name() doesn't really \"convert\" tuples, only builds\n> a map needed to do so. Maybe build_tuple_conversion_map_by_name()\n> would be a more suitable name.\n\nI had no plans to touch this area nor to rename this layer because\nthat was a bit out of the original scope of this patch which is to\nremove the confusion and random bets with map lengths. I see your\npoint though and actually a name like what you are suggesting reflects\nbetter what the routine does in reality. :p\n\n> Maybe we don't need to repeat here what AttrMap is used for (it's\n> already written in attmap.c), only write what it is and why it's\n> needed in the first place. Maybe like this:\n> \n> /*\n> * Attribute mapping structure\n> *\n> * This maps attribute numbers between a pair of relations, designated 'input'\n> * and 'output' (most typically inheritance parent and child relations), whose\n> * common columns have different attribute numbers. Such difference may arise\n> * due to the columns being ordered differently in the two relations or the\n> * two relations having dropped columns at different positions.\n> *\n> * 'maplen' is set to the number of attributes of the 'output' relation,\n> * taking into account any of its dropped attributes, with the corresponding\n> * elements of the 'attnums' array set to zero.\n> */\n\nThat sounds better, thanks.\n\nWhile on it, I have also spent some time checking after the FK-related\npoints that I suspected as fishy at the beginning of the thread but I\nhave not been able to break it. We also have coverage for problems\nrelated to dropped columns in foreign_key.sql (grep for fdrop1), which\nis more than enough. There was actually one extra issue in the patch\nas of CloneFkReferencing() when filling in mapped_conkey based on the\nnumber of keys in the constraint.\n\nSo, a couple of hours after looking at the code I am finishing with\nthe updated and indented version attached. What do you think?\n--\nMichael",
"msg_date": "Mon, 9 Dec 2019 11:56:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Rework manipulation and structure of attribute mappings"
},
{
"msg_contents": "Hi Michael,\n\nOn Mon, Dec 9, 2019 at 11:57 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Dec 06, 2019 at 06:03:12PM +0900, Amit Langote wrote:\n> > Sorry I don't understand this. Do you mean we should rename the\n> > routines left behind in tupconvert.c? For example,\n> > convert_tuples_by_name() doesn't really \"convert\" tuples, only builds\n> > a map needed to do so. Maybe build_tuple_conversion_map_by_name()\n> > would be a more suitable name.\n>\n> I had no plans to touch this area nor to rename this layer because\n> that was a bit out of the original scope of this patch which is to\n> remove the confusion and random bets with map lengths. I see your\n> point though and actually a name like what you are suggesting reflects\n> better what the routine does in reality. :p\n\nMaybe another day. :)\n\n> So, a couple of hours after looking at the code I am finishing with\n> the updated and indented version attached. What do you think?\n\nThanks for the updated patch. I don't have any comments, except that\nthe text I suggested couple of weeks ago no longer reads clear:\n\n+ * by DDL operating on inheritance and partition trees to convert fully\n+ * transformed expression trees from parent rowtype to child rowtype or\n+ * vice-versa.\n\nMaybe:\n\n...to adjust the Vars in fully transformed expression trees to bear\noutput relation's attribute numbers.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 17 Dec 2019 13:54:27 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework manipulation and structure of attribute mappings"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 01:54:27PM +0900, Amit Langote wrote:\n> Thanks for the updated patch. I don't have any comments, except that\n> the text I suggested couple of weeks ago no longer reads clear:\n\nI have spent a couple of extra hours on the patch, and committed it.\nThere was one issue in logicalrelation.h which failed to compile\nstandalone.\n\n> + * by DDL operating on inheritance and partition trees to convert fully\n> + * transformed expression trees from parent rowtype to child rowtype or\n> + * vice-versa.\n> \n> Maybe:\n> \n> ...to adjust the Vars in fully transformed expression trees to bear\n> output relation's attribute numbers.\n\nI have used something more generic at the end:\n+ * mappings by comparing input and output TupleDescs. Such mappings\n+ * are typically used by DDL operating on inheritance and partition trees\n+ * to do a conversion between rowtypes logically equivalent but with\n+ * columns in a different order, taking into account dropped columns.\n--\nMichael",
"msg_date": "Wed, 18 Dec 2019 16:26:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Rework manipulation and structure of attribute mappings"
}
] |
[
{
"msg_contents": "Hi\n\nisn't src/tutorial/func.c obsolete? There is not any benefit for users.\n\nRegards\n\nPavel\n\nHiisn't src/tutorial/func.c obsolete? There is not any benefit for users.RegardsPavel",
"msg_date": "Thu, 21 Nov 2019 19:58:13 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "obsolete example"
},
{
"msg_contents": "Em qui., 21 de nov. de 2019 às 15:59, Pavel Stehule\n<pavel.stehule@gmail.com> escreveu:\n>\n> isn't src/tutorial/func.c obsolete? There is not any benefit for users.\n>\nversion-0 calling conventions were removed in v10. It seems an\noversight at commit 5ded4bd2140. Tutorial needs some care (I'm not\nvolunteering to improve it). I suggest unbreak the funcs module with\n'mv funcs_new.c func.c'.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n",
"msg_date": "Thu, 21 Nov 2019 20:19:56 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: obsolete example"
},
{
"msg_contents": "On Thu, Nov 21, 2019 at 08:19:56PM -0300, Euler Taveira wrote:\n> Em qui., 21 de nov. de 2019 às 15:59, Pavel Stehule\n> <pavel.stehule@gmail.com> escreveu:\n>>\n>> isn't src/tutorial/func.c obsolete? There is not any benefit for users.\n>\n> version-0 calling conventions were removed in v10. It seems an\n> oversight at commit 5ded4bd2140. Tutorial needs some care (I'm not\n> volunteering to improve it). I suggest unbreak the funcs module with\n> 'mv funcs_new.c func.c'.\n\nNo objections from here, let's get rid of it. The docs actually make\nuse of the V1 versions, and funcs_new.c is not even compiled (it does\ncompile). Any objections to the attached? On top of moving the file,\nthere is one comment to update and a sentence to remove. Some\nprogress is always better than no progress.\n--\nMichael",
"msg_date": "Fri, 22 Nov 2019 09:13:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: obsolete example"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> No objections from here, let's get rid of it. The docs actually make\n> use of the V1 versions, and funcs_new.c is not even compiled (it does\n> compile). Any objections to the attached? On top of moving the file,\n> there is one comment to update and a sentence to remove. Some\n> progress is always better than no progress.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Nov 2019 19:39:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: obsolete example"
},
{
"msg_contents": "pá 22. 11. 2019 v 1:13 odesílatel Michael Paquier <michael@paquier.xyz>\nnapsal:\n\n> On Thu, Nov 21, 2019 at 08:19:56PM -0300, Euler Taveira wrote:\n> > Em qui., 21 de nov. de 2019 às 15:59, Pavel Stehule\n> > <pavel.stehule@gmail.com> escreveu:\n> >>\n> >> isn't src/tutorial/func.c obsolete? There is not any benefit for users.\n> >\n> > version-0 calling conventions were removed in v10. It seems an\n> > oversight at commit 5ded4bd2140. Tutorial needs some care (I'm not\n> > volunteering to improve it). I suggest unbreak the funcs module with\n> > 'mv funcs_new.c func.c'.\n>\n> No objections from here, let's get rid of it. The docs actually make\n> use of the V1 versions, and funcs_new.c is not even compiled (it does\n> compile). Any objections to the attached? On top of moving the file,\n> there is one comment to update and a sentence to remove. Some\n> progress is always better than no progress.\n>\n\n+1\n\nPavel\n\n> --\n> Michael\n>\n\npá 22. 11. 2019 v 1:13 odesílatel Michael Paquier <michael@paquier.xyz> napsal:On Thu, Nov 21, 2019 at 08:19:56PM -0300, Euler Taveira wrote:\n> Em qui., 21 de nov. de 2019 às 15:59, Pavel Stehule\n> <pavel.stehule@gmail.com> escreveu:\n>>\n>> isn't src/tutorial/func.c obsolete? There is not any benefit for users.\n>\n> version-0 calling conventions were removed in v10. It seems an\n> oversight at commit 5ded4bd2140. Tutorial needs some care (I'm not\n> volunteering to improve it). I suggest unbreak the funcs module with\n> 'mv funcs_new.c func.c'.\n\nNo objections from here, let's get rid of it. The docs actually make\nuse of the V1 versions, and funcs_new.c is not even compiled (it does\ncompile). Any objections to the attached? On top of moving the file,\nthere is one comment to update and a sentence to remove. Some\nprogress is always better than no progress.+1Pavel\n--\nMichael",
"msg_date": "Fri, 22 Nov 2019 06:11:32 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: obsolete example"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 06:11:32AM +0100, Pavel Stehule wrote:\n> +1\n\nOkay, done. I have added a .gitignore while on it in the path for the\nfiles generated.\n--\nMichael",
"msg_date": "Fri, 22 Nov 2019 21:22:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: obsolete example"
}
] |
[
{
"msg_contents": "Hackers,\n\nI stumbled upon an assertion while testing master for possible\nbugs. I am reporting it here in the hope that this report will\nbe useful. The attached small regression test patch consistently\ntriggers an assert in predicate.c:\n\n TRAP: FailedAssertion(\"!isCommit || \nSxactIsPrepared(MySerializableXact)\", File: \"predicate.c\", Line: 3372)\n\nI originally hit this from sources with less than recent\ncode checked out, but the error is the same in a recent,\nfresh `git clone` (4a0aab14dcb35550b55e623a3c194442c5666084)\nThe problem does not reproduce for me in REL_12_STABLE, though the\nsame assertion does exist in that branch.\n\nI built on my laptop:\n\n Linux 4.19.0-5-amd64 #1 SMP Debian 4.19.37-3 (2019-05-15) x86_64 \nGNU/Linux\n\nI built using\n\n `./configure --enable-cassert --enable-tap-tests --with-perl \n--with-python --with-tcl`\n\nThe perl, python, and tcl options don't appear to matter, as nothing\nchanges using\n\n `./configure --enable-cassert && make -j4 && make check-world`\n\n-- \nMark Dilger",
"msg_date": "Thu, 21 Nov 2019 18:20:12 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Assertion failing in master, predicate.c"
},
{
"msg_contents": "On 11/21/19 6:20 PM, Mark Dilger wrote:\n> Hackers,\n> \n> I stumbled upon an assertion while testing master for possible\n> bugs. I am reporting it here in the hope that this report will\n> be useful. The attached small regression test patch consistently\n> triggers an assert in predicate.c:\n> \n> TRAP: FailedAssertion(\"!isCommit || \n> SxactIsPrepared(MySerializableXact)\", File: \"predicate.c\", Line: 3372)\n\nI have winnowed down the test a bit further. The attached\nsmaller patch still triggers the same assertion as the prior\npatch did.\n\n-- \nMark Dilger",
"msg_date": "Thu, 21 Nov 2019 18:36:33 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assertion failing in master, predicate.c"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> I have winnowed down the test a bit further. The attached\n> smaller patch still triggers the same assertion as the prior\n> patch did.\n\nFWIW, I can reproduce the assertion failure with your first test,\nbut not with this simplified one.\n\nI also confirm that it only happens in HEAD, not v12. I've not\nactually bisected, but a look at the git history for predicate.c\nsure makes it look like db2687d1f (\"Optimize PredicateLockTuple\")\nmust be to blame.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Nov 2019 23:03:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failing in master, predicate.c"
},
{
"msg_contents": "\n\nOn 11/21/19 8:03 PM, Tom Lane wrote:\n> Mark Dilger <hornschnorter@gmail.com> writes:\n>> I have winnowed down the test a bit further. The attached\n>> smaller patch still triggers the same assertion as the prior\n>> patch did.\n> \n> FWIW, I can reproduce the assertion failure with your first test,\n> but not with this simplified one.\n\nThanks for checking!\n\n> I also confirm that it only happens in HEAD, not v12. I've not\n> actually bisected, but a look at the git history for predicate.c\n> sure makes it look like db2687d1f (\"Optimize PredicateLockTuple\")\n> must be to blame.\n\n`git bisect` shows the problem occurs earlier than that, and by\nchance the first bad commit was one of yours. I'm not surprised\nthat your commit was regarding LISTEN/NOTIFY, as the error is\nalways triggered with a LISTEN statement. (I've now hit this\nmany times in many tests of multiple SQL statements, and the\nlast statement before the error is always a LISTEN.)\n\nI'm cc'ing Martijn because he is mentioned in that commit.\n\n\n51004c7172b5c35afac050f4d5849839a06e8d3b is the first bad commit\ncommit 51004c7172b5c35afac050f4d5849839a06e8d3b\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Sun Sep 22 11:46:29 2019 -0400\n\n Make some efficiency improvements in LISTEN/NOTIFY.\n\n Move the responsibility for advancing the NOTIFY queue tail pointer\n from the listener(s) to the notification sender, and only have the\n sender do it once every few queue pages, rather than after every batch\n of notifications as at present. This reduces the number of times we\n execute asyncQueueAdvanceTail, and reduces contention when there are\n multiple listeners (since that function requires exclusive lock).\n This change relies on the observation that we don't really need the \ntail\n pointer to be exactly up-to-date. It's certainly not necessary to\n attempt to release disk space more often than once per SLRU segment.\n The only other usage of the tail pointer is that an incoming listener,\n if it's the only listener in its database, will need to scan the queue\n forward from the tail; but that's surely a less performance-critical\n path than routine sending and receiving of notifies. We compromise by\n advancing the tail pointer after every 4 pages of output, so that it\n shouldn't get more than a few pages behind.\n\n Also, when sending signals to other backends after adding notify\n message(s) to the queue, recognize that only backends in our own\n database are going to care about those messages, so only such\n backends really need to be awakened promptly. Backends in other\n databases should get kicked if they're well behind on reading the\n queue, else they'll hold back the global tail pointer; but wakening\n them for every single message is pointless. This change can\n substantially reduce signal traffic if listeners are spread among\n many databases. It won't help for the common case of only a single\n active database, but the extra check costs very little.\n\n Martijn van Oosterhout, with some adjustments by me\n\n Discussion: \nhttps://postgr.es/m/CADWG95vtRBFDdrx1JdT1_9nhOFw48KaeTev6F_LtDQAFVpSPhA@mail.gmail.com\n Discussion: \nhttps://postgr.es/m/CADWG95uFj8rLM52Er80JnhRsTbb_AqPP1ANHS8XQRGbqLrU+jA@mail.gmail.com\n\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Fri, 22 Nov 2019 10:57:22 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assertion failing in master, predicate.c"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> On 11/21/19 8:03 PM, Tom Lane wrote:\n>> I also confirm that it only happens in HEAD, not v12. I've not\n>> actually bisected, but a look at the git history for predicate.c\n>> sure makes it look like db2687d1f (\"Optimize PredicateLockTuple\")\n>> must be to blame.\n\n> `git bisect` shows the problem occurs earlier than that, and by\n> chance the first bad commit was one of yours. I'm not surprised\n> that your commit was regarding LISTEN/NOTIFY, as the error is\n> always triggered with a LISTEN statement. (I've now hit this\n> many times in many tests of multiple SQL statements, and the\n> last statement before the error is always a LISTEN.)\n\nOh my, that's interesting! I had wondered a bit about the LISTEN\nchanges, but it's hard to see how those could have any connection\nto serializable mode. This will be an entertaining debugging\nexercise ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Nov 2019 14:07:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failing in master, predicate.c"
},
{
"msg_contents": "\n\nOn 11/22/19 11:07 AM, Tom Lane wrote:\n> Mark Dilger <hornschnorter@gmail.com> writes:\n>> On 11/21/19 8:03 PM, Tom Lane wrote:\n>>> I also confirm that it only happens in HEAD, not v12. I've not\n>>> actually bisected, but a look at the git history for predicate.c\n>>> sure makes it look like db2687d1f (\"Optimize PredicateLockTuple\")\n>>> must be to blame.\n> \n>> `git bisect` shows the problem occurs earlier than that, and by\n>> chance the first bad commit was one of yours. I'm not surprised\n>> that your commit was regarding LISTEN/NOTIFY, as the error is\n>> always triggered with a LISTEN statement. (I've now hit this\n>> many times in many tests of multiple SQL statements, and the\n>> last statement before the error is always a LISTEN.)\n> \n> Oh my, that's interesting! I had wondered a bit about the LISTEN\n> changes, but it's hard to see how those could have any connection\n> to serializable mode. This will be an entertaining debugging\n> exercise ...\n\npredicate.c was changed a few times after REL_12_STABLE was\nbranched from master but before Thomas's change that you\ninitially thought might be to blame. I checked whether\nrolling back the changes in predicate.c while keeping your\nLISTEN/NOTIFY changes might fix the bug, but alas the bug\nis still present.\n\nI'll go familiarize myself with your LISTEN/NOTIFY changes\nnow....\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Fri, 22 Nov 2019 11:22:40 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assertion failing in master, predicate.c"
},
{
"msg_contents": "\n\nOn 11/22/19 11:22 AM, Mark Dilger wrote:\n> predicate.c was changed a few times after REL_12_STABLE was\n> branched from master but before Thomas's change that you\n> initially thought might be to blame. I checked whether\n> rolling back the changes in predicate.c while keeping your\n> LISTEN/NOTIFY changes might fix the bug, but alas the bug\n> is still present.\n\nOn closer inspection, those changes were merely cosmetic\nchanges to code comments. It is no surprise rolling those\nback made no difference.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Fri, 22 Nov 2019 11:32:48 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assertion failing in master, predicate.c"
},
{
"msg_contents": "I wrote:\n> Mark Dilger <hornschnorter@gmail.com> writes:\n>> `git bisect` shows the problem occurs earlier than that, and by\n>> chance the first bad commit was one of yours. I'm not surprised\n>> that your commit was regarding LISTEN/NOTIFY, as the error is\n>> always triggered with a LISTEN statement. (I've now hit this\n>> many times in many tests of multiple SQL statements, and the\n>> last statement before the error is always a LISTEN.)\n\n> Oh my, that's interesting! I had wondered a bit about the LISTEN\n> changes, but it's hard to see how those could have any connection\n> to serializable mode. This will be an entertaining debugging\n> exercise ...\n\nIt looks to me like this is an ancient bug that just happened to be\nmade more probable by 51004c717. That Assert in predicate.c is\nbasically firing because MySerializableXact got created *after*\nPreCommit_CheckForSerializationFailure, which is what should have\nmarked it as prepared. And that will happen, if we're in serializable\nmode and this is the first LISTEN of the session, because\nCommitTransaction() calls PreCommit_Notify after it calls\nPreCommit_CheckForSerializationFailure, and PreCommit_Notify calls\nasyncQueueReadAllNotifications which wants to get a snapshot, and\nthe transaction had no snapshot before.\n\nThe only reason it's showing up now is that actually the logic is\n\n if (!QUEUE_POS_EQUAL(max, head))\n asyncQueueReadAllNotifications();\n\nthat is, we'll skip the problematic call if the notify queue is\nvisibly empty. But 51004c717 changed how aggressively we move\nthe queue tail forward, so that in this simple example we will\nnow see the queue as possibly not empty, where we would have\ndecided it was empty before.\n\nOf course, the bug exists anyway, because concurrent NOTIFY traffic\ncould certainly cause the queue to be nonempty at this point.\nI venture that the only reason we've not seen field reports of\nthis issue is that people don't run with asserts on in production\n(and, I guess, the problem is actually harmless except for the\nAssert). Or maybe people don't use serializable mode in apps\nthat use LISTEN/NOTIFY?\n\nAnyway, it seems like the simplest fix is to swap the order of\nthe PreCommit_CheckForSerializationFailure and PreCommit_Notify\nsteps in CommitTransaction. There's also PrepareTransaction\nto think about, but there again we could just move AtPrepare_Notify\nup; it's only going to throw an error anyway, so we might as well\ndo that sooner.\n\nAn alternative idea is to use some other way of getting a snapshot\nin asyncQueueReadAllNotifications, one that always gets a current\nsnapshot and doesn't enter predicate.c. But that might have semantic\nconsequences on the timing of notifications. I'm not really sure\nthat anybody's ever thought hard about how async.c ought to act\nin serializable mode, so this might or might not be a good change.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Nov 2019 18:25:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failing in master, predicate.c"
},
{
"msg_contents": "\n\nOn 11/22/19 3:25 PM, Tom Lane wrote:\n> I wrote:\n>> Mark Dilger <hornschnorter@gmail.com> writes:\n>>> `git bisect` shows the problem occurs earlier than that, and by\n>>> chance the first bad commit was one of yours. I'm not surprised\n>>> that your commit was regarding LISTEN/NOTIFY, as the error is\n>>> always triggered with a LISTEN statement. (I've now hit this\n>>> many times in many tests of multiple SQL statements, and the\n>>> last statement before the error is always a LISTEN.)\n> \n>> Oh my, that's interesting! I had wondered a bit about the LISTEN\n>> changes, but it's hard to see how those could have any connection\n>> to serializable mode. This will be an entertaining debugging\n>> exercise ...\n> \n> It looks to me like this is an ancient bug that just happened to be\n> made more probable by 51004c717. That Assert in predicate.c is\n> basically firing because MySerializableXact got created *after*\n> PreCommit_CheckForSerializationFailure, which is what should have\n> marked it as prepared. And that will happen, if we're in serializable\n> mode and this is the first LISTEN of the session, because\n> CommitTransaction() calls PreCommit_Notify after it calls\n> PreCommit_CheckForSerializationFailure, and PreCommit_Notify calls\n> asyncQueueReadAllNotifications which wants to get a snapshot, and\n> the transaction had no snapshot before.\n> \n> The only reason it's showing up now is that actually the logic is\n> \n> if (!QUEUE_POS_EQUAL(max, head))\n> asyncQueueReadAllNotifications();\n> \n> that is, we'll skip the problematic call if the notify queue is\n> visibly empty. But 51004c717 changed how aggressively we move\n> the queue tail forward, so that in this simple example we will\n> now see the queue as possibly not empty, where we would have\n> decided it was empty before.\n\nRight, I've been staring at that code for the last couple hours,\ntrying to see a problem with it. I tried making the code a bit\nmore aggressive about moving the tail forward to see if that\nwould help, but the only fix that worked was completely reverting\nyours and Martijn's commit. It makes sense now.\n\n> Of course, the bug exists anyway, because concurrent NOTIFY traffic\n> could certainly cause the queue to be nonempty at this point.\n> I venture that the only reason we've not seen field reports of\n> this issue is that people don't run with asserts on in production\n> (and, I guess, the problem is actually harmless except for the\n> Assert). Or maybe people don't use serializable mode in apps\n> that use LISTEN/NOTIFY?\n> \n> Anyway, it seems like the simplest fix is to swap the order of\n> the PreCommit_CheckForSerializationFailure and PreCommit_Notify\n> steps in CommitTransaction. There's also PrepareTransaction\n> to think about, but there again we could just move AtPrepare_Notify\n> up; it's only going to throw an error anyway, so we might as well\n> do that sooner.\n\nI changed PrepareTransaction and CommitTransaction in the manner\nyou suggest, and the tests pass now. I have not yet looked over\nall the other possible implications of this change, so I'll go\ndo that for a while.\n\n> \n> An alternative idea is to use some other way of getting a snapshot\n> in asyncQueueReadAllNotifications, one that always gets a current\n> snapshot and doesn't enter predicate.c. But that might have semantic\n> consequences on the timing of notifications. I'm not really sure\n> that anybody's ever thought hard about how async.c ought to act\n> in serializable mode, so this might or might not be a good change.\n\nThe semantics of receiving a notification in serializable mode are\nnot clear, unless you just insist on not receiving any. The whole\npoint of serializable mode, as I understand it, it to be given the\nimpression that all your work happens either before or after other\ntransactions' work. It hardly makes sense to receive a notification\nmid transaction informing you of some other transaction having just\nchanged something.\n\nI don't propose any changes to this, though, since it may break\nexisting applications. I prefer the simplicity of your suggestion\nabove.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Fri, 22 Nov 2019 16:01:15 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assertion failing in master, predicate.c"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> On 11/22/19 3:25 PM, Tom Lane wrote:\n>> An alternative idea is to use some other way of getting a snapshot\n>> in asyncQueueReadAllNotifications, one that always gets a current\n>> snapshot and doesn't enter predicate.c. But that might have semantic\n>> consequences on the timing of notifications. I'm not really sure\n>> that anybody's ever thought hard about how async.c ought to act\n>> in serializable mode, so this might or might not be a good change.\n\n> The semantics of receiving a notification in serializable mode are\n> not clear, unless you just insist on not receiving any. The whole\n> point of serializable mode, as I understand it, it to be given the\n> impression that all your work happens either before or after other\n> transactions' work. It hardly makes sense to receive a notification\n> mid transaction informing you of some other transaction having just\n> changed something.\n\nWell, you don't: notifications are only sent to the client between\ntransactions. After sleeping on it I have these thoughts:\n\n* The other two callers of asyncQueueReadAllNotifications have just\nstarted fresh transactions, so they have no issue. Regardless of\nthe session isolation level, they'll be reading the queue with a\nfreshly-taken snapsnot.\n\n* The point of calling asyncQueueReadAllNotifications in\nExec_ListenPreCommit is to advance over already-committed queue entries\nbefore we start sending any events to the client. Without this, a\nnewly-listening client could be sent some very stale events. 
(Note\nthat 51004c717 changed this from \"somewhat stale\" to \"very stale\".\nI had thought briefly about whether we could fix the problem by just\nremoving this call of asyncQueueReadAllNotifications, but I do not\nthink people would find that side-effect acceptable.)\n\n* Given that the idea is to ignore already-committed entries, it makes\nsense that if LISTEN is called inside a serializable transaction block\nthen the cutoff ought to be the transaction's snapshot. Otherwise we'd\nbe dropping notifications from transactions that the calling session\ncan't have seen the effects of. That defeats the whole point.\n\n* This says that not only is it okay to use a serializable snapshot\nas the reference, but we *should* do so; that is, it's actually wrong\nto use GetLatestSnapshot here, we should use GetTransactionSnapshot.\nThere's not going to be any real difference in read-committed mode,\nbut in repeatable-read or serializable mode we are risking dropping\nevents that it'd be better to send to the client.\n\nI'm disinclined to make such a change in the back branches, but it'd\nbe reasonable to do so in HEAD.\n\nMeanwhile, as far as fixing the assertion failure goes, I don't see\nany alternative except to rearrange the order of operations during\ncommit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 23 Nov 2019 11:07:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failing in master, predicate.c"
},
{
"msg_contents": "I wrote:\n> * Given that the idea is to ignore already-committed entries, it makes\n> sense that if LISTEN is called inside a serializable transaction block\n> then the cutoff ought to be the transaction's snapshot. Otherwise we'd\n> be dropping notifications from transactions that the calling session\n> can't have seen the effects of. That defeats the whole point.\n\n> * This says that not only is it okay to use a serializable snapshot\n> as the reference, but we *should* do so; that is, it's actually wrong\n> to use GetLatestSnapshot here, we should use GetTransactionSnapshot.\n> There's not going to be any real difference in read-committed mode,\n> but in repeatable-read or serializable mode we are risking dropping\n> events that it'd be better to send to the client.\n\n> I'm disinclined to make such a change in the back branches, but it'd\n> be reasonable to do so in HEAD.\n\nI spent some time working on this, but then realized that the idea\nhas a fatal problem. We cannot guarantee receipt of all notifications\nsince the transaction snapshot, because if our session isn't yet\nlistening, there's nothing to stop other transactions from trimming\naway notify queue entries as soon as all the already-listening sessions\nhave read them.\n\nOne could imagine changing the queue-trimming rules to avoid this,\nbut I think it's pointless. The right way to use LISTEN is to be\nsure you commit it before inspecting database state, and that's\nindependent of isolation level.\n\nI'd written some documentation and comment changes around this,\nclaiming falsely that Repeatable Read or Serializable isolation\nwould now let you make more assumptions about the timing of the\nfirst received notification. I'll get rid of those claims and\njust commit the docs changes --- it seems worthwhile to clarify\nwhat is and isn't safe use of LISTEN. But the code should be\nleft as-is, I'm thinking.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Nov 2019 17:21:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failing in master, predicate.c"
}
] |
[
{
"msg_contents": "Hi all\n\nI find a situation that WAL archive file is lost but any WAL segment file is not lost.\nIt causes for archive recovery to fail. Is this behavior a bug?\n\nexample:\n\n WAL segment files\n 000000010000000000000001\n 000000010000000000000002\n 000000010000000000000003\n\n Archive files\n 000000010000000000000001\n 000000010000000000000003\n\n Archive file 000000010000000000000002 is lost but WAL segment files\n is continuous. Recovery with archive (i.e. PITR) stops at the end of\n 000000010000000000000001.\n\nHow to reproduce:\n- Set up replication (primary and standby).\n- Set [archive_mode = always] in standby.\n- WAL receiver exits (i.e. because primary goes down)\n after receiver inserts the last record in some WAL segment file\n before receiver notifies the segement file to archiver(create .ready file).\n\nEven if WAL receiver restarts, the WAL segment file is not notified to \narchiver.\n\n\nRegards\nRyo Matsumura\n\n\n",
"msg_date": "Fri, 22 Nov 2019 05:31:55 +0000",
"msg_from": "\"matsumura.ryo@fujitsu.com\" <matsumura.ryo@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "WAL archive is lost"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 05:31:55AM +0000, matsumura.ryo@fujitsu.com wrote:\n>Hi all\n>\n>I find a situation that WAL archive file is lost but any WAL segment file is not lost.\n>It causes for archive recovery to fail. Is this behavior a bug?\n>\n>example:\n>\n> WAL segment files\n> 000000010000000000000001\n> 000000010000000000000002\n> 000000010000000000000003\n>\n> Archive files\n> 000000010000000000000001\n> 000000010000000000000003\n>\n> Archive file 000000010000000000000002 is lost but WAL segment files\n> is continuous. Recovery with archive (i.e. PITR) stops at the end of\n> 000000010000000000000001.\n>\n>How to reproduce:\n>- Set up replication (primary and standby).\n>- Set [archive_mode = always] in standby.\n>- WAL receiver exits (i.e. because primary goes down)\n> after receiver inserts the last record in some WAL segment file\n> before receiver notifies the segement file to archiver(create .ready file).\n>\n>Even if WAL receiver restarts, the WAL segment file is not notified to\n>archiver.\n>\n\nThat does indeed seem like a bug. We should certainly archive all WAL\nsegments, irrespectedly of primary shutdowns/restarts/whatever. I guess\nwe should make sure the archiver is properly notified befor ethe exit.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 20:44:40 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL archive is lost"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 8:04 AM matsumura.ryo@fujitsu.com <\nmatsumura.ryo@fujitsu.com> wrote:\n\n> Hi all\n>\n> I find a situation that WAL archive file is lost but any WAL segment file\n> is not lost.\n> It causes for archive recovery to fail. Is this behavior a bug?\n>\n> example:\n>\n> WAL segment files\n> 000000010000000000000001\n> 000000010000000000000002\n> 000000010000000000000003\n>\n> Archive files\n> 000000010000000000000001\n> 000000010000000000000003\n>\n> Archive file 000000010000000000000002 is lost but WAL segment files\n> is continuous. Recovery with archive (i.e. PITR) stops at the end of\n> 000000010000000000000001.\n>\n\nWill it not archive 000000010000000000000002 eventually, like at the\nconclusion of the next restartpoint? or does it get recycled/removed\nwithout ever being archived? Or does it just hang out forever in pg_wal?\n\n\n\n> How to reproduce:\n> - Set up replication (primary and standby).\n> - Set [archive_mode = always] in standby.\n> - WAL receiver exits (i.e. because primary goes down)\n> after receiver inserts the last record in some WAL segment file\n> before receiver notifies the segement file to archiver(create .ready\n> file).\n>\n\nDo you have a trick for reliably achieving this last step?\n\nCheers,\n\nJeff\n\nOn Fri, Nov 22, 2019 at 8:04 AM matsumura.ryo@fujitsu.com <matsumura.ryo@fujitsu.com> wrote:Hi all\n\nI find a situation that WAL archive file is lost but any WAL segment file is not lost.\nIt causes for archive recovery to fail. Is this behavior a bug?\n\nexample:\n\n WAL segment files\n 000000010000000000000001\n 000000010000000000000002\n 000000010000000000000003\n\n Archive files\n 000000010000000000000001\n 000000010000000000000003\n\n Archive file 000000010000000000000002 is lost but WAL segment files\n is continuous. Recovery with archive (i.e. 
PITR) stops at the end of\n 000000010000000000000001.Will it not archive \n\n000000010000000000000002 eventually, like at the conclusion of the next restartpoint? or does it get recycled/removed without ever being archived? Or does it just hang out forever in pg_wal? \n\nHow to reproduce:\n- Set up replication (primary and standby).\n- Set [archive_mode = always] in standby.\n- WAL receiver exits (i.e. because primary goes down)\n after receiver inserts the last record in some WAL segment file\n before receiver notifies the segement file to archiver(create .ready file).Do you have a trick for reliably achieving this last step?Cheers,Jeff",
"msg_date": "Sat, 23 Nov 2019 09:10:35 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL archive is lost"
},
{
"msg_contents": "Tomas-san and Jeff-san\r\n\r\nI'm very sorry for my slow response.\r\n\r\nTomas-san wrote:\r\n> That does indeed seem like a bug. We should certainly archive all WAL\r\n> segments, irrespectedly of primary shutdowns/restarts/whatever.\r\n\r\nI think so, too.\r\n\r\nTomas-san wrote:\r\n> I guess we should make sure the archiver is properly notified befor\r\n> ethe exit.\r\n\r\nJust an idea.\r\nIf walrcv_receive(libpqrcv_receive) returns by error value when \r\nsocket error is occured, it is enable for walreceiver to walk\r\nendofwal-route that calls XLogArchiveNotify() in the end of\r\noutter loop of walreceiver.\r\n\r\n 593 XLogArchiveNotify(xlogfname);\r\n 594 }\r\n 595 recvFile = -1;\r\n 596\r\n 597 elog(DEBUG1, \"walreceiver ended streaming and awaits new instructions\");\r\n 598 Wal\r\n\r\nJeff-san wrote:\r\n> Will it not archive 000000010000000000000002 eventually, like at the\r\n> conclusion of the next restartpoint? or does it get recycled/removed\r\n> without ever being archived? 
Or does it just hang out forever in pg_wal?\r\n\r\n000000010000000000000002 hang out forever.\r\n000000010000000000000002 will be never archived, recycled, and removed.\r\n\r\nI found that even if archive_mode is not set to 'always',\r\nit will be never recycled and removed.\r\n\r\nJeff-san wrote:\r\n> Do you have a trick for reliably achieving this last step?\r\n\r\nIf possible, stop walsender just after it sends the end record of in one\r\nWAL segement file or SWITCH_LOG, and then stop primary immediately.\r\n\r\nThere are two pattern that cause this issue.\r\n\r\nPattern 1.\r\nIf primary is shut down immediately when walreceiver receives the end\r\nrecord of one WAL segment file and then wait for next record by walrcv_receive(),\r\nwalreceiver exits without XLogArchiveNotify() or XLogArchiveForceDone() in\r\nXLogWalRcvWrite() because walrcv_receive() reports ERROR.\r\nEven if the startup process restarts walreceiver and requests to start\r\nfrom the top of next segement file. Then, walreceiver receives it and\r\nwrites by XLogWalRcvWrite() but it doesn't walk the route to XLogArchiveNotify()\r\nbecause it has not opened any file (recvFile == -1).\r\n\r\nPattern 2.\r\nOnly trigger is different.\r\nIf primary is shut down immediately when walreceiver receives SWITCH_LOG\r\nand then wait for next record by walrcv_receive(), walreceiver exits\r\nwithout notification to archiver.\r\nThe startup process will tell for walreceiver to start receiving from\r\nthe top of next segment file.\r\n\r\nRegards\r\nRyo Matsumura\r\n",
"msg_date": "Fri, 29 Nov 2019 01:44:39 +0000",
"msg_from": "\"matsumura.ryo@fujitsu.com\" <matsumura.ryo@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: WAL archive is lost"
}
] |
[
{
"msg_contents": "Hi all\n\nLibpq may be blocked by recv without checking data arrival\nwhen libpq could not send data enough.\nI think it should check by pqReadReady() for avoiding blocking.\n\n Note: I didn't encounter any issue that the pqReadData is blocked.\n\n[src/interfaces/libpq/fe-misc.c]\n pqSendSome(PGconn *conn, int len)\n :\n sent = pqsecure_write(conn, ptr, Min(len, 65536));\n if (sent < 0)\n :\n else\n {\n len -= sent;\n }\n if (len > 0)\n {\n if (pqReadData(conn) < 0) // read without checking\n\nMust the pqReadData() return without blocking if it could not send enough?\nIt may be 'yes', but I think there is no guarantee that there is some data\nand pqReadData() is not blocked.\n\nI think the following is better. How about it?\n< if (pqReadData(conn) < 0)\n> if (pqReadReady(conn) && pqReadData(conn) < 0)\n\nRegards\nRyo Matsumura\n\n\n",
"msg_date": "Fri, 22 Nov 2019 07:32:48 +0000",
"msg_from": "\"matsumura.ryo@fujitsu.com\" <matsumura.ryo@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "libpq calls blocking recv when it could not send data enough."
}
] |
[
{
"msg_contents": "Hi,\nTypo mystake?\nMemset only fill a pointer size, not the size of struct.\n\nBest regards.\nRanier Vilela\n\n--- \\dll\\postgresql-12.0\\a\\backend\\access\\rmgrdesc\\xactdesc.c\tMon Sep 30 17:06:55 2019\n+++ xactdesc.c\tFri Nov 22 13:40:13 2019\n@@ -35,7 +35,7 @@\n {\n \tchar\t *data = ((char *) xlrec) + MinSizeOfXactCommit;\n \n-\tmemset(parsed, 0, sizeof(*parsed));\n+\tmemset(parsed, 0, sizeof(xl_xact_parsed_commit));\n \n \tparsed->xinfo = 0;\t\t\t/* default, if no XLOG_XACT_HAS_INFO is\n \t\t\t\t\t\t\t\t * present */\n@@ -130,7 +130,7 @@\n {\n \tchar\t *data = ((char *) xlrec) + MinSizeOfXactAbort;\n \n-\tmemset(parsed, 0, sizeof(*parsed));\n+\tmemset(parsed, 0, sizeof(xl_xact_parsed_commit));\n \n \tparsed->xinfo = 0;\t\t\t/* default, if no XLOG_XACT_HAS_INFO is\n \t\t\t\t\t\t\t\t * present */",
"msg_date": "Fri, 22 Nov 2019 16:50:45 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH][BUG FIX] Uninitialized variable parsed"
},
{
"msg_contents": "\n\nOn 11/22/19 8:50 AM, Ranier Vilela wrote:\n> Hi,\n> Typo mystake?\n> Memset only fill a pointer size, not the size of struct.\n\nHello Ranier,\n\nI think you may be misunderstanding how sizeof(*parsed)\nworks in the attached code. Try compiling and running:\n\n #include <stdio.h>\n\n typedef struct mystruct {\n long long a[512];\n } mystruct;\n\n int main (void)\n {\n mystruct *myptr;\n printf(\"sizeof *myptr = %u\\n\", sizeof(*myptr));\n printf(\"sizeof mystruct * = %u\\n\", sizeof(mystruct *));\n return 0;\n }\n\nOn my hardware, I get:\n\n$ ./a.out\nsizeof *myptr = 4096\nsizeof mystruct * = 8\n\nWhich I think demonstrates that sizeof(*myptr) works just as\nwell as sizeof(mystruct) would work. It is also better style\nsince, if somebody changes the type of myptr, the sizeof()\ndoes not need to be adjusted in kind.\n\n\n> Best regards.\n> Ranier Vilela\n> \n> --- \\dll\\postgresql-12.0\\a\\backend\\access\\rmgrdesc\\xactdesc.c\tMon Sep 30 17:06:55 2019\n> +++ xactdesc.c\tFri Nov 22 13:40:13 2019\n> @@ -35,7 +35,7 @@\n> {\n> \tchar\t *data = ((char *) xlrec) + MinSizeOfXactCommit;\n> \n> -\tmemset(parsed, 0, sizeof(*parsed));\n> +\tmemset(parsed, 0, sizeof(xl_xact_parsed_commit));\n> \n> \tparsed->xinfo = 0;\t\t\t/* default, if no XLOG_XACT_HAS_INFO is\n> \t\t\t\t\t\t\t\t * present */\n> @@ -130,7 +130,7 @@\n> {\n> \tchar\t *data = ((char *) xlrec) + MinSizeOfXactAbort;\n> \n> -\tmemset(parsed, 0, sizeof(*parsed));\n> +\tmemset(parsed, 0, sizeof(xl_xact_parsed_commit));\n> \n> \tparsed->xinfo = 0;\t\t\t/* default, if no XLOG_XACT_HAS_INFO is\n> \t\t\t\t\t\t\t\t * present */\n> \n\n-- \nMark Dilger\n\n\n",
"msg_date": "Fri, 22 Nov 2019 10:08:43 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH][BUG FIX] Uninitialized variable parsed"
}
] |
[
{
"msg_contents": "Hi,\nPointer addition with NULL, is technically undefined behavior.\n\nBest regards.\nRanier Vilela\n\n--- \\dll\\postgresql-12.0\\a\\backend\\access\\transam\\xlog.c\tMon Sep 30 17:06:55 2019\n+++ xlog.c\tFri Nov 22 13:57:17 2019\n@@ -1861,7 +1861,7 @@\n \t{\n \t\tAssert(((XLogPageHeader) cachedPos)->xlp_magic == XLOG_PAGE_MAGIC);\n \t\tAssert(((XLogPageHeader) cachedPos)->xlp_pageaddr == ptr - (ptr % XLOG_BLCKSZ));\n-\t\treturn cachedPos + ptr % XLOG_BLCKSZ;\n+\t\treturn ptr % XLOG_BLCKSZ;\n \t}\n \n \t/*",
"msg_date": "Fri, 22 Nov 2019 17:19:11 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH][BUG FIX] Pointer arithmetic with NULL"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 05:19:11PM +0000, Ranier Vilela wrote:\n>Hi,\n>Pointer addition with NULL, is technically undefined behavior.\n>\n>Best regards.\n>Ranier Vilela\n>\n>--- \\dll\\postgresql-12.0\\a\\backend\\access\\transam\\xlog.c\tMon Sep 30 17:06:55 2019\n>+++ xlog.c\tFri Nov 22 13:57:17 2019\n>@@ -1861,7 +1861,7 @@\n> \t{\n> \t\tAssert(((XLogPageHeader) cachedPos)->xlp_magic == XLOG_PAGE_MAGIC);\n> \t\tAssert(((XLogPageHeader) cachedPos)->xlp_pageaddr == ptr - (ptr % XLOG_BLCKSZ));\n>-\t\treturn cachedPos + ptr % XLOG_BLCKSZ;\n>+\t\treturn ptr % XLOG_BLCKSZ;\n> \t}\n>\n> \t/*\n\nBut the value is not necessarily NULL, because it's defined like this:\n\n\tstatic char *cachedPos = NULL;\n\nthat is, it's a static value - i.e. retained across multiple calls. The\nquestion is whether we can get into that branch before it's set, but\nit's certainly not correct to just remove it ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 21:07:41 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH][BUG FIX] Pointer arithmetic with NULL"
}
] |
[
{
"msg_contents": "Hi,\nTypo mystake?\nPointer var initilialized with boolean.\n\nBest regards.\nRanier Vilela\n\n--- \\dll\\postgresql-12.0\\a\\backend\\commands\\trigger.c\tMon Sep 30 17:06:55 2019\n+++ trigger.c\tFri Nov 22 14:20:56 2019\n@@ -2536,7 +2536,7 @@\n \t\t\t\t\t TupleTableSlot *slot)\n {\n \tTriggerDesc *trigdesc = relinfo->ri_TrigDesc;\n-\tHeapTuple\tnewtuple = false;\n+\tHeapTuple\tnewtuple = NULL;\n \tbool\t\tshould_free;\n \tTriggerData LocTriggerData;\n \tint\t\t\ti;\n@@ -3178,7 +3178,7 @@\n {\n \tTriggerDesc *trigdesc = relinfo->ri_TrigDesc;\n \tTupleTableSlot *oldslot = ExecGetTriggerOldSlot(estate, relinfo);\n-\tHeapTuple\tnewtuple = false;\n+\tHeapTuple\tnewtuple = NULL;\n \tbool\t\tshould_free;\n \tTriggerData LocTriggerData;\n \tint\t\t\ti;",
"msg_date": "Fri, 22 Nov 2019 17:29:54 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH][BUG FIX] Pointer var initilialized with boolean."
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 9:30 AM Ranier Vilela <ranier_gyn@hotmail.com> wrote:\n>\n> Hi,\n> Typo mystake?\n> Pointer var initilialized with boolean.\n\nThis was already fixed by commit 0cafdd03a850265006c0ada1b0bf4f83e087a409.\n\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 22 Nov 2019 09:37:08 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH][BUG FIX] Pointer var initilialized with boolean."
}
] |
[
{
"msg_contents": "Hi,\nMaybe it doesn't matter, but, I think it's worth discussing.\nThe expression \"if(pstate)\" is redundant or is possible null dereference.\n\nBest regards.\nRanier Vilela\n\n--- \\dll\\postgresql-12.0\\a\\backend\\commands\\copy.c\tMon Sep 30 17:06:55 2019\n+++ copy.c\tFri Nov 22 18:33:05 2019\n@@ -3426,8 +3426,7 @@\n \tcstate->raw_buf_index = cstate->raw_buf_len = 0;\n \n \t/* Assign range table, we'll need it in CopyFrom. */\n-\tif (pstate)\n-\t\tcstate->range_table = pstate->p_rtable;\n+\tcstate->range_table = pstate->p_rtable;\n \n \ttupDesc = RelationGetDescr(cstate->rel);\n \tnum_phys_attrs = tupDesc->natts;",
"msg_date": "Fri, 22 Nov 2019 21:41:44 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Tiny optmization."
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 09:41:44PM +0000, Ranier Vilela wrote:\n>Hi,\n>Maybe it doesn't matter, but, I think it's worth discussing.\n>The expression \"if(pstate)\" is redundant or is possible null dereference.\n>\n\nEh? Redundant with what? Why would it be a null dereference? It's a\nparameter passed from outside, and we're not checking it before. And\nthe if condition is there exactly to prevent null dereference.\n\nIt's generally a good idea to inspect existing callers of the modified\nfunction and try running tests before submitting a patch. In this case\nthere's a BeginCopyFrom() call in contrib/file_fdw, passing NULL as the\nfirst parameter, and if you run `make check` for that module it falls\nflat on it's face due to a segfault.\n\nregards\n\n>\n>--- \\dll\\postgresql-12.0\\a\\backend\\commands\\copy.c\tMon Sep 30 17:06:55 2019\n>+++ copy.c\tFri Nov 22 18:33:05 2019\n>@@ -3426,8 +3426,7 @@\n> \tcstate->raw_buf_index = cstate->raw_buf_len = 0;\n>\n> \t/* Assign range table, we'll need it in CopyFrom. */\n>-\tif (pstate)\n>-\t\tcstate->range_table = pstate->p_rtable;\n>+\tcstate->range_table = pstate->p_rtable;\n>\n> \ttupDesc = RelationGetDescr(cstate->rel);\n> \tnum_phys_attrs = tupDesc->natts;\n\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 23:05:08 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Tiny optmization."
},
{
"msg_contents": "Hi,\nRedudant because he it's been dereferenced here:\n\nline 3410:\n cstate = BeginCopy(pstate, true, rel, NULL, InvalidOid, attnamelist, options);\n\nBest regards.\nRanier Vilela\n\n________________________________________\nDe: Tomas Vondra <tomas.vondra@2ndquadrant.com>\nEnviado: sexta-feira, 22 de novembro de 2019 22:05\nPara: Ranier Vilela\nCc: pgsql-hackers@postgresql.org\nAssunto: Re: [PATCH] Tiny optmization.\n\nOn Fri, Nov 22, 2019 at 09:41:44PM +0000, Ranier Vilela wrote:\n>Hi,\n>Maybe it doesn't matter, but, I think it's worth discussing.\n>The expression \"if(pstate)\" is redundant or is possible null dereference.\n>\n\nEh? Redundant with what? Why would it be a null dereference? It's a\nparameter passed from outside, and we're not checking it before. And\nthe if condition is there exactly to prevent null dereference.\n\nIt's generally a good idea to inspect existing callers of the modified\nfunction and try running tests before submitting a patch. In this case\nthere's a BeginCopyFrom() call in contrib/file_fdw, passing NULL as the\nfirst parameter, and if you run `make check` for that module it falls\nflat on it's face due to a segfault.\n\nregards\n\n>\n>--- \\dll\\postgresql-12.0\\a\\backend\\commands\\copy.c Mon Sep 30 17:06:55 2019\n>+++ copy.c Fri Nov 22 18:33:05 2019\n>@@ -3426,8 +3426,7 @@\n> cstate->raw_buf_index = cstate->raw_buf_len = 0;\n>\n> /* Assign range table, we'll need it in CopyFrom. */\n>- if (pstate)\n>- cstate->range_table = pstate->p_rtable;\n>+ cstate->range_table = pstate->p_rtable;\n>\n> tupDesc = RelationGetDescr(cstate->rel);\n> num_phys_attrs = tupDesc->natts;\n\n\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 22:10:29 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Tiny optmization."
},
{
"msg_contents": "Ranier Vilela <ranier_gyn@hotmail.com> writes:\n> Redudant because he it's been dereferenced here:\n> line 3410:\n> cstate = BeginCopy(pstate, true, rel, NULL, InvalidOid, attnamelist, options);\n\nNot necessarily ... the rel!=NULL code path there doesn't touch pstate,\nand that seems to be what contrib/file_fdw is relying on.\n\nArguably, the rel==NULL code path in BeginCopy should be prepared to\nsupport pstate being null, too. But what you proposed here is certainly\nnot OK.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Nov 2019 17:17:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Tiny optmization."
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 10:10:29PM +0000, Ranier Vilela wrote:\n>Hi,\n>Redudant because he it's been dereferenced here:\n>\n>line 3410:\n> cstate = BeginCopy(pstate, true, rel, NULL, InvalidOid, attnamelist, options);\n>\n\nThere's no pstate dereference here. It just passed the value to\nBeginCopy.\n\nBTW please don't top post, reply inline. It's much easier to follow the\ndiscussion.\n\n\n>Best regards.\n>Ranier Vilela\n>\n>________________________________________\n>De: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>Enviado: sexta-feira, 22 de novembro de 2019 22:05\n>Para: Ranier Vilela\n>Cc: pgsql-hackers@postgresql.org\n>Assunto: Re: [PATCH] Tiny optmization.\n>\n>On Fri, Nov 22, 2019 at 09:41:44PM +0000, Ranier Vilela wrote:\n>>Hi,\n>>Maybe it doesn't matter, but, I think it's worth discussing.\n>>The expression \"if(pstate)\" is redundant or is possible null dereference.\n>>\n>\n>Eh? Redundant with what? Why would it be a null dereference? It's a\n>parameter passed from outside, and we're not checking it before. And\n>the if condition is there exactly to prevent null dereference.\n>\n>It's generally a good idea to inspect existing callers of the modified\n>function and try running tests before submitting a patch. In this case\n>there's a BeginCopyFrom() call in contrib/file_fdw, passing NULL as the\n>first parameter, and if you run `make check` for that module it falls\n>flat on it's face due to a segfault.\n>\n>regards\n>\n>>\n>>--- \\dll\\postgresql-12.0\\a\\backend\\commands\\copy.c Mon Sep 30 17:06:55 2019\n>>+++ copy.c Fri Nov 22 18:33:05 2019\n>>@@ -3426,8 +3426,7 @@\n>> cstate->raw_buf_index = cstate->raw_buf_len = 0;\n>>\n>> /* Assign range table, we'll need it in CopyFrom. 
*/\n>>- if (pstate)\n>>- cstate->range_table = pstate->p_rtable;\n>>+ cstate->range_table = pstate->p_rtable;\n>>\n>> tupDesc = RelationGetDescr(cstate->rel);\n>> num_phys_attrs = tupDesc->natts;\n>\n>\n>\n>--\n>Tomas Vondra http://www.2ndQuadrant.com\n>PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 23:18:55 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Tiny optmization."
},
{
"msg_contents": "Hi,\npstate is touched here:\na) BeginCopy line 1489:\n\tProcessCopyOptions(pstate, cstate, is_from, options);\nb) ProcessCopyOptions line 1137:\n\n\t\t\tif (format_specified)\n\t\t\t\tereport(ERROR,\n\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n\t\t\t\t\t\t errmsg(\"conflicting or redundant options\"),\n\t\t\t\t\t\t parser_errposition(pstate, defel->location)));\n\nbest regards.\nRanier Vilela\n\n________________________________________\nDe: Tom Lane <tgl@sss.pgh.pa.us>\nEnviado: sexta-feira, 22 de novembro de 2019 22:17\nPara: Ranier Vilela\nCc: pgsql-hackers@postgresql.org\nAssunto: Re: [PATCH] Tiny optmization.\n\nRanier Vilela <ranier_gyn@hotmail.com> writes:\n> Redudant because he it's been dereferenced here:\n> line 3410:\n> cstate = BeginCopy(pstate, true, rel, NULL, InvalidOid, attnamelist, options);\n\nNot necessarily ... the rel!=NULL code path there doesn't touch pstate,\nand that seems to be what contrib/file_fdw is relying on.\n\nArguably, the rel==NULL code path in BeginCopy should be prepared to\nsupport pstate being null, too. But what you proposed here is certainly\nnot OK.\n\n regards, tom lane\n\n\n",
"msg_date": "Fri, 22 Nov 2019 22:24:05 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Tiny optmization."
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 10:24:05PM +0000, Ranier Vilela wrote:\n>Hi,\n>pstate is touched here:\n>a) BeginCopy line 1489:\n>\tProcessCopyOptions(pstate, cstate, is_from, options);\n>b) ProcessCopyOptions line 1137:\n>\n>\t\t\tif (format_specified)\n>\t\t\t\tereport(ERROR,\n>\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n>\t\t\t\t\t\t errmsg(\"conflicting or redundant options\"),\n>\t\t\t\t\t\t parser_errposition(pstate, defel->location)));\n>\n\nAnd? The fact that you pass a (possibly NULL) pointer somewhere does not\nmake that a dereference. And parser_errposition() does this:\n\n\tif (pstate == NULL || pstate->p_sourcetext == NULL)\n\t\treturn 0;\n\nSo I fail to see why this would be an issue?\n\n\n\n>best regards.\n>Ranier Vilela\n>\n>________________________________________\n>De: Tom Lane <tgl@sss.pgh.pa.us>\n>Enviado: sexta-feira, 22 de novembro de 2019 22:17\n>Para: Ranier Vilela\n>Cc: pgsql-hackers@postgresql.org\n>Assunto: Re: [PATCH] Tiny optmization.\n>\n>Ranier Vilela <ranier_gyn@hotmail.com> writes:\n>> Redudant because he it's been dereferenced here:\n>> line 3410:\n>> cstate = BeginCopy(pstate, true, rel, NULL, InvalidOid, attnamelist, options);\n>\n>Not necessarily ... the rel!=NULL code path there doesn't touch pstate,\n>and that seems to be what contrib/file_fdw is relying on.\n>\n>Arguably, the rel==NULL code path in BeginCopy should be prepared to\n>support pstate being null, too. But what you proposed here is certainly\n>not OK.\n>\n> regards, tom lane\n>\n>\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 23:29:23 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Tiny optmization."
}
] |
[
{
"msg_contents": "Hi,\nMaybe this is a real bug.\n\nThe assignment has no effect, or forget dereferencing it?\n\nBest regards.\nRanier Vilela\n\n--- \\dll\\postgresql-12.0\\a\\backend\\commands\\lockcmds.c\tMon Sep 30 17:06:55 2019\n+++ lockcmds.c\tFri Nov 22 18:45:01 2019\n@@ -285,7 +285,7 @@\n \n \tLockViewRecurse_walker((Node *) viewquery, &context);\n \n-\tancestor_views = list_delete_oid(ancestor_views, reloid);\n+\tlist_delete_oid(ancestor_views, reloid);\n \n \ttable_close(view, NoLock);\n }",
"msg_date": "Fri, 22 Nov 2019 21:51:50 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Tiny optimization or is a bug?"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 09:51:50PM +0000, Ranier Vilela wrote:\n>Hi,\n>Maybe this is a real bug.\n>\n>The assignment has no effect, or forget dereferencing it?\n>\n>Best regards.\n>Ranier Vilela\n>\n>--- \\dll\\postgresql-12.0\\a\\backend\\commands\\lockcmds.c\tMon Sep 30 17:06:55 2019\n>+++ lockcmds.c\tFri Nov 22 18:45:01 2019\n>@@ -285,7 +285,7 @@\n>\n> \tLockViewRecurse_walker((Node *) viewquery, &context);\n>\n>-\tancestor_views = list_delete_oid(ancestor_views, reloid);\n>+\tlist_delete_oid(ancestor_views, reloid);\n>\n> \ttable_close(view, NoLock);\n> }\n\n\nThis was already reworked in the master branch by commit d97b714a219.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 23:14:52 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Tiny optimization or is a bug?"
}
] |
[
{
"msg_contents": "Hi,\nRemove redutant test.\n\nbest regards.\nRanier Vilela\n\n--- \\dll\\postgresql-12.0\\a\\backend\\executor\\execExpr.c\tMon Sep 30 17:06:55 2019\n+++ execExpr.c\tFri Nov 22 18:50:32 2019\n@@ -2426,7 +2426,7 @@\n \t{\n \t\tdesc = parent->scandesc;\n \n-\t\tif (parent && parent->scanops)\n+\t\tif (parent->scanops)\n \t\t\ttts_ops = parent->scanops;\n \n \t\tif (parent->scanopsset)",
"msg_date": "Fri, 22 Nov 2019 21:58:55 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Tiny optimization."
},
{
"msg_contents": "On 11/22/19 10:58 PM, Ranier Vilela wrote:\n> Remove redutant test.\n\nYeah, this test does look redundant since we already check for if parent \nis NULL earlier in the function. Any optimizing compiler should see this \ntoo, but it is still a good idea to remove it for code clarity.\n\nAndreas\n\n\n",
"msg_date": "Sat, 23 Nov 2019 10:44:47 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Tiny optimization."
},
{
"msg_contents": "On Sat, Nov 23, 2019 at 10:44:47AM +0100, Andreas Karlsson wrote:\n> On 11/22/19 10:58 PM, Ranier Vilela wrote:\n> > Remove redutant test.\n> \n> Yeah, this test does look redundant since we already check for if parent is\n> NULL earlier in the function. Any optimizing compiler should see this too,\n> but it is still a good idea to remove it for code clarity.\n\nAgreed, patch applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 17 Dec 2019 20:37:33 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Tiny optimization."
}
] |
[
{
"msg_contents": "Hi,\nThis is real bug? firsttupleslot == NULL.\n\n\\backend\\executor\\nodeGroup.c\n\tif (TupIsNull(firsttupleslot))\n\t{\n\t\touterslot = ExecProcNode(outerPlanState(node));\n\t\tif (TupIsNull(outerslot))\n\t\t{\n\t\t\t/* empty input, so return nothing */\n\t\t\tnode->grp_done = true;\n\t\t\treturn NULL;\n\t\t}\n\t\t/* Copy tuple into firsttupleslot */\n\t\tExecCopySlot(firsttupleslot, outerslot);\n\ninclude\\executor\\tuptable.h:\n#define TupIsNull(slot) \\\n\t((slot) == NULL || TTS_EMPTY(slot))\n\nstatic inline TupleTableSlot *\nExecCopySlot(TupleTableSlot *dstslot, TupleTableSlot *srcslot)\n{\n\tAssert(!TTS_EMPTY(srcslot));\n\n\tdstslot->tts_ops->copyslot(dstslot, srcslot);\n\n\treturn dstslot;\n}\n\n\n",
"msg_date": "Fri, 22 Nov 2019 22:32:11 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[BUG] (firsttupleslot)==NULL is redundant or is possible null\n dereference?"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 10:32:11PM +0000, Ranier Vilela wrote:\n>Hi,\n>This is real bug? firsttupleslot == NULL.\n>\n\nRanier, I don't want to be rude, but I personally am getting a bit\nannoyed by this torrent of bug reports that are essentially just a bunch\nof copy-pasted chunks of code, without any specification of bench,\nposition in the file, etc.\n\nAnd more importantly, without any clear explanation why you think it is\na bug (or even a demonstration of an issue), and \"Is this a bug?\"\n\n>\\backend\\executor\\nodeGroup.c\n>\tif (TupIsNull(firsttupleslot))\n>\t{\n>\t\touterslot = ExecProcNode(outerPlanState(node));\n>\t\tif (TupIsNull(outerslot))\n>\t\t{\n>\t\t\t/* empty input, so return nothing */\n>\t\t\tnode->grp_done = true;\n>\t\t\treturn NULL;\n>\t\t}\n>\t\t/* Copy tuple into firsttupleslot */\n>\t\tExecCopySlot(firsttupleslot, outerslot);\n>\n>include\\executor\\tuptable.h:\n>#define TupIsNull(slot) \\\n>\t((slot) == NULL || TTS_EMPTY(slot))\n>\n>static inline TupleTableSlot *\n>ExecCopySlot(TupleTableSlot *dstslot, TupleTableSlot *srcslot)\n>{\n>\tAssert(!TTS_EMPTY(srcslot));\n>\n>\tdstslot->tts_ops->copyslot(dstslot, srcslot);\n>\n>\treturn dstslot;\n>}\n>\n\nAnd why do you think this is a bug? Immediately before the part of code\nyou copied we have this:\n\n /*\n * The ScanTupleSlot holds the (copied) first tuple of each group.\n */\n firsttupleslot = node->ss.ss_ScanTupleSlot;\n\nAnd node->ss.ss_ScanTupleSlot is expected to be non-NULL. So the initial\nassumption that firsttupleslot is NULL is incorrect.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 23:54:15 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] (firsttupleslot)==NULL is redundant or is possible null\n dereference?"
},
{
"msg_contents": "Hi,\nSorry, you are right.\nHad not seen this line:\nfirsttupleslot = node->ss.ss_ScanTupleSlot;\n\nBest regards.\nRanier Vilela\n________________________________________\nDe: Tomas Vondra <tomas.vondra@2ndquadrant.com>\nEnviado: sexta-feira, 22 de novembro de 2019 22:54\nPara: Ranier Vilela\nCc: pgsql-hackers@postgresql.org\nAssunto: Re: [BUG] (firsttupleslot)==NULL is redundant or is possible null dereference?\n\nOn Fri, Nov 22, 2019 at 10:32:11PM +0000, Ranier Vilela wrote:\n>Hi,\n>This is real bug? firsttupleslot == NULL.\n>\n\nRanier, I don't want to be rude, but I personally am getting a bit\nannoyed by this torrent of bug reports that are essentially just a bunch\nof copy-pasted chunks of code, without any specification of bench,\nposition in the file, etc.\n\nAnd more importantly, without any clear explanation why you think it is\na bug (or even a demonstration of an issue), and \"Is this a bug?\"\n\n>\\backend\\executor\\nodeGroup.c\n> if (TupIsNull(firsttupleslot))\n> {\n> outerslot = ExecProcNode(outerPlanState(node));\n> if (TupIsNull(outerslot))\n> {\n> /* empty input, so return nothing */\n> node->grp_done = true;\n> return NULL;\n> }\n> /* Copy tuple into firsttupleslot */\n> ExecCopySlot(firsttupleslot, outerslot);\n>\n>include\\executor\\tuptable.h:\n>#define TupIsNull(slot) \\\n> ((slot) == NULL || TTS_EMPTY(slot))\n>\n>static inline TupleTableSlot *\n>ExecCopySlot(TupleTableSlot *dstslot, TupleTableSlot *srcslot)\n>{\n> Assert(!TTS_EMPTY(srcslot));\n>\n> dstslot->tts_ops->copyslot(dstslot, srcslot);\n>\n> return dstslot;\n>}\n>\n\nAnd why do you think this is a bug? Immediately before the part of code\nyou copied we have this:\n\n /*\n * The ScanTupleSlot holds the (copied) first tuple of each group.\n */\n firsttupleslot = node->ss.ss_ScanTupleSlot;\n\nAnd node->ss.ss_ScanTupleSlot is expected to be non-NULL. 
So the initial\nassumption that firsttupleslot is NULL is incorrect.\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 22:57:13 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [BUG] (firsttupleslot)==NULL is redundant or is possible null\n dereference?"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 10:57:13PM +0000, Ranier Vilela wrote:\n>Hi,\n>Sorry, you are right.\n>Had not seen this line:\n>firsttupleslot = node->ss.ss_ScanTupleSlot;\n>\n\nOK, no problem. When writing future messages to this list, please\n\n* Make sure you explain why you think a given code is broken. Ideally,\n bug reports come with a reproducer (instructions how to hit it) but\n that may be difficult in some cases.\n\n* Don't top post, but respond in-line. Top posting makes it much harder\n to follow the discussion, in-line replies are customary here.\n\n* Don't mark questions as bugs in the subject.\n\nOtherwise you'll just annoy people to the extent that they'll start\nignoring your posts entirely.\n\nWe're OK with answering querstions and helping people learn the code\nbase, but the other side needs to make a bit of effort too.\n\nregards\n\n>Best regards.\n>Ranier Vilela\n>________________________________________\n>De: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>Enviado: sexta-feira, 22 de novembro de 2019 22:54\n>Para: Ranier Vilela\n>Cc: pgsql-hackers@postgresql.org\n>Assunto: Re: [BUG] (firsttupleslot)==NULL is redundant or is possible null dereference?\n>\n>On Fri, Nov 22, 2019 at 10:32:11PM +0000, Ranier Vilela wrote:\n>>Hi,\n>>This is real bug? 
firsttupleslot == NULL.\n>>\n>\n>Ranier, I don't want to be rude, but I personally am getting a bit\n>annoyed by this torrent of bug reports that are essentially just a bunch\n>of copy-pasted chunks of code, without any specification of bench,\n>position in the file, etc.\n>\n>And more importantly, without any clear explanation why you think it is\n>a bug (or even a demonstration of an issue), and \"Is this a bug?\"\n>\n>>\\backend\\executor\\nodeGroup.c\n>> if (TupIsNull(firsttupleslot))\n>> {\n>> outerslot = ExecProcNode(outerPlanState(node));\n>> if (TupIsNull(outerslot))\n>> {\n>> /* empty input, so return nothing */\n>> node->grp_done = true;\n>> return NULL;\n>> }\n>> /* Copy tuple into firsttupleslot */\n>> ExecCopySlot(firsttupleslot, outerslot);\n>>\n>>include\\executor\\tuptable.h:\n>>#define TupIsNull(slot) \\\n>> ((slot) == NULL || TTS_EMPTY(slot))\n>>\n>>static inline TupleTableSlot *\n>>ExecCopySlot(TupleTableSlot *dstslot, TupleTableSlot *srcslot)\n>>{\n>> Assert(!TTS_EMPTY(srcslot));\n>>\n>> dstslot->tts_ops->copyslot(dstslot, srcslot);\n>>\n>> return dstslot;\n>>}\n>>\n>\n>And why do you think this is a bug? Immediately before the part of code\n>you copied we have this:\n>\n> /*\n> * The ScanTupleSlot holds the (copied) first tuple of each group.\n> */\n> firsttupleslot = node->ss.ss_ScanTupleSlot;\n>\n>And node->ss.ss_ScanTupleSlot is expected to be non-NULL. So the initial\n>assumption that firsttupleslot is NULL is incorrect.\n>\n>regards\n>\n>--\n>Tomas Vondra http://www.2ndQuadrant.com\n>PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 23 Nov 2019 00:12:48 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] (firsttupleslot)==NULL is redundant or is possible null\n dereference?"
},
{
"msg_contents": ">And why do you think this is a bug? Immediately before the part of code\n>you copied we have this:\n>\n> /*\n> * The ScanTupleSlot holds the (copied) first tuple of each group.\n> */\n> firsttupleslot = node->ss.ss_ScanTupleSlot;\n>And node->ss.ss_ScanTupleSlot is expected to be non-NULL. So the initial\n>assumption that firsttupleslot is NULL is incorrect.\n\nIMHO, the test could be improved, this way it silences the scan tool.\n\n--- \\dll\\postgresql-12.0\\a\\backend\\executor\\nodeGroup.c\tMon Sep 30 17:06:55 2019\n+++ nodeGroup.c\tSat Nov 23 00:23:27 2019\n@@ -64,7 +64,7 @@\n \t * If first time through, acquire first input tuple and determine whether\n \t * to return it or not.\n \t */\n-\tif (TupIsNull(firsttupleslot))\n+ if ((firsttupleslot != NULL) && TTS_EMPTY(firsttupleslot))\n \t{\n \t\touterslot = ExecProcNode(outerPlanState(node));\n \t\tif (TupIsNull(outerslot))\n\nbest regards.\nRanier Vilela\n\n",
"msg_date": "Sat, 23 Nov 2019 03:38:23 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [BUG] (firsttupleslot)==NULL is redundant or is possible null\n dereference?"
}
] |
[
{
"msg_contents": "Hi,\nHi,\nMaybe this is a real bug.\n\nThe assignment has no effect, or forget dereferencing it?\n\nBest regards.\nRanier Vilela\n\n--- \\dll\\postgresql-12.0\\a\\backend\\optimizer\\plan\\initsplan.c\tMon Sep 30 17:06:55 2019\n+++ initsplan.c\tFri Nov 22 19:48:42 2019\n@@ -1718,7 +1718,7 @@\n \t\t\t\t\trelids =\n \t\t\t\t\t\tget_relids_in_jointree((Node *) root->parse->jointree,\n \t\t\t\t\t\t\t\t\t\t\t false);\n-\t\t\t\t\tqualscope = bms_copy(relids);\n+\t\t\t\t\tbms_copy(relids);\n \t\t\t\t}\n \t\t\t}\n \t\t}",
"msg_date": "Fri, 22 Nov 2019 23:06:53 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Tiny optimization or a bug?"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 11:06:53PM +0000, Ranier Vilela wrote:\n>Hi,\n>Hi,\n>Maybe this is a real bug.\n>\n>The assignment has no effect, or forget dereferencing it?\n>\n>Best regards.\n>Ranier Vilela\n>\n>--- \\dll\\postgresql-12.0\\a\\backend\\optimizer\\plan\\initsplan.c\tMon Sep 30 17:06:55 2019\n>+++ initsplan.c\tFri Nov 22 19:48:42 2019\n>@@ -1718,7 +1718,7 @@\n> \t\t\t\t\trelids =\n> \t\t\t\t\t\tget_relids_in_jointree((Node *) root->parse->jointree,\n> \t\t\t\t\t\t\t\t\t\t\t false);\n>-\t\t\t\t\tqualscope = bms_copy(relids);\n>+\t\t\t\t\tbms_copy(relids);\n> \t\t\t\t}\n> \t\t\t}\n> \t\t}\n\nSeriously, how are you searching for those \"issues\"?\n\n1) We're using qualscope in an assert about 100 lines down, and as coded\nwe need a copy of relids because that may be mutated (and reallocated to \na different pointer). So no, the assignment *has* effect.\n\n2) bms_copy(relids) on it's own is nonsensical, because it allocates a\ncopy but just throws the pointer away (why making the copy at all).\n\nHave you tried modifying this code and running the regression tests? If\nnot, try it.\n\n $ ./configure --enable-cassert\n $ make\n $ make check\n\nPlease, consider the suggestions from my previous response ...\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 23 Nov 2019 00:25:33 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Tiny optimization or a bug?"
},
{
"msg_contents": "Hi,\nI just wanted to help a little bit, sorry for the out balls.\n\nMaybe, I got one or two right.\n\nAnyway, thank you very much for your attention and patience.\n\nbest regards.\nRanier Vilela\n\n________________________________________\nDe: Tomas Vondra <tomas.vondra@2ndquadrant.com>\nEnviado: sexta-feira, 22 de novembro de 2019 23:25\nPara: Ranier Vilela\nCc: pgsql-hackers@postgresql.org\nAssunto: Re: [PATCH] Tiny optmization or a bug?\n\nOn Fri, Nov 22, 2019 at 11:06:53PM +0000, Ranier Vilela wrote:\n>Hi,\n>Hi,\n>Maybe this is a real bug.\n>\n>The assignment has no effect, or forget dereferencing it?\n>\n>Best regards.\n>Ranier Vilela\n>\n>--- \\dll\\postgresql-12.0\\a\\backend\\optimizer\\plan\\initsplan.c Mon Sep 30 17:06:55 2019\n>+++ initsplan.c Fri Nov 22 19:48:42 2019\n>@@ -1718,7 +1718,7 @@\n> relids =\n> get_relids_in_jointree((Node *) root->parse->jointree,\n> false);\n>- qualscope = bms_copy(relids);\n>+ bms_copy(relids);\n> }\n> }\n> }\n\nSeriously, how are you searching for those \"issues\"?\n\n1) We're using qualscope in an assert about 100 lines down, and as coded\nwe need a copy of relids because that may be mutated (and reallocated to\na different pointer). So no, the assignment *has* effect.\n\n2) bms_copy(relids) on it's own is nonsensical, because it allocates a\ncopy but just throws the pointer away (why making the copy at all).\n\nHave you tried modifying this code and running the regression tests? If\nnot, try it.\n\n $ ./configure --enable-cassert\n $ make\n $ make check\n\nPlease, consider the suggestions from my previous response ...\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 23:34:45 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Tiny optimization or a bug?"
}
] |
[
{
"msg_contents": "In connection with a different issue, I wrote:\n\n> * The point of calling asyncQueueReadAllNotifications in\n> Exec_ListenPreCommit is to advance over already-committed queue entries\n> before we start sending any events to the client. Without this, a\n> newly-listening client could be sent some very stale events. (Note\n> that 51004c717 changed this from \"somewhat stale\" to \"very stale\".\n\nIt suddenly strikes me to worry that we have an XID wraparound hazard\nfor entries in the notify queue. The odds of seeing a live problem\nwith that before 51004c717 were pretty minimal, but now that we don't\naggressively advance the queue tail, I think it's a very real risk for\nlow-notify-traffic installations. In the worst case, imagine\n\n* Somebody sends one NOTIFY, maybe just as a test.\n\n* Nothing happens for a couple of weeks, during which the XID counter\nadvances by 2 billion or so.\n\n* Newly-listening sessions will now think that that old event is\n\"in the future\", hence fail to advance over it, resulting in denial\nof service for new notify traffic. There is no recourse short of\na server restart or waiting another couple weeks for wraparound.\n\nI thought about fixing this by tracking the queue's oldest XID in\nthe shared queue info, and forcing a tail advance when that got\ntoo old --- but if nobody actively uses any listen or notify\nfeatures for awhile, no such code is going to execute, so the\nabove scenario could happen anyway.\n\nThe only bulletproof fix I can think of offhand is to widen the\nqueue entries to 64-bit XIDs, which is a tad annoying from a\nspace consumption standpoint. Possibly we could compromise by\nstoring the high-order bits just once per SLRU page (and then\nforcing an advance to a new page when those bits change).\n\nA somewhat less bulletproof fix is to detect far-in-the-future queue\nentries and discard them. 
That would prevent the freezeup scenario,\nbut there'd be a residual hazard of transmitting ancient\nnotifications to clients (if a queue entry survived for 4G\ntransactions not just 2G). But maybe that's OK, given the rather\ntiny probabilities involved, and the low guarantees around notify\nreliability in general. It'd be a much more back-patchable answer\nthan a queue format change, too.\n\nThoughts? I'm not really planning to work on this myself, but\nsomebody oughta.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 23 Nov 2019 11:34:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "XID-wraparound hazards in LISTEN/NOTIFY"
},
{
"msg_contents": "\n\nOn 11/23/19 8:34 AM, Tom Lane wrote:\n> In connection with a different issue, I wrote:\n> \n>> * The point of calling asyncQueueReadAllNotifications in\n>> Exec_ListenPreCommit is to advance over already-committed queue entries\n>> before we start sending any events to the client. Without this, a\n>> newly-listening client could be sent some very stale events. (Note\n>> that 51004c717 changed this from \"somewhat stale\" to \"very stale\".\n> \n> It suddenly strikes me to worry that we have an XID wraparound hazard\n> for entries in the notify queue. The odds of seeing a live problem\n> with that before 51004c717 were pretty minimal, but now that we don't\n> aggressively advance the queue tail, I think it's a very real risk for\n> low-notify-traffic installations. In the worst case, imagine\n> \n> * Somebody sends one NOTIFY, maybe just as a test.\n> \n> * Nothing happens for a couple of weeks, during which the XID counter\n> advances by 2 billion or so.\n> \n> * Newly-listening sessions will now think that that old event is\n> \"in the future\", hence fail to advance over it, resulting in denial\n> of service for new notify traffic. There is no recourse short of\n> a server restart or waiting another couple weeks for wraparound.\n\nIs it worth checking for this condition in autovacuum? Even for\ninstallations with autovacuum disabled, would the anti-wraparound\nvacuums happen frequently enough to also advance the tail if modified\nto test for this condition?\n\n> I thought about fixing this by tracking the queue's oldest XID in\n> the shared queue info, and forcing a tail advance when that got\n> too old --- but if nobody actively uses any listen or notify\n> features for awhile, no such code is going to execute, so the\n> above scenario could happen anyway.\n> \n> The only bulletproof fix I can think of offhand is to widen the\n> queue entries to 64-bit XIDs, which is a tad annoying from a\n> space consumption standpoint. 
Possibly we could compromise by\n> storing the high-order bits just once per SLRU page (and then\n> forcing an advance to a new page when those bits change).\n> \n> A somewhat less bulletproof fix is to detect far-in-the-future queue\n> entries and discard them. That would prevent the freezeup scenario,\n> but there'd be a residual hazard of transmitting ancient\n> notifications to clients (if a queue entry survived for 4G\n> transactions not just 2G). But maybe that's OK, given the rather\n> tiny probabilities involved, and the low guarantees around notify\n> reliability in general. It'd be a much more back-patchable answer\n> than a queue format change, too.\n\nThere shouldn't be too much reason to back-patch any of this, since\nthe change in 51004c717 only applies to v13 and onward. Or do you\nsee the risk you described as \"pretty minimal\" as still being large\nenough to outweigh the risk of anything we might back-patch?\n\n\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Sat, 23 Nov 2019 09:02:20 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID-wraparound hazards in LISTEN/NOTIFY"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> On 11/23/19 8:34 AM, Tom Lane wrote:\n>> It suddenly strikes me to worry that we have an XID wraparound hazard\n>> for entries in the notify queue.\n\n> Is it worth checking for this condition in autovacuum?\n\nDunno, maybe. It's a different avenue to consider, at least.\n\n> There shouldn't be too much reason to back-patch any of this, since\n> the change in 51004c717 only applies to v13 and onward. Or do you\n> see the risk you described as \"pretty minimal\" as still being large\n> enough to outweigh the risk of anything we might back-patch?\n\nThere may not be a risk large enough to worry about before 51004c717,\nassuming that we discount cases like a single session staying\nidle-in-transaction for long enough for the XID counter to wrap\n(which'd cause problems for more than just LISTEN/NOTIFY). I haven't\nanalyzed this carefully enough to be sure. We'd have to consider\nthat, as well as the complexity of whatever fix we choose for HEAD,\nwhile deciding if we need a back-patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 23 Nov 2019 12:10:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: XID-wraparound hazards in LISTEN/NOTIFY"
},
{
"msg_contents": "On Sat, Nov 23, 2019 at 12:10:56PM -0500, Tom Lane wrote:\n> Mark Dilger <hornschnorter@gmail.com> writes:\n> > On 11/23/19 8:34 AM, Tom Lane wrote:\n> >> It suddenly strikes me to worry that we have an XID wraparound hazard\n> >> for entries in the notify queue.\n> \n> > Is it worth checking for this condition in autovacuum?\n> \n> Dunno, maybe. It's a different avenue to consider, at least.\n> \n> > There shouldn't be too much reason to back-patch any of this, since\n> > the change in 51004c717 only applies to v13 and onward. Or do you\n> > see the risk you described as \"pretty minimal\" as still being large\n> > enough to outweigh the risk of anything we might back-patch?\n> \n> There may not be a risk large enough to worry about before 51004c717,\n> assuming that we discount cases like a single session staying\n> idle-in-transaction for long enough for the XID counter to wrap\n> (which'd cause problems for more than just LISTEN/NOTIFY). I haven't\n> analyzed this carefully enough to be sure. We'd have to consider\n> that, as well as the complexity of whatever fix we choose for HEAD,\n> while deciding if we need a back-patch.\n\nIs this still an open issue? Should it be a TODO item?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 8 Nov 2023 13:50:37 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: XID-wraparound hazards in LISTEN/NOTIFY"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Sat, Nov 23, 2019 at 12:10:56PM -0500, Tom Lane wrote:\n>>> It suddenly strikes me to worry that we have an XID wraparound hazard\n>>> for entries in the notify queue.\n\n> Is this still an open issue? Should it be a TODO item?\n\nI don't think anyone's done anything about it, so yeah.\n\nRealistically, if you've got NOTIFY messages that are going unread\nfor long enough to risk XID wraparound, your app is broken. So\nmaybe it'd be sufficient to discard messages that are old enough\nto approach the wrap horizon. But still that's code that doesn't\nexist.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Nov 2023 14:52:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: XID-wraparound hazards in LISTEN/NOTIFY"
},
{
"msg_contents": "On Wed, Nov 8, 2023 at 02:52:16PM -0500, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Nov 23, 2019 at 12:10:56PM -0500, Tom Lane wrote:\n> >>> It suddenly strikes me to worry that we have an XID wraparound hazard\n> >>> for entries in the notify queue.\n> \n> > Is this still an open issue? Should it be a TODO item?\n> \n> I don't think anyone's done anything about it, so yeah.\n> \n> Realistically, if you've got NOTIFY messages that are going unread\n> for long enough to risk XID wraparound, your app is broken. So\n> maybe it'd be sufficient to discard messages that are old enough\n> to approach the wrap horizon. But still that's code that doesn't\n> exist.\n\nThanks, TODO added.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 8 Nov 2023 15:49:59 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: XID-wraparound hazards in LISTEN/NOTIFY"
}
] |
[
{
"msg_contents": "I ran into a couple of issues while trying to devise a regression test\nillustrating the LISTEN-in-serializable-transaction issue Mark Dilger\nreported. The first one is that an isolation test in which we expect\nto see a cross-process NOTIFY immediately after a COMMIT turns out to\nbe not very stable: on my machine, it works as long as you're just\nrunning the isolation tests by themselves, but it usually falls over\nif I'm running check-world with any amount of parallelism. The reason\nfor this seems to be that incoming notifies are only checked for when\nwe're about to wait for client input. At that point we've already\nsent the ReadyForQuery ('Z') protocol message, which will cause libpq\nto stand down from looking for more input and return a null from\nPQgetResult(). Depending on timing, the following Notify protocol\nmessages might arrive quickly enough that isolationtester.c sees them\nbefore it goes off to do something else, but that's not very reliable.\n\nIn the case of self-notifies, postgres.c ensures that those get\ntransmitted to the frontend *before* ReadyForQuery, and this is what\nmakes self-notify cases stable enough to survive buildfarm testing.\n\nI'm a bit surprised, now that I've seen this effect, that the existing\ncross-session notify tests in async-notify.spec haven't given us\nproblems in the buildfarm. (Maybe, now that I just pushed those into\nthe back branches, we'll start to see some failures?) Anyway, what\nI propose to do about this is patch 0001 attached, which tweaks\npostgres.c to ensure that any cross-session notifies that arrived\nduring the just-finished transaction are also guaranteed to be sent\nto the client before, not after, ReadyForQuery.\n\nAnother thing that I discovered while testing this is that as of HEAD,\nyou can't run \"make installcheck\" for the isolation tests more than\nonce without restarting the server. 
If you do, you get a test result\nmismatch because the async-notify test's first invocation of\npg_notification_queue_usage() returns a positive value. Which is\nentirely unsurprising, because the previous iteration ensured that\nit would, and we've done nothing to make the queue tail advance since\nthen.\n\nThis seems both undesirable for our own testing, and rather bogus\nfrom users' standpoints as well. However, I think a simple fix is\navailable: just make the SQL pg_notification_queue_usage() function\nadvance the queue tail before measuring, as in 0002 below. This will\nrestore the behavior of that function to what it was before 51004c717,\nand it doesn't seem like it'd cost any performance in any plausible\nuse-cases.\n\n0002 is only needed in HEAD, but I'll have to back-patch 0001 as\nfar as 9.6, to support a test case for the problem Mark discovered\nand to ensure that back-patching b10f40bf0 doesn't cause any issues.\n\nBTW, the fix and test case for Mark's issue look like 0003. Without\nthe 0001 patch, it's unstable exactly when the \"listener2: NOTIFY \"c1\"\nwith payload \"\" from notifier\" message comes out. But modulo that\nissue, this test case reliably shows the assertion failure in the\nback branches.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 23 Nov 2019 20:01:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "LISTEN/NOTIFY testing woes"
},
{
"msg_contents": "\n\nOn 11/23/19 5:01 PM, Tom Lane wrote:\n> I ran into a couple of issues while trying to devise a regression test\n> illustrating the LISTEN-in-serializable-transaction issue Mark Dilger\n> reported. The first one is that an isolation test in which we expect\n> to see a cross-process NOTIFY immediately after a COMMIT turns out to\n> be not very stable: on my machine, it works as long as you're just\n> running the isolation tests by themselves, but it usually falls over\n> if I'm running check-world with any amount of parallelism. The reason\n> for this seems to be that incoming notifies are only checked for when\n> we're about to wait for client input. At that point we've already\n> sent the ReadyForQuery ('Z') protocol message, which will cause libpq\n> to stand down from looking for more input and return a null from\n> PQgetResult(). Depending on timing, the following Notify protocol\n> messages might arrive quickly enough that isolationtester.c sees them\n> before it goes off to do something else, but that's not very reliable.\n\nThanks for working on this, Tom.\n\nI have finished reading and applying your three patches and have moved \non to testing them. I hope to finish the review soon.\n\n\n\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Sat, 23 Nov 2019 20:50:52 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LISTEN/NOTIFY testing woes"
},
{
"msg_contents": "Hoi Tom,\n\nOn Sun, 24 Nov 2019 at 02:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> This seems both undesirable for our own testing, and rather bogus\n> from users' standpoints as well. However, I think a simple fix is\n> available: just make the SQL pg_notification_queue_usage() function\n> advance the queue tail before measuring, as in 0002 below. This will\n> restore the behavior of that function to what it was before 51004c717,\n> and it doesn't seem like it'd cost any performance in any plausible\n> use-cases.\n\nThis was one of those open points in the previous patches where it\nwasn't quite clear what the correct behaviour should be. This fixes\nthe issue, but the question in my mind is: do we want to document this\nfact and can users rely on this behaviour? If we go with the argument\nthat the delay in cleaning up should be entirely invisible, then I\nguess this patch is the correct one that makes the made changes\ninvisible. Arguably not doing this means we'd have to document the\nvalues are possibly out of date.\n\nSo I think this patch does the right thing.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/\n\n\n",
"msg_date": "Sun, 24 Nov 2019 14:19:39 +0100",
"msg_from": "Martijn van Oosterhout <kleptog@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LISTEN/NOTIFY testing woes"
},
{
"msg_contents": "Martijn van Oosterhout <kleptog@gmail.com> writes:\n> On Sun, 24 Nov 2019 at 02:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This seems both undesirable for our own testing, and rather bogus\n>> from users' standpoints as well. However, I think a simple fix is\n>> available: just make the SQL pg_notification_queue_usage() function\n>> advance the queue tail before measuring, as in 0002 below. This will\n>> restore the behavior of that function to what it was before 51004c717,\n>> and it doesn't seem like it'd cost any performance in any plausible\n>> use-cases.\n\n> This was one of those open points in the previous patches where it\n> wasn't quite clear what the correct behaviour should be. This fixes\n> the issue, but the question in my mind is: do we want to document this\n> fact and can users rely on this behaviour? If we go with the argument\n> that the delay in cleaning up should be entirely invisible, then I\n> guess this patch is the correct one that makes the made changes\n> invisible. Arguably not doing this means we'd have to document the\n> values are possibly out of date.\n\n> So I think this patch does the right thing.\n\nThanks for looking! In the light of morning, there's one other\nthing bothering me about this patch: it means that the function has\nside-effects, even though those effects are at the implementation\nlevel and shouldn't be user-visible. We do already have it marked\n\"volatile\", so that's OK, but I notice that it's not parallel\nrestricted. The isolation test still passes when I set\nforce_parallel_mode = regress, so apparently it works to run this\ncode in a parallel worker, but that seems pretty scary to me;\ncertainly nothing in async.c was written with that in mind.\nI think we'd be well advised to adjust pg_proc.dat to mark\npg_notification_queue_usage() as parallel-restricted, so that\nit only executes in the main session process. 
It's hard to\nsee any use-case for parallelizing it that would justify even\na small chance of new bugs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Nov 2019 10:25:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LISTEN/NOTIFY testing woes"
},
{
"msg_contents": "\n\nOn 11/23/19 8:50 PM, Mark Dilger wrote:\n> \n> \n> On 11/23/19 5:01 PM, Tom Lane wrote:\n>> I ran into a couple of issues while trying to devise a regression test\n>> illustrating the LISTEN-in-serializable-transaction issue Mark Dilger\n>> reported.� The first one is that an isolation test in which we expect\n>> to see a cross-process NOTIFY immediately after a COMMIT turns out to\n>> be not very stable: on my machine, it works as long as you're just\n>> running the isolation tests by themselves, but it usually falls over\n>> if I'm running check-world with any amount of parallelism.� The reason\n>> for this seems to be that incoming notifies are only checked for when\n>> we're about to wait for client input.� At that point we've already\n>> sent the ReadyForQuery ('Z') protocol message, which will cause libpq\n>> to stand down from looking for more input and return a null from\n>> PQgetResult().� Depending on timing, the following Notify protocol\n>> messages might arrive quickly enough that isolationtester.c sees them\n>> before it goes off to do something else, but that's not very reliable.\n> \n> Thanks for working on this, Tom.\n> \n> I have finished reading and applying your three patches and have moved \n> on to testing them.� I hope to finish the review soon.\n\nAfter applying all three patches, the stress test that originally\nuncovered the assert in predicate.c no longer triggers any asserts.\n`check-world` passes. The code and comments look good.\n\nYour patches are ready for commit.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Sun, 24 Nov 2019 10:25:57 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LISTEN/NOTIFY testing woes"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> On 11/23/19 8:50 PM, Mark Dilger wrote:\n>> I have finished reading and applying your three patches and have moved \n>> on to testing them. I hope to finish the review soon.\n\n> After applying all three patches, the stress test that originally\n> uncovered the assert in predicate.c no longer triggers any asserts.\n> `check-world` passes. The code and comments look good.\n\nThanks for reviewing!\n\nAfter sleeping on it, I'm not really happy with what I did in\nPrepareTransaction (that is, invent a separate PrePrepare_Notify\nfunction). The idea was to keep that looking parallel to what\nCommitTransaction does, and preserve infrastructure against the\nday that somebody gets motivated to allow LISTEN or NOTIFY in\na prepared transaction. But on second thought, what would surely\nhappen when that feature gets added is just that AtPrepare_Notify\nwould serialize the pending LISTEN/NOTIFY actions into the 2PC\nstate file. There wouldn't be any need for PrePrepare_Notify,\nso there's no point in introducing that. I'll just move the\ncomment saying that nothing has to happen at that point for NOTIFY.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Nov 2019 13:39:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LISTEN/NOTIFY testing woes"
},
{
"msg_contents": "\n\nOn 11/24/19 10:39 AM, Tom Lane wrote:\n> Mark Dilger <hornschnorter@gmail.com> writes:\n>> On 11/23/19 8:50 PM, Mark Dilger wrote:\n>>> I have finished reading and applying your three patches and have moved\n>>> on to testing them. I hope to finish the review soon.\n> \n>> After applying all three patches, the stress test that originally\n>> uncovered the assert in predicate.c no longer triggers any asserts.\n>> `check-world` passes. The code and comments look good.\n> \n> Thanks for reviewing!\n> \n> After sleeping on it, I'm not really happy with what I did in\n> PrepareTransaction (that is, invent a separate PrePrepare_Notify\n> function). The idea was to keep that looking parallel to what\n> CommitTransaction does, and preserve infrastructure against the\n> day that somebody gets motivated to allow LISTEN or NOTIFY in\n> a prepared transaction. But on second thought, what would surely\n> happen when that feature gets added is just that AtPrepare_Notify\n> would serialize the pending LISTEN/NOTIFY actions into the 2PC\n> state file. There wouldn't be any need for PrePrepare_Notify,\n> so there's no point in introducing that. I'll just move the\n> comment saying that nothing has to happen at that point for NOTIFY.\n\nI looked at that. I thought it was an interesting decision to\nfactor out that error to its own function while leaving a\nsimilar error inline just a little below in xact.c:\n\n if ((MyXactFlags & XACT_FLAGS_ACCESSEDTEMPNAMESPACE))\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"cannot PREPARE a transaction that has operated \non temporary objects\")));\n\nI assumed you had factored it out in anticipation of supporting notify\nhere in the future. 
If you want to backtrack that decision and leave it\ninline, you would still keep the test rather than just a comment, right?\nIt sounds like you intend to let AtPrepare_Notify catch the problem\nlater, but since that's just an Assert and not an ereport(ERROR), that\nprovides less error checking for non-assert builds.\n\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Sun, 24 Nov 2019 11:01:04 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LISTEN/NOTIFY testing woes"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> On 11/24/19 10:39 AM, Tom Lane wrote:\n>> After sleeping on it, I'm not really happy with what I did in\n>> PrepareTransaction (that is, invent a separate PrePrepare_Notify\n>> function). The idea was to keep that looking parallel to what\n>> CommitTransaction does, and preserve infrastructure against the\n>> day that somebody gets motivated to allow LISTEN or NOTIFY in\n>> a prepared transaction. But on second thought, what would surely\n>> happen when that feature gets added is just that AtPrepare_Notify\n>> would serialize the pending LISTEN/NOTIFY actions into the 2PC\n>> state file. There wouldn't be any need for PrePrepare_Notify,\n>> so there's no point in introducing that. I'll just move the\n>> comment saying that nothing has to happen at that point for NOTIFY.\n\n> I assumed you had factored it out in anticipation of supporting notify\n> here in the future. If you want to backtrack that decision and leave it\n> inline, you would still keep the test rather than just a comment, right?\n\nNo, there wouldn't be any error condition; that's just needed because the\nfeature isn't implemented yet. So I'll leave that alone; the only thing\nthat needs to happen now in the PREPARE code path is to adjust the one\ncomment.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Nov 2019 14:04:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LISTEN/NOTIFY testing woes"
},
{
"msg_contents": "\n\nOn 11/24/19 11:04 AM, Tom Lane wrote:\n> Mark Dilger <hornschnorter@gmail.com> writes:\n>> On 11/24/19 10:39 AM, Tom Lane wrote:\n>>> After sleeping on it, I'm not really happy with what I did in\n>>> PrepareTransaction (that is, invent a separate PrePrepare_Notify\n>>> function). The idea was to keep that looking parallel to what\n>>> CommitTransaction does, and preserve infrastructure against the\n>>> day that somebody gets motivated to allow LISTEN or NOTIFY in\n>>> a prepared transaction. But on second thought, what would surely\n>>> happen when that feature gets added is just that AtPrepare_Notify\n>>> would serialize the pending LISTEN/NOTIFY actions into the 2PC\n>>> state file. There wouldn't be any need for PrePrepare_Notify,\n>>> so there's no point in introducing that. I'll just move the\n>>> comment saying that nothing has to happen at that point for NOTIFY.\n> \n>> I assumed you had factored it out in anticipation of supporting notify\n>> here in the future. If you want to backtrack that decision and leave it\n>> inline, you would still keep the test rather than just a comment, right?\n> \n> No, there wouldn't be any error condition; that's just needed because the\n> feature isn't implemented yet. So I'll leave that alone; the only thing\n> that needs to happen now in the PREPARE code path is to adjust the one\n> comment.\n\nOk.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Sun, 24 Nov 2019 11:24:35 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LISTEN/NOTIFY testing woes"
}
] |
[
{
"msg_contents": "Hi,\nFix function declaration .\n\nBest regards,\nRanier Vilela\n\n--- \\dll\\postgresql\\a\\backend\\utils\\adt\\mac8.c\t2019-11-23 13:19:20.000000000 -0300\n+++ mac8.c\t2019-11-24 09:41:34.200458700 -0300\n@@ -35,7 +35,7 @@\n #define lobits(addr) \\\n ((unsigned long)(((addr)->e<<24) | ((addr)->f<<16) | ((addr)->g<<8) | ((addr)->h)))\n \n-static unsigned char hex2_to_uchar(const unsigned char *str, const unsigned char *ptr);\n+static unsigned char hex2_to_uchar(const unsigned char *ptr, const unsigned char *str);\n \n static const signed char hexlookup[128] = {\n \t-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,",
"msg_date": "Sun, 24 Nov 2019 12:47:40 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Style: fix function declaration"
},
{
"msg_contents": "Hi,\n\nOn Sun, Nov 24, 2019 at 12:47:40PM +0000, Ranier Vilela wrote:\n> Fix function declaration .\n\nI see no problem with fixing this kind of inconsistency for\nreadability, so applied the change.\n\nAnyway, when sending a patch there are a couple of things which can\nmake the life of people looking at what you send easier:\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch\n\nOne problem that I noted with the patch sent on this thread is that it\ndoes not directly apply on the git repository. Folks on -hackers are\nmainly used to diffs generated by git. So first I would recommend\nthat you set up a git repository of the tree, say with that:\ngit clone https://git.postgresql.org/git/postgresql.git\n\nAnd then you can begin working on the code. On Windows, using git is\na rather straight-foward experience (I have used it and still use it\noccasionally because it has its own concept of *nix-like terminal):\nhttps://git-scm.com/download/win\n\nMost people use a *nix platform, with either macos, Linux, a BSD\nflavor (NetBSD, FreeBSD), etc. Still there are Windows users.\nBuilding the code can be harder than other platforms, but we have\ndocumentation on the matter:\nhttps://www.postgresql.org/docs/devel/install-windows.html\n\nGenerating a patch can be done with git in a couple of ways from the\ncloned repository, say:\n1) git diff\n2) git format-patch\nBoth can be applied with a simple \"patch -p1\" command or even the more\nadvanced \"git am\", still the latter is kind of picky.\n\nThe code of Postgres is complex, so usually there are reasons why\nthings are done the way they are, and it is important to not be afraid\nto ask questions. Also, making the subject of the emails you send\nexplicative enough is important. Please note pgsql-hackers has a lot\nof traffic, and this helps some people in filtering out threads they\nare not interested in.\n\nThanks!\n--\nMichael",
"msg_date": "Mon, 25 Nov 2019 10:05:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Style: fix function declaration"
}
] |
[
{
"msg_contents": "Hi,\nAccording to specification of scanf: %x argument must be unsigned.\nhttp://www.cplusplus.com/reference/cstdio/scanf/\nI think that sscanf must follow scanf specification.\n\nBest regards.\nRanier Vilela\n\n--- \\dll\\postgresql\\a\\backend\\utils\\adt\\mac.c\t2019-11-23 13:19:20.000000000 -0300\n+++ mac.c\t2019-11-24 09:49:01.737639100 -0300\n@@ -57,7 +57,7 @@\n {\n \tchar\t *str = PG_GETARG_CSTRING(0);\n \tmacaddr *result;\n-\tint\t\t\ta,\n+\tunsigned int a,\n \t\t\t\tb,\n \t\t\t\tc,\n \t\t\t\td,",
"msg_date": "Sun, 24 Nov 2019 13:00:10 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Fix var declaration according scanf specification,"
}
] |
[
{
"msg_contents": "Hi,\nOf course, I don't know if it's the best solution, but it's the most obvious.\nOr the test at line 3326 is irrelavant.\n\n\\backend\\tcop\\postgres.c\n\tif (stack_depth > max_stack_depth_bytes &&\n\t\tstack_base_ptr != NULL)\n\t\treturn true;\n\nOtherwise, if is relevant, substraction with NULL pointer is technically,undefined behavior..\n\nBest regards.\nRanier Vilela\n\n--- \\dll\\postgresql\\a\\backend\\tcop\\postgres.c\t2019-11-23 13:19:20.000000000 -0300\n+++ postgres.c\t2019-11-24 11:13:34.131437500 -0300\n@@ -3303,7 +3303,10 @@\n \t/*\n \t * Compute distance from reference point to my local variables\n \t */\n-\tstack_depth = (long) (stack_base_ptr - &stack_top_loc);\n+\tif (stack_base_ptr != NULL)\n+\t stack_depth = (long) (stack_base_ptr - &stack_top_loc);\n+\telse\n+\t stack_depth = (long) &stack_top_loc;\n \n \t/*\n \t * Take abs value, since stacks grow up on some machines, down on others",
"msg_date": "Sun, 24 Nov 2019 14:23:38 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Possible arithmetic with NULL pointer or test \"stack_base_ptr\n != NULL\" is irrelevant."
},
{
"msg_contents": "Ranier Vilela <ranier_gyn@hotmail.com> writes:\n> Of course, I don't know if it's the best solution, but it's the most obvious.\n> Or the test at line 3326 is irrelavant.\n\n> \\backend\\tcop\\postgres.c\n> \tif (stack_depth > max_stack_depth_bytes &&\n> \t\tstack_base_ptr != NULL)\n> \t\treturn true;\n\n> Otherwise, if is relevant, substraction with NULL pointer is technically,undefined behavior..\n\n[ shrug... ] Stack overflow in itself is outside the realm of the C\nspecification. Also, if you want to get nitty-gritty about it,\nI believe that the standard only promises defined results from the\nsubtraction of two pointers that point to elements of the same array\nobject. So the change you propose isn't going to make it any closer\nto adhering to the letter of \"defined-ness\". In practice, this code\nworks fine on every platform that Postgres is ever likely to support,\nso I see no need to change it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Nov 2019 11:40:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Possible arithmetic with NULL pointer or test\n \"stack_base_ptr != NULL\" is irrelevant."
},
{
"msg_contents": ">In practice, this code\n>works fine on every platform that Postgres is ever likely to support,\n>so I see no need to change it.\n\nOf course, I trust your judgment.\nThank you for the review.\n\nBest regards,\nRanier Vilela\n\n",
"msg_date": "Sun, 24 Nov 2019 18:06:33 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Possible arithmetic with NULL pointer or test\n \"stack_base_ptr != NULL\" is irrelevant."
}
] |
[
{
"msg_contents": "Hi,\nThe test \"if (zeropadlen > 0)\" is redundant and can be salely removed.\nIt has already been tested in the same path.\n\nBest regards,\nRanier Vilela\n\n--- \\dll\\postgresql\\a\\port\\snprintf.c\t2019-11-23 13:19:20.000000000 -0300\n+++ snprintf.c\t2019-11-24 13:02:45.510806400 -0300\n@@ -1227,16 +1227,14 @@\n \t\t{\n \t\t\t/* pad before exponent */\n \t\t\tdostr(convert, epos - convert, target);\n-\t\t\tif (zeropadlen > 0)\n-\t\t\t\tdopr_outchmulti('0', zeropadlen, target);\n+\t\t\tdopr_outchmulti('0', zeropadlen, target);\n \t\t\tdostr(epos, vallen - (epos - convert), target);\n \t\t}\n \t\telse\n \t\t{\n \t\t\t/* no exponent, pad after the digits */\n \t\t\tdostr(convert, vallen, target);\n-\t\t\tif (zeropadlen > 0)\n-\t\t\t\tdopr_outchmulti('0', zeropadlen, target);\n+\t\t\tdopr_outchmulti('0', zeropadlen, target);\n \t\t}\n \t}\n \telse",
"msg_date": "Sun, 24 Nov 2019 16:12:10 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Style, remove redudant test \"if (zeropadlen > 0)\""
},
{
"msg_contents": "\n\nOn 11/24/19 8:12 AM, Ranier Vilela wrote:\n> Hi,\n> The test \"if (zeropadlen > 0)\" is redundant and can be salely removed.\n> It has already been tested in the same path.\n\nI have not tested your patch, but it looks right to me.\n\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Sun, 24 Nov 2019 08:17:55 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Style, remove redudant test \"if (zeropadlen > 0)\""
},
{
"msg_contents": ">I have not tested your patch, but it looks right to me.\nThanks for review.\n\nBest regards.\nRanier Vilela\n\n",
"msg_date": "Sun, 24 Nov 2019 16:27:01 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Style, remove redudant test \"if (zeropadlen > 0)\""
},
{
"msg_contents": "On Sun, Nov 24, 2019 at 8:12 AM Ranier Vilela <ranier_gyn@hotmail.com>\nwrote:\n\n> Hi,\n> The test \"if (zeropadlen > 0)\" is redundant and can be salely removed.\n> It has already been tested in the same path.\n>\n> Best regards,\n> Ranier Vilela\n>\n> --- \\dll\\postgresql\\a\\port\\snprintf.c 2019-11-23 13:19:20.000000000 -0300\n> +++ snprintf.c 2019-11-24 13:02:45.510806400 -0300\n>\n\nCould you please at least take the time to produce a patch that actually\napplies properly?\n\nIf the patch does not have the proper path from the root of the source tree\nthan it is completely worthless to most folks because it's really not\nappropriate to ask someone to fix your patch when the tools are clearly\navailable to properly produce a patch without any issue.\n\nSpecifically git diff does this without issue.\n\nThanks in advance\n\nJohn\n\nOn Sun, Nov 24, 2019 at 8:12 AM Ranier Vilela <ranier_gyn@hotmail.com> wrote:Hi,\nThe test \"if (zeropadlen > 0)\" is redundant and can be salely removed.\nIt has already been tested in the same path.\n\nBest regards,\nRanier Vilela\n\n--- \\dll\\postgresql\\a\\port\\snprintf.c 2019-11-23 13:19:20.000000000 -0300\n+++ snprintf.c 2019-11-24 13:02:45.510806400 -0300Could you please at least take the time to produce a patch that actually applies properly? If the patch does not have the proper path from the root of the source tree than it is completely worthless to most folks because it's really not appropriate to ask someone to fix your patch when the tools are clearly available to properly produce a patch without any issue.Specifically git diff does this without issue.Thanks in advanceJohn",
"msg_date": "Sun, 24 Nov 2019 08:30:53 -0800",
"msg_from": "John W Higgins <wishdev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Style, remove redudant test \"if (zeropadlen > 0)\""
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> On 11/24/19 8:12 AM, Ranier Vilela wrote:\n>> The test \"if (zeropadlen > 0)\" is redundant and can be salely removed.\n>> It has already been tested in the same path.\n\n> I have not tested your patch, but it looks right to me.\n\nAgreed, seems like an oversight in an old patch of mine. Pushed.\n\nI concur with John's nearby complaint that you're not submitting\ndiffs in a useful format. The file paths are weird. Also, it's\ngenerally proven to be a good idea to send diffs as attachments,\nnot embedded in-line in the email --- in-line text is far too\nprone to get mangled by assorted mail programs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Nov 2019 12:06:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Style, remove redudant test \"if (zeropadlen > 0)\""
},
{
"msg_contents": ">Could you please at least take the time to produce a patch that actually applies properly?\nYes of course.\nThank you.\n\nRanier Vilela\n\n",
"msg_date": "Sun, 24 Nov 2019 17:33:19 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Style, remove redudant test \"if (zeropadlen > 0)\""
}
] |
[
{
"msg_contents": "Hi,\nThe var OffsetNumber maxoff it's like uint16, see at include/storage/off.h\ntypedef uint16 OffsetNumber;\n\nWithin the function _bt_afternewitemoff, at line 641, maxoff is used in an dangerous expression,\nwithout protection.: (maxoff - 1)\n\nThe function: PageGetMaxOffsetNumber that initializes maxoff, can return zero.\nSee at storage/bufpage.h\n * PageGetMaxOffsetNumber\n *\t\tReturns the maximum offset number used by the given page.\n *\t\tSince offset numbers are 1-based, this is also the number\n *\t\tof items on the page.\n *\n *\t\tNOTE: if the page is not initialized (pd_lower == 0), we must\n *\t\treturn zero to ensure sane behavior. Accept double evaluation\n *\t\tof the argument so that we can ensure this.\n\nSurely not the best solution, but it was the best I could think of.\n\nbest regards.\nRanier Vilela",
"msg_date": "Sun, 24 Nov 2019 17:58:51 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Fix possible underflow in expression (maxoff - 1)"
},
{
"msg_contents": "On Sun, Nov 24, 2019 at 9:58 AM Ranier Vilela <ranier_gyn@hotmail.com> wrote:\n> Within the function _bt_afternewitemoff, at line 641, maxoff is used in an dangerous expression,\n> without protection.: (maxoff - 1)\n\nI wrote this code. It's safe.\n\nIn general, it's not possible to split a page without it being\ninitialized, and having at least 2 items (not including the incoming\nnewitem). Besides, even if \"maxoff\" had an integer underflow the\nbehavior of the function would still be sane and defined. OffsetNumber\nis an unsigned type.\n\nWhere are you getting this stuff from? Are you using a static analysis tool?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 24 Nov 2019 11:07:37 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix possible underflow in expression (maxoff - 1)"
},
{
"msg_contents": ">In general, it's not possible to split a page without it being\n>initialized, and having at least 2 items (not including the incoming\n>newitem). Besides, even if \"maxoff\" had an integer underflow the\n>behavior of the function would still be sane and defined. OffsetNumber\n>is an unsigned type.\nWell, I didn't mean that it's failing..I meant it could fail..\nIf PageGetMaxOffsetNumber, can return zero, maxoff can be zero.\n(0 - 1), on unsigned type, certainly is underflow and if maxoff can be one,\n(1 - 1) is zero, and state->newitemsz * (maxoff - 1), is zero.\n\n>Where are you getting this stuff from? Are you using a static analysis tool?\nYes,two static tools, but reviewed by me.\n\nBest regards.\nRanier Vilela\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 24 Nov 2019 19:21:06 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Fix possible underflow in expression (maxoff - 1)"
},
{
"msg_contents": "On Sun, Nov 24, 2019 at 11:21 AM Ranier Vilela <ranier_gyn@hotmail.com> wrote:\n> >In general, it's not possible to split a page without it being\n> >initialized, and having at least 2 items (not including the incoming\n> >newitem). Besides, even if \"maxoff\" had an integer underflow the\n> >behavior of the function would still be sane and defined. OffsetNumber\n> >is an unsigned type.\n> Well, I didn't mean that it's failing..I meant it could fail..\n> If PageGetMaxOffsetNumber, can return zero, maxoff can be zero.\n> (0 - 1), on unsigned type, certainly is underflow and if maxoff can be one,\n> (1 - 1) is zero, and state->newitemsz * (maxoff - 1), is zero.\n\nI think that you're being far too optimistic about your ability to\ndetect and report valid issues using these static analysis tools. It's\nnot possible to apply the information they provide without a high\nlevel understanding of the design of the code. There are already quite\na few full time Postgres hackers that use tools like Coverity all the\ntime.\n\nWhile it's certainly true that PageGetMaxOffsetNumber cannot in\ngeneral be trusted to be > 0, we're talking about code that exists to\ndeal with pages that are already full, and need to be split. It is\nimpossible for \"maxoff\" to underflow, even if you deliberately corrupt\na page image using a tool like pg_hexedit. Even if we failed to be\nsufficiently defensive about such a case (which is not the case), it\nwouldn't make any sense to fix it in this specific esoteric function,\nwhich is called when we've already decided to split the page (but only\nsometimes). Sanitization needs to happen at some central choke point.\n\n> Yes,two static tools, but reviewed by me.\n\nI strongly suggest confining all of this to a single thread, and\nstating your reasoning upfront.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 24 Nov 2019 11:40:09 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix possible underflow in expression (maxoff - 1)"
},
{
"msg_contents": ">I think that you're being far too optimistic about your ability to\n>detect and report valid issues using these static analysis tools. It's\n>not possible to apply the information they provide without a high\nl>evel understanding of the design of the code. There are already quite\n>a few full time Postgres hackers that use tools like Coverity all the\n>time.\nI've been programming in C for a long time, and I'm getting better every day, I believe.\nI'll arrive there.\n\n>While it's certainly true that PageGetMaxOffsetNumber cannot in\n>general be trusted to be > 0, we're talking about code that exists to\n>deal with pages that are already full, and need to be split. It is\n>impossible for \"maxoff\" to underflow, even if you deliberately corrupt\n>a page image using a tool like pg_hexedit. Even if we failed to be\n>sufficiently defensive about such a case (which is not the case), it\n>wouldn't make any sense to fix it in this specific esoteric function,\n>which is called when we've already decided to split the page (but only\n>sometimes).\nAt this point you are right. I hope that in the future anyone who will use _bt_afternewitemoff will remember this hidden danger.\n\n> Sanitization needs to happen at some central choke point.\nSurely that would be the best solution. But this is not a function of a static analysis tool.\n\n>I strongly suggest confining all of this to a single thread, and\n>stating your reasoning upfront.\nI don't know what that means.\n\nBest regards.\nRanier Vilela\n\n",
"msg_date": "Sun, 24 Nov 2019 20:02:50 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Fix possible underflow in expression (maxoff - 1)"
},
{
"msg_contents": "On Sun, Nov 24, 2019 at 12:02 PM Ranier Vilela <ranier_gyn@hotmail.com> wrote:\n> I've been programming in C for a long time, and I'm getting better every day, I believe.\n> I'll arrive there.\n\nIf you don't understand the *specific* C code in question, you're\nunlikely to successfully diagnose a problem with the C code.\nRegardless of your general ability as a C programmer. It is necessary\nto understand the data structures in question, and how they're used\nand expected to work. Their invariants.\n\n> >I strongly suggest confining all of this to a single thread, and\n> >stating your reasoning upfront.\n> I don't know what that means.\n\nInstead of starting new email threads for each issue, confine the\nentire discussion to just one thread. This makes the discussion much\nmore manageable for everyone else. This is a high traffic mailing\nlist.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 24 Nov 2019 12:11:12 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix possible underflow in expression (maxoff - 1)"
},
{
"msg_contents": "On Mon, Nov 25, 2019 at 8:21 AM Ranier Vilela <ranier_gyn@hotmail.com> wrote:\n> >Where are you getting this stuff from? Are you using a static analysis tool?\n\n> Yes,two static tools, but reviewed by me.\n\nIf you're working on/with static code analysis tools, I have some\nrequests :-) How could we automate the discovery of latch wait\nprogramming mistakes?\n\n\n",
"msg_date": "Wed, 18 Dec 2019 13:18:18 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix possible underflow in expression (maxoff - 1)"
},
{
"msg_contents": "De: Thomas Munro <thomas.munro@gmail.com>\nEnviado: quarta-feira, 18 de dezembro de 2019 00:18\n\n>If you're working on/with static code analysis tools, I have some\n>requests :-) How could we automate the discovery of latch wait\n>programming mistakes?\nI doubt that static analysis can help with this problem.\nThis seems to me more like a high logic problem. Static tools are good at discovering flaws as uninitialized variable.\nIn a quick research I did on the subject, I found that sql queries specifically made can reveal latch wait.\nSo my suggestion for automating would be, if don't already have it, include a test class in regression testing:\nmake latch\nStarting from a baseline (v12.1), which would generate an expected amount of latchs, as soon as the reviewer applied a patch that might touch buffer pages, it could run the test suite.\nOnce the result showed a significant increase in the number of latches, it would be a warning that something is not good in the patch.\nUnfortunately, that would not show where in the code the problem would be.\n\nregards,\nRanier Vilela\n\n",
"msg_date": "Wed, 18 Dec 2019 10:13:03 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Fix possible underflow in expression (maxoff - 1)"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen reading the syntax documentation for SELECT I noticed a missing \nspace in \"[ OVERRIDING { SYSTEM | USER} VALUE ]\".\n\nAndreas",
"msg_date": "Sun, 24 Nov 2019 20:53:37 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": true,
"msg_subject": "Minor white space typo in documentation"
},
{
"msg_contents": "On Mon, Nov 25, 2019 at 8:53 AM Andreas Karlsson <andreas@proxel.se> wrote:\n> When reading the syntax documentation for SELECT I noticed a missing\n> space in \"[ OVERRIDING { SYSTEM | USER} VALUE ]\".\n\nRight. Pushed.\n\n\n",
"msg_date": "Mon, 25 Nov 2019 09:29:25 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Minor white space typo in documentation"
}
] |
[
{
"msg_contents": "Hi,\n\n>I see no problem with fixing this kind of inconsistency for\n>readability, so applied the change.\nThank you.\n\n>Anyway, when sending a patch there are a couple of things which can\n>make the life of people looking at what you send easier:\n>https://wiki.postgresql.org/wiki/Submitting_a_Patch\nYes, I will read.\n\n>One problem that I noted with the patch sent on this thread is that it\n>does not directly apply on the git repository. Folks on -hackers are\n>mainly used to diffs generated by git. So first I would recommend\n>that you set up a git repository of the tree, say with that:\n>git clone https://git.postgresql.org/git/postgresql.git\nI will make use.\n\n>Generating a patch can be done with git in a couple of ways from the\n>cloned repository, say:\n>1) git diff\n>2) git format-patch\n>Both can be applied with a simple \"patch -p1\" command or even the more\n>advanced \"git am\", still the latter is kind of picky.\nThanks for the hints.\n\n>The code of Postgres is complex, so usually there are reasons why\n>things are done the way they are, and it is important to not be afraid\n>to ask questions. Also, making the subject of the emails you send\n>explicative enough is important. Please note pgsql-hackers has a lot\n>of traffic, and this helps some people in filtering out threads they\n>are not interested in.\nWell, my experience with pgsql-hackers, haven't been good.\nAsk questions, about the code, have not had good acceptance..\n\nBest regards.\nRanier Vilela",
"msg_date": "Mon, 25 Nov 2019 01:18:28 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Style: fix function declaration"
}
] |
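The patch workflow described in the thread above can be sketched end to end. The repository, file name, and commit message below are made up for illustration; the git commands themselves (`format-patch`, `apply --check`) are the ones the advice refers to:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for a clone of git://git.postgresql.org/git/postgresql.git
git init -q repo && cd repo
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m init

# Make a change and commit it
echo hello > file.txt
git add file.txt
git -c user.email=dev@example.com -c user.name=dev commit -q -m "add file"

# Generate a patch for the last commit (one file per commit, mbox format)
git format-patch -1 HEAD -o ../patches

# Verify it applies cleanly against the base commit
git checkout -q HEAD~1
git apply --check ../patches/0001-add-file.patch && echo "patch applies"
```

`git format-patch` output can also be applied with `git am`, which replays the commit including author and message, though, as noted above, `git am` is pickier about whitespace and context.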
[
{
"msg_contents": "In logical decoding, while sending the changes to the output plugin we\nneed to arrange them in the LSN order. But, if there is only one\ntransaction which is a very common case then we can avoid building the\nbinary heap. A small patch is attached for the same.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 25 Nov 2019 09:22:49 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fastpath while arranging the changes in LSN order in logical decoding"
},
{
"msg_contents": "On Mon, Nov 25, 2019 at 9:22 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> In logical decoding, while sending the changes to the output plugin we\n> need to arrange them in the LSN order. But, if there is only one\n> transaction which is a very common case then we can avoid building the\n> binary heap. A small patch is attached for the same.\n\nI have registered it in the next commitfest.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Dec 2019 08:33:13 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On 25/11/2019 05:52, Dilip Kumar wrote:\n> In logical decoding, while sending the changes to the output plugin we\n> need to arrange them in the LSN order. But, if there is only one\n> transaction which is a very common case then we can avoid building the\n> binary heap. A small patch is attached for the same.\n\nDoes this make any measurable performance difference? Building a \none-element binary heap seems pretty cheap.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 8 Jan 2020 13:58:23 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On Wed, 8 Jan 2020 at 5:28 PM, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 25/11/2019 05:52, Dilip Kumar wrote:\n> > In logical decoding, while sending the changes to the output plugin we\n> > need to arrange them in the LSN order. But, if there is only one\n> > transaction which is a very common case then we can avoid building the\n> > binary heap. A small patch is attached for the same.\n>\n> Does this make any measurable performance difference? Building a\n> one-element binary heap seems pretty cheap.\n\n\nI haven’t really measured the performance for this. I will try to do that\nnext week. Thanks for looking into this.\n\n>\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 8 Jan 2020 18:06:52 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "1. Tried to apply the patch to PG 12.2 commit 45b88269a353ad93744772791feb6d01bc7e1e42 (HEAD -> REL_12_2, tag: REL_12_2), it doesn't work. Then tried to check the patch, and found the errors showing below.\r\n$ git apply --check 0001-Fastpath-for-sending-changes-to-output-plugin-in-log.patch\r\nerror: patch failed: contrib/test_decoding/logical.conf:1\r\nerror: contrib/test_decoding/logical.conf: patch does not apply\r\nerror: patch failed: src/backend/replication/logical/reorderbuffer.c:1133\r\nerror: src/backend/replication/logical/reorderbuffer.c: patch does not apply\r\n\r\n2. Ran a further check for file \"logical.conf\", and found there is only one commit since 2014, which doesn't have the parameter, \"logical_decoding_work_mem = 64kB\"\r\n\r\n3. Manually apply the patch including src/backend/replication/logical/reorderbuffer.c, and then ran a simple logical replication test. A connection issue is found like below,\r\n\"table public.pgbench_accounts: INSERT: aid[integer]:4071 bid[integer]:1 abalance[integer]:0 filler[character]:' '\r\npg_recvlogical: error: could not receive data from WAL stream: server closed the connection unexpectedly\r\n\tThis probably means the server terminated abnormally\r\n\tbefore or while processing the request.\r\npg_recvlogical: disconnected; waiting 5 seconds to try again\"\r\n\r\n4. This connection issue can be reproduced on PG 12.2 commit mentioned above, the basic steps,\r\n4.1 Change \"wal_level = logical\" in \"postgresql.conf\"\r\n4.2 create a logical slot and listen on it,\r\n$ pg_recvlogical -d postgres --slot test --create-slot\r\n$ pg_recvlogical -d postgres --slot test --start -f -\r\n\r\n4.3 from another terminal, run the command below,\r\n$ pgbench -i -p 5432 -d postgres\r\n\r\nLet me know if I did something wrong, and if a new patch is available, I can re-run the test on the same environment.\r\n\r\n-- \r\nDavid\r\nSoftware Engineer\r\nHighgo Software Inc. (Canada)\r\nwww.highgo.ca",
"msg_date": "Wed, 19 Feb 2020 00:16:09 +0000",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "After manually applied the patch, a diff regenerated is attached.\n\nOn 2020-02-18 4:16 p.m., David Zhang wrote:\n> 1. Tried to apply the patch to PG 12.2 commit 45b88269a353ad93744772791feb6d01bc7e1e42 (HEAD -> REL_12_2, tag: REL_12_2), it doesn't work. Then tried to check the patch, and found the errors showing below.\n> $ git apply --check 0001-Fastpath-for-sending-changes-to-output-plugin-in-log.patch\n> error: patch failed: contrib/test_decoding/logical.conf:1\n> error: contrib/test_decoding/logical.conf: patch does not apply\n> error: patch failed: src/backend/replication/logical/reorderbuffer.c:1133\n> error: src/backend/replication/logical/reorderbuffer.c: patch does not apply\n>\n> 2. Ran a further check for file \"logical.conf\", and found there is only one commit since 2014, which doesn't have the parameter, \"logical_decoding_work_mem = 64kB\"\n>\n> 3. Manually apply the patch including src/backend/replication/logical/reorderbuffer.c, and then ran a simple logical replication test. A connection issue is found like below,\n> \"table public.pgbench_accounts: INSERT: aid[integer]:4071 bid[integer]:1 abalance[integer]:0 filler[character]:' '\n> pg_recvlogical: error: could not receive data from WAL stream: server closed the connection unexpectedly\n> \tThis probably means the server terminated abnormally\n> \tbefore or while processing the request.\n> pg_recvlogical: disconnected; waiting 5 seconds to try again\"\n>\n> 4. This connection issue can be reproduced on PG 12.2 commit mentioned above, the basic steps,\n> 4.1 Change \"wal_level = logical\" in \"postgresql.conf\"\n> 4.2 create a logical slot and listen on it,\n> $ pg_recvlogical -d postgres --slot test --create-slot\n> $ pg_recvlogical -d postgres --slot test --start -f -\n>\n> 4.3 from another terminal, run the command below,\n> $ pgbench -i -p 5432 -d postgres\n>\n> Let me know if I did something wrong, and if a new patch is available, I can re-run the test on the same environment.\n>\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca",
"msg_date": "Tue, 18 Feb 2020 16:30:35 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "Hi Dilip,\n\nOn 2/18/20 7:30 PM, David Zhang wrote:\n> After manually applied the patch, a diff regenerated is attached.\n\nDavid's updated patch applies but all logical decoding regression tests \nare failing on cfbot.\n\nDo you know when you will be able to supply an updated patch?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 2 Mar 2020 08:57:23 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On Mon, Mar 2, 2020 at 7:27 PM David Steele <david@pgmasters.net> wrote:\n>\n> Hi Dilip,\n>\n> On 2/18/20 7:30 PM, David Zhang wrote:\n> > After manually applied the patch, a diff regenerated is attached.\n>\n> David's updated patch applies but all logical decoding regression tests\n> are failing on cfbot.\n>\n> Do you know when you will be able to supply an updated patch?\n\nI will try to send in a day or two.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Mar 2020 08:42:52 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On Tue, Mar 3, 2020 at 8:42 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Mar 2, 2020 at 7:27 PM David Steele <david@pgmasters.net> wrote:\n> >\n> > Hi Dilip,\n> >\n> > On 2/18/20 7:30 PM, David Zhang wrote:\n> > > After manually applied the patch, a diff regenerated is attached.\n> >\n> > David's updated patch applies but all logical decoding regression tests\n> > are failing on cfbot.\n> >\n> > Do you know when you will be able to supply an updated patch?\n>\n> I will try to send in a day or two.\n\nI have rebased the patch. check-world is passing.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 3 Mar 2020 10:41:11 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On Wed, Feb 19, 2020 at 6:00 AM David Zhang <david.zhang@highgo.ca> wrote:\n>\n> After manually applied the patch, a diff regenerated is attached.\n>\n> On 2020-02-18 4:16 p.m., David Zhang wrote:\n> > 1. Tried to apply the patch to PG 12.2 commit 45b88269a353ad93744772791feb6d01bc7e1e42 (HEAD -> REL_12_2, tag: REL_12_2), it doesn't work. Then tried to check the patch, and found the errors showing below.\n> > $ git apply --check 0001-Fastpath-for-sending-changes-to-output-plugin-in-log.patch\n> > error: patch failed: contrib/test_decoding/logical.conf:1\n> > error: contrib/test_decoding/logical.conf: patch does not apply\n> > error: patch failed: src/backend/replication/logical/reorderbuffer.c:1133\n> > error: src/backend/replication/logical/reorderbuffer.c: patch does not apply\n> >\n> > 2. Ran a further check for file \"logical.conf\", and found there is only one commit since 2014, which doesn't have the parameter, \"logical_decoding_work_mem = 64kB\"\n> >\n> > 3. Manually apply the patch including src/backend/replication/logical/reorderbuffer.c, and then ran a simple logical replication test. A connection issue is found like below,\n> > \"table public.pgbench_accounts: INSERT: aid[integer]:4071 bid[integer]:1 abalance[integer]:0 filler[character]:' '\n> > pg_recvlogical: error: could not receive data from WAL stream: server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> > pg_recvlogical: disconnected; waiting 5 seconds to try again\"\n> >\n> > 4. This connection issue can be reproduced on PG 12.2 commit mentioned above, the basic steps,\n> > 4.1 Change \"wal_level = logical\" in \"postgresql.conf\"\n> > 4.2 create a logical slot and listen on it,\n> > $ pg_recvlogical -d postgres --slot test --create-slot\n> > $ pg_recvlogical -d postgres --slot test --start -f -\n> >\n> > 4.3 from another terminal, run the command below,\n> > $ pgbench -i -p 5432 -d postgres\n> >\n> > Let me know if I did something wrong, and if a new patch is available, I can re-run the test on the same environment.\n\nThanks for testing and rebasing. I think one of the hunks is missing\nin your rebased version. That could be the reason for failure. Can\nyou test on my latest version?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Mar 2020 10:41:27 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "Hi Dilip,\n\nI repeated the same test cases again and can't reproduce the \ndisconnection issue after applied your new patch.\n\nBest regards,\n\nDavid\n\nOn 2020-03-02 9:11 p.m., Dilip Kumar wrote:\n> On Wed, Feb 19, 2020 at 6:00 AM David Zhang <david.zhang@highgo.ca> wrote:\n>> After manually applied the patch, a diff regenerated is attached.\n>>\n>> On 2020-02-18 4:16 p.m., David Zhang wrote:\n>>> 1. Tried to apply the patch to PG 12.2 commit 45b88269a353ad93744772791feb6d01bc7e1e42 (HEAD -> REL_12_2, tag: REL_12_2), it doesn't work. Then tried to check the patch, and found the errors showing below.\n>>> $ git apply --check 0001-Fastpath-for-sending-changes-to-output-plugin-in-log.patch\n>>> error: patch failed: contrib/test_decoding/logical.conf:1\n>>> error: contrib/test_decoding/logical.conf: patch does not apply\n>>> error: patch failed: src/backend/replication/logical/reorderbuffer.c:1133\n>>> error: src/backend/replication/logical/reorderbuffer.c: patch does not apply\n>>>\n>>> 2. Ran a further check for file \"logical.conf\", and found there is only one commit since 2014, which doesn't have the parameter, \"logical_decoding_work_mem = 64kB\"\n>>>\n>>> 3. Manually apply the patch including src/backend/replication/logical/reorderbuffer.c, and then ran a simple logical replication test. A connection issue is found like below,\n>>> \"table public.pgbench_accounts: INSERT: aid[integer]:4071 bid[integer]:1 abalance[integer]:0 filler[character]:' '\n>>> pg_recvlogical: error: could not receive data from WAL stream: server closed the connection unexpectedly\n>>> This probably means the server terminated abnormally\n>>> before or while processing the request.\n>>> pg_recvlogical: disconnected; waiting 5 seconds to try again\"\n>>>\n>>> 4. This connection issue can be reproduced on PG 12.2 commit mentioned above, the basic steps,\n>>> 4.1 Change \"wal_level = logical\" in \"postgresql.conf\"\n>>> 4.2 create a logical slot and listen on it,\n>>> $ pg_recvlogical -d postgres --slot test --create-slot\n>>> $ pg_recvlogical -d postgres --slot test --start -f -\n>>>\n>>> 4.3 from another terminal, run the command below,\n>>> $ pgbench -i -p 5432 -d postgres\n>>>\n>>> Let me know if I did something wrong, and if a new patch is available, I can re-run the test on the same environment.\n> Thanks for testing and rebasing. I think one of the hunks is missing\n> in your rebased version. That could be the reason for failure. Can\n> you test on my latest version?\n>\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca\n\n\n",
"msg_date": "Tue, 3 Mar 2020 13:32:05 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On Wed, Mar 4, 2020 at 3:02 AM David Zhang <david.zhang@highgo.ca> wrote:\n>\n> Hi Dilip,\n>\n> I repeated the same test cases again and can't reproduce the\n> disconnection issue after applied your new patch.\n\nThanks for the confirmation.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Mar 2020 08:33:59 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-08 18:06:52 +0530, Dilip Kumar wrote:\n> On Wed, 8 Jan 2020 at 5:28 PM, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> \n> > On 25/11/2019 05:52, Dilip Kumar wrote:\n> > > In logical decoding, while sending the changes to the output plugin we\n> > > need to arrange them in the LSN order. But, if there is only one\n> > > transaction which is a very common case then we can avoid building the\n> > > binary heap. A small patch is attached for the same.\n> >\n> > Does this make any measurable performance difference? Building a\n> > one-element binary heap seems pretty cheap.\n> \n> \n> I haven’t really measured the performance for this. I will try to do that\n> next week. Thanks for looking into this.\n\nDid you do that?\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Fri, 6 Mar 2020 10:13:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On Sat, Mar 7, 2020 at 12:30 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-01-08 18:06:52 +0530, Dilip Kumar wrote:\n> > On Wed, 8 Jan 2020 at 5:28 PM, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >\n> > > On 25/11/2019 05:52, Dilip Kumar wrote:\n> > > > In logical decoding, while sending the changes to the output plugin we\n> > > > need to arrange them in the LSN order. But, if there is only one\n> > > > transaction which is a very common case then we can avoid building the\n> > > > binary heap. A small patch is attached for the same.\n> > >\n> > > Does this make any measurable performance difference? Building a\n> > > one-element binary heap seems pretty cheap.\n> >\n> >\n> > I haven’t really measured the performance for this. I will try to do that\n> > next week. Thanks for looking into this.\n>\n> Did you do that?\n\nI tried once in my local machine but could not produce consistent\nresults. I will try this once again in the performance machine and\nreport back.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 7 Mar 2020 09:59:38 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On Sat, Mar 7, 2020 at 9:59 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sat, Mar 7, 2020 at 12:30 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2020-01-08 18:06:52 +0530, Dilip Kumar wrote:\n> > > On Wed, 8 Jan 2020 at 5:28 PM, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > >\n> > > > On 25/11/2019 05:52, Dilip Kumar wrote:\n> > > > > In logical decoding, while sending the changes to the output plugin we\n> > > > > need to arrange them in the LSN order. But, if there is only one\n> > > > > transaction which is a very common case then we can avoid building the\n> > > > > binary heap. A small patch is attached for the same.\n> > > >\n> > > > Does this make any measurable performance difference? Building a\n> > > > one-element binary heap seems pretty cheap.\n> > >\n> > >\n> > > I haven’t really measured the performance for this. I will try to do that\n> > > next week. Thanks for looking into this.\n> >\n> > Did you do that?\n>\n> I tried once in my local machine but could not produce consistent\n> results. I will try this once again in the performance machine and\n> report back.\n\nI have tried to decode changes for the 100,000 small transactions and\nmeasured the time with head vs patch. I did not observe any\nsignificant gain.\n\nHead\n-------\n519ms\n500ms\n487ms\n501ms\n\npatch\n------\n501ms\n492ms\n486ms\n489ms\n\nIMHO, if we conclude that because there is no performance gain so we\ndon't want to add one extra path in the code then we might want to\nremove that TODO from the code so that we don't spend time for\noptimizing this in the future.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 7 Mar 2020 11:15:27 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On Saturday, March 7, 2020, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Sat, Mar 7, 2020 at 9:59 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Sat, Mar 7, 2020 at 12:30 AM Andres Freund <andres@anarazel.de>\n> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2020-01-08 18:06:52 +0530, Dilip Kumar wrote:\n> > > > On Wed, 8 Jan 2020 at 5:28 PM, Heikki Linnakangas <hlinnaka@iki.fi>\n> wrote:\n> > > >\n> > > > > On 25/11/2019 05:52, Dilip Kumar wrote:\n> > > > > > In logical decoding, while sending the changes to the output\n> plugin we\n> > > > > > need to arrange them in the LSN order. But, if there is only one\n> > > > > > transaction which is a very common case then we can avoid\n> building the\n> > > > > > binary heap. A small patch is attached for the same.\n> > > > >\n> > > > > Does this make any measurable performance difference? Building a\n> > > > > one-element binary heap seems pretty cheap.\n> > > >\n> > > >\n> > > > I haven’t really measured the performance for this. I will try to\n> do that\n> > > > next week. Thanks for looking into this.\n> > >\n> > > Did you do that?\n> >\n> > I tried once in my local machine but could not produce consistent\n> > results. I will try this once again in the performance machine and\n> > report back.\n>\n> I have tried to decode changes for the 100,000 small transactions and\n> measured the time with head vs patch. I did not observe any\n> significant gain.\n>\n> Head\n> -------\n> 519ms\n> 500ms\n> 487ms\n> 501ms\n>\n> patch\n> ------\n> 501ms\n> 492ms\n> 486ms\n> 489ms\n>\n> IMHO, if we conclude that because there is no performance gain so we\n> don't want to add one extra path in the code then we might want to\n> remove that TODO from the code so that we don't spend time for\n> optimizing this in the future.\n>\n\nWould you be able to share your test setup? It seems like it’d helpful to\nget a larger sample size to better determine if it’s measurable or not.\nVisually those 4 runs look to me like it’s possible, but objectively I’m\nnot sure we can yet conclude one way or the other.\n\nJames",
"msg_date": "Sun, 8 Mar 2020 11:54:56 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On Sun, Mar 8, 2020 at 9:24 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Saturday, March 7, 2020, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>\n>> On Sat, Mar 7, 2020 at 9:59 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> >\n>> > On Sat, Mar 7, 2020 at 12:30 AM Andres Freund <andres@anarazel.de> wrote:\n>> > >\n>> > > Hi,\n>> > >\n>> > > On 2020-01-08 18:06:52 +0530, Dilip Kumar wrote:\n>> > > > On Wed, 8 Jan 2020 at 5:28 PM, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> > > >\n>> > > > > On 25/11/2019 05:52, Dilip Kumar wrote:\n>> > > > > > In logical decoding, while sending the changes to the output plugin we\n>> > > > > > need to arrange them in the LSN order. But, if there is only one\n>> > > > > > transaction which is a very common case then we can avoid building the\n>> > > > > > binary heap. A small patch is attached for the same.\n>> > > > >\n>> > > > > Does this make any measurable performance difference? Building a\n>> > > > > one-element binary heap seems pretty cheap.\n>> > > >\n>> > > >\n>> > > > I haven’t really measured the performance for this. I will try to do that\n>> > > > next week. Thanks for looking into this.\n>> > >\n>> > > Did you do that?\n>> >\n>> > I tried once in my local machine but could not produce consistent\n>> > results. I will try this once again in the performance machine and\n>> > report back.\n>>\n>> I have tried to decode changes for the 100,000 small transactions and\n>> measured the time with head vs patch. I did not observe any\n>> significant gain.\n>>\n>> Head\n>> -------\n>> 519ms\n>> 500ms\n>> 487ms\n>> 501ms\n>>\n>> patch\n>> ------\n>> 501ms\n>> 492ms\n>> 486ms\n>> 489ms\n>>\n>> IMHO, if we conclude that because there is no performance gain so we\n>> don't want to add one extra path in the code then we might want to\n>> remove that TODO from the code so that we don't spend time for\n>> optimizing this in the future.\n>\n>\n> Would you be able to share your test setup? 
It seems like it’d helpful to get a larger sample size to better determine if it’s measurable or not. Visually those 4 runs look to me like it’s possible, but objectively I’m not sure we can yet conclude one way or the other.\n\nYeah, my test is very simple\n\nCREATE TABLE t1 (a int, b int);\nSELECT * FROM pg_create_logical_replication_slot('regression_slot',\n'test_decoding');\n\n--run 100,000 small transactions with pgbench\n./pgbench -f test.sql -c 1 -j 1 -t 100000 -P 1 postgres;\n\n--measure time to decode the changes\ntime ./psql -d postgres -c \"select count(*) from\npg_logical_slot_get_changes('regression_slot', NULL,NULL);\"\n\n*test.sql is just one insert query like below\ninsert into t1 values(1,1);\n\nI guess this should be the best case to test this patch because we are\ndecoding a lot of small transactions but it seems the time taken for\ncreating the binary heap is quite small.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Mar 2020 09:20:44 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On 2020-03-07 11:15:27 +0530, Dilip Kumar wrote:\n> IMHO, if we conclude that because there is no performance gain so we\n> don't want to add one extra path in the code then we might want to\n> remove that TODO from the code so that we don't spend time for\n> optimizing this in the future.\n\n+1\n\n\n",
"msg_date": "Mon, 9 Mar 2020 10:37:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On Mon, Mar 9, 2020 at 11:07 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2020-03-07 11:15:27 +0530, Dilip Kumar wrote:\n> > IMHO, if we conclude that because there is no performance gain so we\n> > don't want to add one extra path in the code then we might want to\n> > remove that TODO from the code so that we don't spend time for\n> > optimizing this in the future.\n>\n> +1\n>\n\nDilip, are you planning to do more tests for this? Anyone else wants\nto do more tests? If not, based on current results, we can remove that\nTODO and in future, if someone comes with a test case to show benefit\nfor adding fastpath, then we can consider the patch proposed by Dilip.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Mar 2020 18:16:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 6:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Mar 9, 2020 at 11:07 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2020-03-07 11:15:27 +0530, Dilip Kumar wrote:\n> > > IMHO, if we conclude that because there is no performance gain so we\n> > > don't want to add one extra path in the code then we might want to\n> > > remove that TODO from the code so that we don't spend time for\n> > > optimizing this in the future.\n> >\n> > +1\n> >\n>\n> Dilip, are you planning to do more tests for this? Anyone else wants\n> to do more tests? If not, based on current results, we can remove that\n> TODO and in future, if someone comes with a test case to show benefit\n> for adding fastpath, then we can consider the patch proposed by Dilip.\n\nIMHO, I have tried the best case but did not see any performance gain\nso I am not planning to test this further. I have attached the patch\nfor removing the TODO.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 24 Mar 2020 18:36:03 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On 2020-03-24 18:36:03 +0530, Dilip Kumar wrote:\n> IMHO, I have tried the best case but did not see any performance gain\n> so I am not planning to test this further. I have attached the patch\n> for removing the TODO.\n\nPushed. Thanks!\n\n\n",
"msg_date": "Tue, 24 Mar 2020 12:16:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 12:46 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2020-03-24 18:36:03 +0530, Dilip Kumar wrote:\n> > IMHO, I have tried the best case but did not see any performance gain\n> > so I am not planning to test this further. I have attached the patch\n> > for removing the TODO.\n>\n> Pushed. Thanks!\n>\n\nI have updated the CF entry. Thanks to all involved in this.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Mar 2020 09:23:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 9:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 25, 2020 at 12:46 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2020-03-24 18:36:03 +0530, Dilip Kumar wrote:\n> > > IMHO, I have tried the best case but did not see any performance gain\n> > > so I am not planning to test this further. I have attached the patch\n> > > for removing the TODO.\n> >\n> > Pushed. Thanks!\n> >\n>\n> I have updated the CF entry. Thanks to all involved in this.\n\nThanks!\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Mar 2020 09:39:17 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fastpath while arranging the changes in LSN order in logical\n decoding"
}
] |
[
{
"msg_contents": "Right now JIT provides about 30% improvement of TPC-H Q1 query:\n\nhttps://www.citusdata.com/blog/2018/09/11/postgresql-11-just-in-time/\n\nI wonder why even at this query, which seems to be ideal use case for \nJIT, we get such modest improvement?\nI have raised this question several years ago - but that time JIT was \nassumed to be in early development stage and performance aspects were \nless critical\nthan required infrastructure changes. But right now JIT seems to be \nstable enough and is switch on by default.\nVitesse DB reports 8x speedup on Q1,\nISP-RAS JIT version provides 3x speedup of Q1:\n\nhttps://www.pgcon.org/2017/schedule/attachments/467_PGCon%202017-05-26%2015-00%20ISPRAS%20Dynamic%20Compilation%20of%20SQL%20Queries%20in%20PostgreSQL%20Using%20LLVM%20JIT.pdf\n\nAccording to this presentation Q1 spends 6% of time in ExecQual and 75% \nin ExecAgg.\n\nVOPS provides 10x improvement of Q1.\n\nI have a hypothesis that such difference was caused by the way of \naggregates calculation.\nPostgres is using Youngs-Cramer algorithm while both ISPRAS JIT version \nand my VOPS are just accumulating results in variable of type double.\nI rewrite VOPS to use the same algorithm as Postgres, but VOPS is still \nabout 10 times faster.\n\nResults of Q1 on scale factor=10 TPC-H data at my desktop with parallel \nexecution enabled:\nno-JIT: 5640 msec\nJIT: 4590msec\nVOPS: 452 msec\nVOPS + Youngs-Cramer algorithm: 610 msec\n\nBelow are tops of profiles (functions with more than 1% of time):\n\nJIT:\n 10.98% postgres postgres [.] float4_accum\n 8.40% postgres postgres [.] float8_accum\n 7.51% postgres postgres [.] HeapTupleSatisfiesVisibility\n 5.92% postgres postgres [.] ExecInterpExpr\n 5.63% postgres postgres [.] tts_minimal_getsomeattrs\n 4.35% postgres postgres [.] lookup_hash_entries\n 3.72% postgres postgres [.] TupleHashTableHash.isra.8\n 2.93% postgres postgres [.] tuplehash_insert\n 2.70% postgres postgres [.] 
heapgettup_pagemode\n 2.24% postgres postgres [.] check_float8_array\n 2.23% postgres postgres [.] hash_search_with_hash_value\n 2.10% postgres postgres [.] ExecScan\n 1.90% postgres postgres [.] hash_uint32\n 1.57% postgres postgres [.] tts_minimal_clear\n 1.53% postgres postgres [.] FunctionCall1Coll\n 1.47% postgres postgres [.] pg_detoast_datum\n 1.39% postgres postgres [.] heapgetpage\n 1.37% postgres postgres [.] TupleHashTableMatch.isra.9\n 1.35% postgres postgres [.] ExecStoreBufferHeapTuple\n 1.06% postgres postgres [.] LookupTupleHashEntry\n 1.06% postgres postgres [.] AggCheckCallContext\n\nno-JIT:\n 26.82% postgres postgres [.] ExecInterpExpr\n 15.26% postgres postgres [.] tts_buffer_heap_getsomeattrs\n 8.27% postgres postgres [.] float4_accum\n 7.51% postgres postgres [.] float8_accum\n 5.26% postgres postgres [.] HeapTupleSatisfiesVisibility\n 2.78% postgres postgres [.] TupleHashTableHash.isra.8\n 2.63% postgres postgres [.] tts_minimal_getsomeattrs\n 2.54% postgres postgres [.] lookup_hash_entries\n 2.05% postgres postgres [.] tuplehash_insert\n 1.97% postgres postgres [.] heapgettup_pagemode\n 1.72% postgres postgres [.] hash_search_with_hash_value\n 1.57% postgres postgres [.] float48mul\n 1.55% postgres postgres [.] check_float8_array\n 1.48% postgres postgres [.] ExecScan\n 1.26% postgres postgres [.] hash_uint32\n 1.04% postgres postgres [.] tts_minimal_clear\n 1.00% postgres postgres [.] FunctionCall1Coll\n\nVOPS:\n 44.25% postgres vops.so [.] vops_avg_state_accumulate\n 11.76% postgres vops.so [.] vops_float4_avg_accumulate\n 6.14% postgres postgres [.] ExecInterpExpr\n 5.89% postgres vops.so [.] vops_float4_sub_lconst\n 4.89% postgres vops.so [.] vops_float4_mul\n 4.30% postgres vops.so [.] vops_int4_le_rconst\n 2.57% postgres vops.so [.] vops_float4_add_lconst\n 2.31% postgres vops.so [.] vops_count_accumulate\n 2.24% postgres postgres [.] tts_buffer_heap_getsomeattrs\n 1.97% postgres postgres [.] 
heap_page_prune_opt\n 1.72% postgres postgres [.] HeapTupleSatisfiesVisibility\n 1.67% postgres postgres [.] AllocSetAlloc\n 1.47% postgres postgres [.] hash_search_with_hash_value\n\n\nIn theory, by eliminating interpretation overhead, JIT should provide \nperformance comparable with a vectorized executor.\nIn most programming languages, using a JIT compiler instead of a byte-code \ninterpreter provides about a 10x speed improvement.\nCertainly a DBMS engine is very different from a traditional interpreter, and \na lot of time is spent in tuple packing/unpacking (although JIT is also \nused here),\nin heap traversal, ... But it is still unclear to me why, if the ISPRAS \nmeasurements were correct and we actually spend 75% of Q1 time in \naggregation,\nJIT was not able to significantly (several times) increase speed on the Q1 query. \nThe experiment with VOPS shows that the aggregation algorithm used is not \nitself a bottleneck.\nThe profile also gives no answer to this question.\nAny ideas?\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 25 Nov 2019 18:09:29 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Why JIT speed improvement is so modest?"
},
{
"msg_contents": "On Mon, Nov 25, 2019 at 9:09 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> JIT was not able to significantly (times) increase speed on Q1 query?\n> Experiment with VOPS shows that used aggregation algorithm itself is not\n> a bottleneck.\n> Profile also give no answer for this question.\n> Any ideas?\n\nWell, in the VOPS variant around 2/3 of the time is spent in routines\nthat are obviously aggregation. In the JIT version, it's around 20%.\nSo this suggests that the replacement execution engine is more\ninvasive. I would also guess (!) that the VOPS engine optimizes fewer\nclasses of query plan. ExecScan for example, looks to be completely\noptimized out VOPS but is still utilized in the JIT engine.\n\nI experimented with Vitessa a couple of years back and this was\nconsistent with my recollection.\n\nmerlin\n\n\n",
"msg_date": "Mon, 25 Nov 2019 09:24:29 -0600",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why JIT speed improvement is so modest?"
},
{
"msg_contents": "\n\nOn 25.11.2019 18:24, Merlin Moncure wrote:\n> On Mon, Nov 25, 2019 at 9:09 AM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> JIT was not able to significantly (times) increase speed on Q1 query?\n>> Experiment with VOPS shows that used aggregation algorithm itself is not\n>> a bottleneck.\n>> Profile also give no answer for this question.\n>> Any ideas?\n> Well, in the VOPS variant around 2/3 of the time is spent in routines\n> that are obviously aggregation. In the JIT version, it's around 20%.\n> So this suggests that the replacement execution engine is more\n> invasive. I would also guess (!) that the VOPS engine optimizes fewer\n> classes of query plan. ExecScan for example, looks to be completely\n> optimized out VOPS but is still utilized in the JIT engine.\n\nThe difference in fraction of time spent in aggregate calculation is not \nso large (2 times vs. 10 times).\nI suspected that a lot of time is spent in relation traversal code, \ntuple unpacking and visibility checks.\nTo check this hypothesis I have implement in-memory table access method \nwhich stores tuples in unpacked form and\ndoesn't perform any visibility checks at all.\nResults were not so existed. I have to disable parallel execution \n(because it is not possible for tuples stored in backend private memory).\nResults are the following:\n\nlineitem: 13736 msec\ninmem_lineitem: 10044 msec\nvops_lineitem: 1945 msec\n\nThe profile of inmem_lineitem is the following:\n\n 16.79% postgres postgres [.] float4_accum\n 12.86% postgres postgres [.] float8_accum\n 5.83% postgres postgres [.] TupleHashTableHash.isra.8\n 4.44% postgres postgres [.] lookup_hash_entries\n 3.37% postgres postgres [.] check_float8_array\n 3.11% postgres postgres [.] tuplehash_insert\n 2.91% postgres postgres [.] hash_uint32\n 2.83% postgres postgres [.] ExecScan\n 2.56% postgres postgres [.] inmem_getnextslot\n 2.22% postgres postgres [.] FunctionCall1Coll\n 2.14% postgres postgres [.] 
LookupTupleHashEntry\n 1.95% postgres postgres [.] TupleHashTableMatch.isra.9\n 1.76% postgres postgres [.] pg_detoast_datum\n 1.58% postgres postgres [.] AggCheckCallContext\n 1.57% postgres postgres [.] tts_minimal_clear\n 1.35% postgres perf-3054.map [.] 0x00007f558db60010\n 1.23% postgres postgres [.] fetch_input_tuple\n 1.15% postgres postgres [.] SeqNext\n 1.06% postgres postgres [.] ExecAgg\n 1.00% postgres postgres [.] tts_minimal_store_tuple\n\nSo now the fraction of time spent in aggregation has increased to 30% (vs. \n20% for lineitem and 42% for vops_lineitem).\nLooks like the main bottleneck now is hashagg. VOPS accesses the hash \nabout 10 times less often (because it accumulates values for the whole tile).\nAnd it explains the still large difference between vops_lineitem and \ninmem_lineitem.\n\nIf we remove aggregation and rewrite the Q1 query as:\nselect\n avg(l_quantity) as sum_qty,\n avg(l_extendedprice) as sum_base_price,\n avg(l_extendedprice*(1-l_discount)) as sum_disc_price,\n avg(l_extendedprice*(1-l_discount)*(1+l_tax)) as sum_charge,\n avg(l_quantity) as avg_qty,\n avg(l_extendedprice) as avg_price,\n avg(l_discount) as avg_disc,\n count(*) as count_order\nfrom\n inmem_lineitem\nwhere\n l_shipdate <= '1998-12-01';\n\nthen the results are the following:\nlineitem: 9805 msec\ninmem_lineitem: 6257 msec\nvops_lineitem: 1865 msec\n\nand now the profile of inmem_lineitem is:\n\n 25.27% postgres postgres [.] float4_accum\n 21.86% postgres postgres [.] float8_accum\n 5.49% postgres postgres [.] check_float8_array\n 4.57% postgres postgres [.] ExecScan\n 2.61% postgres postgres [.] AggCheckCallContext\n 2.30% postgres postgres [.] pg_detoast_datum\n 2.10% postgres postgres [.] inmem_getnextslot\n 1.81% postgres postgres [.] SeqNext\n 1.73% postgres postgres [.] fetch_input_tuple\n 1.61% postgres postgres [.] ExecAgg\n 1.23% postgres postgres [.] 
MemoryContextReset\n\nBut still more than 3 times difference with VOPS!\nSomething is wrong here...\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 27 Nov 2019 18:38:45 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Why JIT speed improvement is so modest?"
},
{
"msg_contents": "On Wed, Nov 27, 2019 at 06:38:45PM +0300, Konstantin Knizhnik wrote:\n>\n>\n>On 25.11.2019 18:24, Merlin Moncure wrote:\n>>On Mon, Nov 25, 2019 at 9:09 AM Konstantin Knizhnik\n>><k.knizhnik@postgrespro.ru> wrote:\n>>>JIT was not able to significantly (times) increase speed on Q1 query?\n>>>Experiment with VOPS shows that used aggregation algorithm itself is not\n>>>a bottleneck.\n>>>Profile also give no answer for this question.\n>>>Any ideas?\n>>Well, in the VOPS variant around 2/3 of the time is spent in routines\n>>that are obviously aggregation. In the JIT version, it's around 20%.\n>>So this suggests that the replacement execution engine is more\n>>invasive. I would also guess (!) that the VOPS engine optimizes fewer\n>>classes of query plan. ExecScan for example, looks to be completely\n>>optimized out VOPS but is still utilized in the JIT engine.\n>\n>The difference in fraction of time spent in aggregate calculation is \n>not so large (2 times vs. 10 times).\n>I suspected that a lot of time is spent in relation traversal code, \n>tuple unpacking and visibility checks.\n>To check this hypothesis I have implement in-memory table access \n>method which stores tuples in unpacked form and\n>doesn't perform any visibility checks at all.\n>Results were not so existed. I have to disable parallel execution \n>(because it is not possible for tuples stored in backend private \n>memory).\n>Results are the following:\n>\n>lineitem:�������������� 13736 msec\n>inmem_lineitem:� 10044 msec\n>vops_lineitem:������� 1945 msec\n>\n>The profile of inmem_lineitem is the following:\n>\n>� 16.79%� postgres� postgres������������ [.] float4_accum\n>� 12.86%� postgres� postgres������������ [.] float8_accum\n>�� 5.83%� postgres� postgres������������ [.] TupleHashTableHash.isra.8\n>�� 4.44%� postgres� postgres������������ [.] lookup_hash_entries\n>�� 3.37%� postgres� postgres������������ [.] check_float8_array\n>�� 3.11%� postgres� postgres������������ [.] 
tuplehash_insert\n>�� 2.91%� postgres� postgres������������ [.] hash_uint32\n>�� 2.83%� postgres� postgres������������ [.] ExecScan\n>�� 2.56%� postgres� postgres������������ [.] inmem_getnextslot\n>�� 2.22%� postgres� postgres������������ [.] FunctionCall1Coll\n>�� 2.14%� postgres� postgres������������ [.] LookupTupleHashEntry\n>�� 1.95%� postgres� postgres������������ [.] TupleHashTableMatch.isra.9\n>�� 1.76%� postgres� postgres������������ [.] pg_detoast_datum\n>�� 1.58%� postgres� postgres������������ [.] AggCheckCallContext\n>�� 1.57%� postgres� postgres������������ [.] tts_minimal_clear\n>�� 1.35%� postgres� perf-3054.map������� [.] 0x00007f558db60010\n>�� 1.23%� postgres� postgres������������ [.] fetch_input_tuple\n>�� 1.15%� postgres� postgres������������ [.] SeqNext\n>�� 1.06%� postgres� postgres������������ [.] ExecAgg\n>�� 1.00%� postgres� postgres������������ [.] tts_minimal_store_tuple\n>\n>So now fraction of time spent in aggregation is increased to 30% (vs. \n>20% for lineitem and 42% for vops_lineitem).\n>Looks like the main bottleneck now is hashagg. VOPS is accessing hash \n>about 10 times less (because it accumulates values for the whole \n>tile).\n>And it explains still large difference bwtween vops_lineitem and \n>inmem_lineitem.\n>\n>If we remove aggregation and rewrite Q1 query as:\n>select\n>��� avg(l_quantity) as sum_qty,\n>��� avg(l_extendedprice) as sum_base_price,\n>��� avg(l_extendedprice*(1-l_discount)) as sum_disc_price,\n>��� avg(l_extendedprice*(1-l_discount)*(1+l_tax)) as sum_charge,\n>��� avg(l_quantity) as avg_qty,\n>��� avg(l_extendedprice) as avg_price,\n>��� avg(l_discount) as avg_disc,\n>��� count(*) as count_order\n>from\n>��� inmem_lineitem\n>where\n>��� l_shipdate <= '1998-12-01';\n>\n>then results are the following:\n>lineitem:�������������� 9805 msec\n>inmem_lineitem:� 6257 msec\n>vops_lineitem:����� 1865 msec\n>\n>and now profile of inmem_lineitem is:\n>\n>� 25.27%� postgres� postgres���������� [.] 
float4_accum\n>� 21.86%� postgres� postgres���������� [.] float8_accum\n>�� 5.49%� postgres� postgres���������� [.] check_float8_array\n>�� 4.57%� postgres� postgres���������� [.] ExecScan\n>�� 2.61%� postgres� postgres���������� [.] AggCheckCallContext\n>�� 2.30%� postgres� postgres���������� [.] pg_detoast_datum\n>�� 2.10%� postgres� postgres���������� [.] inmem_getnextslot\n>�� 1.81%� postgres� postgres���������� [.] SeqNext\n>�� 1.73%� postgres� postgres���������� [.] fetch_input_tuple\n>�� 1.61%� postgres� postgres���������� [.] ExecAgg\n>�� 1.23%� postgres� postgres���������� [.] MemoryContextReset\n>\n>But still more than 3 times difference with VOPS!\n>Something is wrong here...\n>\n\nI have no idea what VOPS does, but IIRC one of the bottlenecks compared\nto various column stores is our iterative execution model, which makes\nit difficult/imposible to vectorize operations. That's likely why the\naccum functions are so high in the CPU profile.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 Nov 2019 17:05:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why JIT speed improvement is so modest?"
},
{
"msg_contents": "\n\nOn 27.11.2019 19:05, Tomas Vondra wrote:\n> On Wed, Nov 27, 2019 at 06:38:45PM +0300, Konstantin Knizhnik wrote:\n>>\n>>\n>> On 25.11.2019 18:24, Merlin Moncure wrote:\n>>> On Mon, Nov 25, 2019 at 9:09 AM Konstantin Knizhnik\n>>> <k.knizhnik@postgrespro.ru> wrote:\n>>>> JIT was not able to significantly (times) increase speed on Q1 query?\n>>>> Experiment with VOPS shows that used aggregation algorithm itself \n>>>> is not\n>>>> a bottleneck.\n>>>> Profile also give no answer for this question.\n>>>> Any ideas?\n>>> Well, in the VOPS variant around 2/3 of the time is spent in routines\n>>> that are obviously aggregation. In the JIT version, it's around 20%.\n>>> So this suggests that the replacement execution engine is more\n>>> invasive. I would also guess (!) that the VOPS engine optimizes fewer\n>>> classes of query plan. ExecScan for example, looks to be completely\n>>> optimized out VOPS but is still utilized in the JIT engine.\n>>\n>> The difference in fraction of time spent in aggregate calculation is \n>> not so large (2 times vs. 10 times).\n>> I suspected that a lot of time is spent in relation traversal code, \n>> tuple unpacking and visibility checks.\n>> To check this hypothesis I have implement in-memory table access \n>> method which stores tuples in unpacked form and\n>> doesn't perform any visibility checks at all.\n>> Results were not so existed. I have to disable parallel execution \n>> (because it is not possible for tuples stored in backend private \n>> memory).\n>> Results are the following:\n>>\n>> lineitem: 13736 msec\n>> inmem_lineitem: 10044 msec\n>> vops_lineitem: 1945 msec\n>>\n>> The profile of inmem_lineitem is the following:\n>>\n>> 16.79% postgres postgres [.] float4_accum\n>> 12.86% postgres postgres [.] float8_accum\n>> 5.83% postgres postgres [.] TupleHashTableHash.isra.8\n>> 4.44% postgres postgres [.] lookup_hash_entries\n>> 3.37% postgres postgres [.] check_float8_array\n>> 3.11% postgres postgres [.] 
tuplehash_insert\n>> 2.91% postgres postgres [.] hash_uint32\n>> 2.83% postgres postgres [.] ExecScan\n>> 2.56% postgres postgres [.] inmem_getnextslot\n>> 2.22% postgres postgres [.] FunctionCall1Coll\n>> 2.14% postgres postgres [.] LookupTupleHashEntry\n>> 1.95% postgres postgres [.] TupleHashTableMatch.isra.9\n>> 1.76% postgres postgres [.] pg_detoast_datum\n>> 1.58% postgres postgres [.] AggCheckCallContext\n>> 1.57% postgres postgres [.] tts_minimal_clear\n>> 1.35% postgres perf-3054.map [.] 0x00007f558db60010\n>> 1.23% postgres postgres [.] fetch_input_tuple\n>> 1.15% postgres postgres [.] SeqNext\n>> 1.06% postgres postgres [.] ExecAgg\n>> 1.00% postgres postgres [.] tts_minimal_store_tuple\n>>\n>> So now fraction of time spent in aggregation is increased to 30% (vs. \n>> 20% for lineitem and 42% for vops_lineitem).\n>> Looks like the main bottleneck now is hashagg. VOPS is accessing hash \n>> about 10 times less (because it accumulates values for the whole tile).\n>> And it explains still large difference bwtween vops_lineitem and \n>> inmem_lineitem.\n>>\n>> If we remove aggregation and rewrite Q1 query as:\n>> select\n>> avg(l_quantity) as sum_qty,\n>> avg(l_extendedprice) as sum_base_price,\n>> avg(l_extendedprice*(1-l_discount)) as sum_disc_price,\n>> avg(l_extendedprice*(1-l_discount)*(1+l_tax)) as sum_charge,\n>> avg(l_quantity) as avg_qty,\n>> avg(l_extendedprice) as avg_price,\n>> avg(l_discount) as avg_disc,\n>> count(*) as count_order\n>> from\n>> inmem_lineitem\n>> where\n>> l_shipdate <= '1998-12-01';\n>>\n>> then results are the following:\n>> lineitem: 9805 msec\n>> inmem_lineitem: 6257 msec\n>> vops_lineitem: 1865 msec\n>>\n>> and now profile of inmem_lineitem is:\n>>\n>> 25.27% postgres postgres [.] float4_accum\n>> 21.86% postgres postgres [.] float8_accum\n>> 5.49% postgres postgres [.] check_float8_array\n>> 4.57% postgres postgres [.] ExecScan\n>> 2.61% postgres postgres [.] AggCheckCallContext\n>> 2.30% postgres postgres [.] 
pg_detoast_datum\n>> 2.10% postgres postgres [.] inmem_getnextslot\n>> 1.81% postgres postgres [.] SeqNext\n>> 1.73% postgres postgres [.] fetch_input_tuple\n>> 1.61% postgres postgres [.] ExecAgg\n>> 1.23% postgres postgres [.] MemoryContextReset\n>>\n>> But still more than 3 times difference with VOPS!\n>> Something is wrong here...\n>>\n>\n> I have no idea what VOPS does, but IIRC one of the bottlenecks compared\n> to various column stores is our iterative execution model, which makes\n> it difficult/imposible to vectorize operations. That's likely why the\n> accum functions are so high in the CPU profile.\n>\n> regards\n>\n\nVOPS is doing a very simple thing: it replaces scalar types with vector \ntypes (tiles) and defines all standard operations for them.\nIt also provides Postgres aggregates for these types.\nSo while for a normal Postgres table, the query\n\nselect sum(x) from T;\n\ncalls float4_accum for each row of T, the same query in VOPS will call \nvops_float4_avg_accumulate for each tile, which contains 64 elements.\nSo vops_float4_avg_accumulate is called 64 times less often than float4_accum. \nAnd inside, it contains a straightforward loop:\n\n for (i = 0; i < TILE_SIZE; i++) {\n sum += opd->payload[i];\n }\n\nwhich can be optimized by the compiler (loop unrolling, use of SIMD \ninstructions, ...).\nSo it is no wonder that VOPS is faster than the Postgres executor.\nBut Postgres now contains JIT and it is used in this case.\nSo the interpretation overhead of the executor should be mostly eliminated by JIT.\nIn theory, perfect JIT code should process rows of a horizontal data model \nat the same speed as a vector executor processing columns of a vertical data \nmodel.\nThe vertical model provides significant advantages when a query affects only a \nsmall fraction of rows.\nBut in the case of Q1 we are calculating 8 aggregates for just 4 columns. 
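To make the per-row vs. per-tile contrast concrete, here is a minimal standalone sketch (illustrative only; accum_row and accum_tile are hypothetical names, not the actual float4_accum or VOPS functions):

```c
#include <assert.h>

#define TILE_SIZE 64

/* Per-row accumulation: one function call per input value,
 * analogous to invoking a transition function for every row. */
void accum_row(double *sum, long *count, double x)
{
    *sum += x;
    *count += 1;
}

/* Per-tile accumulation: one call per TILE_SIZE values; the inner
 * loop is a plain reduction the compiler can unroll and vectorize. */
void accum_tile(double *sum, long *count, const double *tile)
{
    double s = 0.0;

    for (int i = 0; i < TILE_SIZE; i++)
        s += tile[i];
    *sum += s;
    *count += TILE_SIZE;
}
```

Both produce the same aggregate state, but the tile variant pays the call overhead 64 times less often and exposes a loop the compiler can vectorize.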
\nAnd inmem_lineitem is actually a projection of the original lineitem table \ncontaining only the columns needed for this query.\nSo the amount of fetched data in this case is almost the same for the \nhorizontal and vertical data models.\nEffects of CPU caches should also not play a significant role in this case.\nThat is why it is not quite clear to me why there is still a big \ndifference (3 times) between VOPS and the in-memory table and not so large a \ndifference between the normal and in-memory tables.\n\nConcerning the large percentage spent in the accumulate functions - I do not agree \nwith you. What this query is actually doing is just calculating aggregates.\nThe smaller the interpretation overhead, the larger the percentage of time we should \nspend in the aggregate functions.\nMaybe the whole infrastructure of Postgres aggregates adds too large an \noverhead (check_float8_array, function calls, ...) and in the case of VOPS \nthis overhead is divided by 64.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Thu, 28 Nov 2019 10:08:18 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Why JIT speed improvement is so modest?"
},
{
"msg_contents": "Hi,\nyou mean if we don't add new compiler options the compiler will do the loop\nunrolling using SIMD automatically?\nBeside the function calls, cache miss etc, for VOPS I think the call stack\nis squeezing too, but the JIT optimize still process rows one by one.\n\nKonstantin Knizhnik <k.knizhnik@postgrespro.ru> 于2019年11月28日周四 下午3:08写道:\n\n>\n>\n> On 27.11.2019 19:05, Tomas Vondra wrote:\n> > On Wed, Nov 27, 2019 at 06:38:45PM +0300, Konstantin Knizhnik wrote:\n> >>\n> >>\n> >> On 25.11.2019 18:24, Merlin Moncure wrote:\n> >>> On Mon, Nov 25, 2019 at 9:09 AM Konstantin Knizhnik\n> >>> <k.knizhnik@postgrespro.ru> wrote:\n> >>>> JIT was not able to significantly (times) increase speed on Q1 query?\n> >>>> Experiment with VOPS shows that used aggregation algorithm itself\n> >>>> is not\n> >>>> a bottleneck.\n> >>>> Profile also give no answer for this question.\n> >>>> Any ideas?\n> >>> Well, in the VOPS variant around 2/3 of the time is spent in routines\n> >>> that are obviously aggregation. In the JIT version, it's around 20%.\n> >>> So this suggests that the replacement execution engine is more\n> >>> invasive. I would also guess (!) that the VOPS engine optimizes fewer\n> >>> classes of query plan. ExecScan for example, looks to be completely\n> >>> optimized out VOPS but is still utilized in the JIT engine.\n> >>\n> >> The difference in fraction of time spent in aggregate calculation is\n> >> not so large (2 times vs. 10 times).\n> >> I suspected that a lot of time is spent in relation traversal code,\n> >> tuple unpacking and visibility checks.\n> >> To check this hypothesis I have implement in-memory table access\n> >> method which stores tuples in unpacked form and\n> >> doesn't perform any visibility checks at all.\n> >> Results were not so existed. 
I have to disable parallel execution\n> >> (because it is not possible for tuples stored in backend private\n> >> memory).\n> >> Results are the following:\n> >>\n> >> lineitem: 13736 msec\n> >> inmem_lineitem: 10044 msec\n> >> vops_lineitem: 1945 msec\n> >>\n> >> The profile of inmem_lineitem is the following:\n> >>\n> >> 16.79% postgres postgres [.] float4_accum\n> >> 12.86% postgres postgres [.] float8_accum\n> >> 5.83% postgres postgres [.] TupleHashTableHash.isra.8\n> >> 4.44% postgres postgres [.] lookup_hash_entries\n> >> 3.37% postgres postgres [.] check_float8_array\n> >> 3.11% postgres postgres [.] tuplehash_insert\n> >> 2.91% postgres postgres [.] hash_uint32\n> >> 2.83% postgres postgres [.] ExecScan\n> >> 2.56% postgres postgres [.] inmem_getnextslot\n> >> 2.22% postgres postgres [.] FunctionCall1Coll\n> >> 2.14% postgres postgres [.] LookupTupleHashEntry\n> >> 1.95% postgres postgres [.] TupleHashTableMatch.isra.9\n> >> 1.76% postgres postgres [.] pg_detoast_datum\n> >> 1.58% postgres postgres [.] AggCheckCallContext\n> >> 1.57% postgres postgres [.] tts_minimal_clear\n> >> 1.35% postgres perf-3054.map [.] 0x00007f558db60010\n> >> 1.23% postgres postgres [.] fetch_input_tuple\n> >> 1.15% postgres postgres [.] SeqNext\n> >> 1.06% postgres postgres [.] ExecAgg\n> >> 1.00% postgres postgres [.] tts_minimal_store_tuple\n> >>\n> >> So now fraction of time spent in aggregation is increased to 30% (vs.\n> >> 20% for lineitem and 42% for vops_lineitem).\n> >> Looks like the main bottleneck now is hashagg. 
VOPS is accessing hash\n> >> about 10 times less (because it accumulates values for the whole tile).\n> >> And it explains still large difference bwtween vops_lineitem and\n> >> inmem_lineitem.\n> >>\n> >> If we remove aggregation and rewrite Q1 query as:\n> >> select\n> >> avg(l_quantity) as sum_qty,\n> >> avg(l_extendedprice) as sum_base_price,\n> >> avg(l_extendedprice*(1-l_discount)) as sum_disc_price,\n> >> avg(l_extendedprice*(1-l_discount)*(1+l_tax)) as sum_charge,\n> >> avg(l_quantity) as avg_qty,\n> >> avg(l_extendedprice) as avg_price,\n> >> avg(l_discount) as avg_disc,\n> >> count(*) as count_order\n> >> from\n> >> inmem_lineitem\n> >> where\n> >> l_shipdate <= '1998-12-01';\n> >>\n> >> then results are the following:\n> >> lineitem: 9805 msec\n> >> inmem_lineitem: 6257 msec\n> >> vops_lineitem: 1865 msec\n> >>\n> >> and now profile of inmem_lineitem is:\n> >>\n> >> 25.27% postgres postgres [.] float4_accum\n> >> 21.86% postgres postgres [.] float8_accum\n> >> 5.49% postgres postgres [.] check_float8_array\n> >> 4.57% postgres postgres [.] ExecScan\n> >> 2.61% postgres postgres [.] AggCheckCallContext\n> >> 2.30% postgres postgres [.] pg_detoast_datum\n> >> 2.10% postgres postgres [.] inmem_getnextslot\n> >> 1.81% postgres postgres [.] SeqNext\n> >> 1.73% postgres postgres [.] fetch_input_tuple\n> >> 1.61% postgres postgres [.] ExecAgg\n> >> 1.23% postgres postgres [.] MemoryContextReset\n> >>\n> >> But still more than 3 times difference with VOPS!\n> >> Something is wrong here...\n> >>\n> >\n> > I have no idea what VOPS does, but IIRC one of the bottlenecks compared\n> > to various column stores is our iterative execution model, which makes\n> > it difficult/imposible to vectorize operations. 
That's likely why the\n> > accum functions are so high in the CPU profile.\n> >\n> > regards\n> >\n>\n> VOPS is doing very simple thing: it replaces scala types with vector\n> (tiles) and define all standard operations for them.\n> Also it provides Postgres aggregate for this types.\n> So while for normal Postgres table, the query\n>\n> select sum(x) from T;\n>\n> calls float4_accum for each row of T, the same query in VOPS will call\n> vops_float4_avg_accumulate for each tile which contains 64 elements.\n> So vops_float4_avg_accumulate is called 64 times less than float4_accum.\n> And inside it contains straightforward loop:\n>\n> for (i = 0; i < TILE_SIZE; i++) {\n> sum += opd->payload[i];\n> }\n>\n> which can be optimized by compiler (loop unrolling, use of SIMD\n> instructions,...).\n> So no wonder that VOPS is faster than Postgres executor.\n> But Postgres now contains JIT and it is used in this case.\n> So interpretation overhead of executor should be mostly eliminated by JIT.\n> In theory, perfect JIT code should process rows of horizontal data model\n> at the same speed as vector executor processing columns of vertical data\n> model.\n> Vertical model provides signficatn advantages when a query affect only\n> small fraction of rows.\n> But in case of Q1 we are calculating 8 aggregates for just 4 columns.\n> And inmem_lineitem is actually projection of original lineitem table\n> containing only columns needed for this query.\n> So amount of fetched data in this case is almost the same for horizontal\n> and vertical data models.\n> Effects of CPU caches should not also play significant role in this case.\n> That is why it is not quite clear to me why there is still big\n> difference (3 times) between VOPS and in-memory table and not so large\n> difference between normal and in-memory tables.\n>\n> Concerning large percent spent in accumulate function - I do not agree\n> with you. 
What this query is actually doing is just calculating aggregates.\n> The less is interpretation overhead the larger percent of time we should\n> spent in aggregate function.\n> May be the whole infrastructure of Postgres aggregates adds too large\n> overhead (check_float8_array, function calls,...) and in case of VOPS\n> this overhead is divided by 64.\n>\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n>\n>\n\n-- \nGuang-Nan He",
"msg_date": "Thu, 28 Nov 2019 15:36:02 +0800",
"msg_from": "guangnan he <gnhe2009@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why JIT speed improvement is so modest?"
},
{
"msg_contents": "Hi,\n\nOn 28.11.2019 10:36, guangnan he wrote:\n> Hi,\n> you mean if we don't add new compiler options the compiler will do the \n> loop unrolling using SIMD automatically?\n\nYes, most of modern compiler are doing it.\nGCC requires -O3 option (-O2 is not enough), but clang is using them \neven with -O2.\n\nBut Postgres is using more sophisticated Youngs-Cramer algorithm for \ncalculating SUM/AVG aggregates. And here SIMD instructions do not help much.\nMy original assumption was that huge difference in speed between \nVOPS/ISPRAS JIT and Vanilla JIT can be explained by the difference in \naccumulation algorithm.\n\nThis is why I implemented calculation of AVG in VOPS using Youngs-Cramer \nalgorithm.\nAnd it certainly affect performance: Q1 with SUM aggregates is executed \nby VOPS almost three times faster than with AVG aggregates (700 msec vs. \n2000 msec).\nBut even with Youngs-Cramer algorithm VOPS is 6 times faster than \nstandard Postgres with JIT and 5 times faster than my in-memory storage.\n\n> Beside the function calls, cache miss etc, for VOPS I think the call \n> stack is squeezing too, but the JIT optimize still process rows one by \n> one.\nIf we do not take in account overhead of heap traversal and tuples \npacking then amount of calculations doesn't depend on data model: \nwhether it is vertical or horizontal.\nBy implementing in-memory storage which just keeps unpacked tuples in L2 \nlist in backend's private memory and so doesn't spend time for unpacking \nor visibility checks\nI want to exclude this overhead and reach almost the same speed as VOPS.\nBut it doesn't happen.\n\n\n\n>\n> Konstantin Knizhnik <k.knizhnik@postgrespro.ru \n> <mailto:k.knizhnik@postgrespro.ru>> 于2019年11月28日周四 下午3:08写道:\n>\n>\n>\n> On 27.11.2019 19:05, Tomas Vondra wrote:\n> > On Wed, Nov 27, 2019 at 06:38:45PM +0300, Konstantin Knizhnik wrote:\n> >>\n> >>\n> >> On 25.11.2019 18:24, Merlin Moncure wrote:\n> >>> On Mon, Nov 25, 2019 at 9:09 AM Konstantin 
Knizhnik\n> >>> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>>\n> wrote:\n> >>>> JIT was not able to significantly (times) increase speed on\n> Q1 query?\n> >>>> Experiment with VOPS shows that used aggregation algorithm\n> itself\n> >>>> is not\n> >>>> a bottleneck.\n> >>>> Profile also give no answer for this question.\n> >>>> Any ideas?\n> >>> Well, in the VOPS variant around 2/3 of the time is spent in\n> routines\n> >>> that are obviously aggregation. In the JIT version, it's\n> around 20%.\n> >>> So this suggests that the replacement execution engine is more\n> >>> invasive. I would also guess (!) that the VOPS engine\n> optimizes fewer\n> >>> classes of query plan. ExecScan for example, looks to be\n> completely\n> >>> optimized out VOPS but is still utilized in the JIT engine.\n> >>\n> >> The difference in fraction of time spent in aggregate\n> calculation is\n> >> not so large (2 times vs. 10 times).\n> >> I suspected that a lot of time is spent in relation traversal\n> code,\n> >> tuple unpacking and visibility checks.\n> >> To check this hypothesis I have implement in-memory table access\n> >> method which stores tuples in unpacked form and\n> >> doesn't perform any visibility checks at all.\n> >> Results were not so existed. I have to disable parallel execution\n> >> (because it is not possible for tuples stored in backend private\n> >> memory).\n> >> Results are the following:\n> >>\n> >> lineitem: 13736 msec\n> >> inmem_lineitem: 10044 msec\n> >> vops_lineitem: 1945 msec\n> >>\n> >> The profile of inmem_lineitem is the following:\n> >>\n> >> 16.79% postgres postgres [.] float4_accum\n> >> 12.86% postgres postgres [.] float8_accum\n> >> 5.83% postgres postgres [.]\n> TupleHashTableHash.isra.8\n> >> 4.44% postgres postgres [.] lookup_hash_entries\n> >> 3.37% postgres postgres [.] check_float8_array\n> >> 3.11% postgres postgres [.] tuplehash_insert\n> >> 2.91% postgres postgres [.] hash_uint32\n> >> 2.83% postgres postgres [.] 
ExecScan\n> >> 2.56% postgres postgres [.] inmem_getnextslot\n> >> 2.22% postgres postgres [.] FunctionCall1Coll\n> >> 2.14% postgres postgres [.] LookupTupleHashEntry\n> >> 1.95% postgres postgres [.]\n> TupleHashTableMatch.isra.9\n> >> 1.76% postgres postgres [.] pg_detoast_datum\n> >> 1.58% postgres postgres [.] AggCheckCallContext\n> >> 1.57% postgres postgres [.] tts_minimal_clear\n> >> 1.35% postgres perf-3054.map [.] 0x00007f558db60010\n> >> 1.23% postgres postgres [.] fetch_input_tuple\n> >> 1.15% postgres postgres [.] SeqNext\n> >> 1.06% postgres postgres [.] ExecAgg\n> >> 1.00% postgres postgres [.]\n> tts_minimal_store_tuple\n> >>\n> >> So now fraction of time spent in aggregation is increased to\n> 30% (vs.\n> >> 20% for lineitem and 42% for vops_lineitem).\n> >> Looks like the main bottleneck now is hashagg. VOPS is\n> accessing hash\n> >> about 10 times less (because it accumulates values for the\n> whole tile).\n> >> And it explains still large difference bwtween vops_lineitem and\n> >> inmem_lineitem.\n> >>\n> >> If we remove aggregation and rewrite Q1 query as:\n> >> select\n> >> avg(l_quantity) as sum_qty,\n> >> avg(l_extendedprice) as sum_base_price,\n> >> avg(l_extendedprice*(1-l_discount)) as sum_disc_price,\n> >> avg(l_extendedprice*(1-l_discount)*(1+l_tax)) as sum_charge,\n> >> avg(l_quantity) as avg_qty,\n> >> avg(l_extendedprice) as avg_price,\n> >> avg(l_discount) as avg_disc,\n> >> count(*) as count_order\n> >> from\n> >> inmem_lineitem\n> >> where\n> >> l_shipdate <= '1998-12-01';\n> >>\n> >> then results are the following:\n> >> lineitem: 9805 msec\n> >> inmem_lineitem: 6257 msec\n> >> vops_lineitem: 1865 msec\n> >>\n> >> and now profile of inmem_lineitem is:\n> >>\n> >> 25.27% postgres postgres [.] float4_accum\n> >> 21.86% postgres postgres [.] float8_accum\n> >> 5.49% postgres postgres [.] check_float8_array\n> >> 4.57% postgres postgres [.] ExecScan\n> >> 2.61% postgres postgres [.] 
AggCheckCallContext\n> >> 2.30% postgres postgres [.] pg_detoast_datum\n> >> 2.10% postgres postgres [.] inmem_getnextslot\n> >> 1.81% postgres postgres [.] SeqNext\n> >> 1.73% postgres postgres [.] fetch_input_tuple\n> >> 1.61% postgres postgres [.] ExecAgg\n> >> 1.23% postgres postgres [.] MemoryContextReset\n> >>\n> >> But still more than 3 times difference with VOPS!\n> >> Something is wrong here...\n> >>\n> >\n> > I have no idea what VOPS does, but IIRC one of the bottlenecks\n> compared\n> > to various column stores is our iterative execution model, which\n> makes\n> > it difficult/imposible to vectorize operations. That's likely\n> why the\n> > accum functions are so high in the CPU profile.\n> >\n> > regards\n> >\n>\n> VOPS is doing very simple thing: it replaces scala types with vector\n> (tiles) and define all standard operations for them.\n> Also it provides Postgres aggregate for this types.\n> So while for normal Postgres table, the query\n>\n> select sum(x) from T;\n>\n> calls float4_accum for each row of T, the same query in VOPS will\n> call\n> vops_float4_avg_accumulate for each tile which contains 64 elements.\n> So vops_float4_avg_accumulate is called 64 times less than\n> float4_accum.\n> And inside it contains straightforward loop:\n>\n> for (i = 0; i < TILE_SIZE; i++) {\n> sum += opd->payload[i];\n> }\n>\n> which can be optimized by compiler (loop unrolling, use of SIMD\n> instructions,...).\n> So no wonder that VOPS is faster than Postgres executor.\n> But Postgres now contains JIT and it is used in this case.\n> So interpretation overhead of executor should be mostly eliminated\n> by JIT.\n> In theory, perfect JIT code should process rows of horizontal data\n> model\n> at the same speed as vector executor processing columns of\n> vertical data\n> model.\n> Vertical model provides signficatn advantages when a query affect\n> only\n> small fraction of rows.\n> But in case of Q1 we are calculating 8 aggregates for just 4 columns.\n> And 
inmem_lineitem is actually projection of original lineitem table\n> containing only columns needed for this query.\n> So amount of fetched data in this case is almost the same for\n> horizontal\n> and vertical data models.\n> Effects of CPU caches should not also play significant role in\n> this case.\n> That is why it is not quite clear to me why there is still big\n> difference (3 times) between VOPS and in-memory table and not so\n> large\n> difference between normal and in-memory tables.\n>\n> Concerning large percent spent in accumulate function - I do not\n> agree\n> with you. What this query is actually doing is just calculating\n> aggregates.\n> The less is interpretation overhead the larger percent of time we\n> should\n> spent in aggregate function.\n> May be the whole infrastructure of Postgres aggregates adds too large\n> overhead (check_float8_array, function calls,...) and in case of VOPS\n> this overhead is divided by 64.\n>\n>\n> -- \n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n>\n>\n>\n> -- \n> Guang-Nan He\n>\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 28 Nov 2019 12:25:14 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Why JIT speed improvement is so modest?"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-25 18:09:29 +0300, Konstantin Knizhnik wrote:\n> I wonder why even at this query, which seems to be ideal use case for JIT,\n> we get such modest improvement?\n\nI think there's a number of causes:\n\n1) There's bottlenecks elsewhere:\n - The order of sequential scan memory accesses is bad\n https://www.postgresql.org/message-id/20161030073655.rfa6nvbyk4w2kkpk%40alap3.anarazel.de\n\n In my experiments, fixing that yields larger JIT improvements,\n because less time is spent stalling due to cache misses during\n tuple deforming (needing the tuple's natts at the start prevents\n out-of-order from hiding the relevant latency).\n\n\n - The transition function for floating point aggregates is pretty\n expensive. In particular, we compute the full youngs-cramer stuff\n for sum/avg, even though they aren't actually needed there. This\n has become measurably worse with\n https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=e954a727f0c8872bf5203186ad0f5312f6183746\n In this case it's complicated enough apparently that the transition\n functions are too expensive to inline.\n\n - float4/8_accum use arrays to store the transition state. That's\n noticeably more expensive than just accessing a struct, partially\n because more checks need to be done. We really should move most,\n if not all, aggregates that use array transition states to\n \"internal\" type transition states. Probably with some reusable\n helpers to make it easier to write serialization / deserialization\n functions so we can continue to allow parallelism.\n\n - The per-row overhead on lower levels of the query is\n significant. E.g. in your profile the\n HeapTupleSatisfiesVisibility() calls (you'd get largely rid of this\n by freezing), and the hashtable overhead is quite noticeable. JITing\n expression eval doesn't fix that.\n\n ...\n\n\n2) The code generated for JIT isn't that good. 
In particular, the\n external memory references included in the generated code limit the\n optimization potential quite substantially. There's also quite some\n (not just JIT) improvement potential related to the aggregation code,\n simplifying the generated expressions.\n\n See https://www.postgresql.org/message-id/20191023163849.sosqbfs5yenocez3%40alap3.anarazel.de\n for my attempt at improving the situation. It does measurably\n improve the situation for Q1, while still leaving a lot of further\n improvements to be done. You'd be more than welcome to review some\n of that!\n\n\n3) Plenty of crucial code is not JITed, even when expression\n related. Most crucial for Q1 is the fact that the hash computation\n for aggregates isn't JITed as a whole - when looking at hierarchical\n profiles, we spend about 1/3 of the whole query time within\n TupleHashTable*.\n\n4) The currently required forming / deforming of tuples into minimal\n tuples when storing them in the hashagg table is *expensive*.\n\n We can address that partially by computing NOT NULL information for\n the tupledesc used for the hashtable (which will make JITed tuple\n deforming considerably faster, because it'll just be a reference to\n a hardcoded offset).\n\n We can also simplify the minimal tuple representation - historically\n it looks the way it does now because we needed minimal tuples to be\n largely compatible with heap tuples - but we don't anymore. Even just\n removing the weird offset math we do for minimal tuples would be\n beneficial, but I think we can do more than that.\n\n\n\n> Vitesse DB reports 8x speedup on Q1,\n> ISP-RAS JIT version provides 3x speedup of Q1:\n\nI think those measurements were done before a lot of generic\nimprovements to aggregation speed were done. E.g. Q1 performance\nimproved significantly due to the new expression evaluation engine, even\nwithout JIT. 
Because the previous tree-walking expression evaluation was\nso slow for many things, JITing that away obviously yielded bigger\nimprovements than it does now.\n\n\n> VOPS provides 10x improvement of Q1.\n\nMy understanding of VOPS is that it ferries around more than one tuple\nat a time. And avoids a lot of generic code paths. So that just doesn't\nseem a meaningful comparison.\n\n\n> In theory by elimination of interpretation overhead JIT should provide\n> performance comparable with vectorized executor.\n\nI don't think that's true at all. Vectorized execution, which I assume\nto mean dealing with more than one tuple at a time, is largely\northogonal to the way expressions are evaluated. The reason that\nvectorized execution is good is that it drastically increases cache\nlocality (by performing work that accesses related data, e.g. a buffer\npage, in a tight loop, without a lot of other work happening in between),\nthat it increases the benefits of out of order execution (by removing\ndependencies, as e.g. predicates for multiple tuples can be computed,\nwithout a separate dependency on the result for each predicate\nevaluation), etc.\n\nJIT compiled expression evaluation cannot get you these benefits.\n\n\n> In most programming languages using JIT compiler instead of byte-code\n> interpreter provides about 10x speed improvement.\n\nBut that's with low level bytecode execution, whereas expression\nevaluation uses relatively coarse ops (sometimes called \"super\"\nopcodes).\n\n\n\n> Below are tops of profiles (functions with more than 1% of time):\n>\n> JIT:\n\nNote that just looking at a plain profile, without injecting information\nabout the JITed code, will yield misleading results. 
Without the\nadditional information perf will not be able to group the instructions\nof the JITed code sampled to a function, leading to them each being\nlisted separately.\n\nIf you enable jit_profiling_support, and measure with\n\nperf record -k 1 -o /tmp/perf.data -p 22950\n(optionally with --call-graph lbr)\nyou then can inject the information about JITed code:\nperf inject -v --jit -i /tmp/perf.data -o /tmp/perf.jit.data\nand look at the result of that with\nperf report -i /tmp/perf.jit.data\n\n\n>  10.98%  postgres  postgres            [.] float4_accum\n>   8.40%  postgres  postgres            [.] float8_accum\n>   7.51%  postgres  postgres            [.] HeapTupleSatisfiesVisibility\n>   5.92%  postgres  postgres            [.] ExecInterpExpr\n>   5.63%  postgres  postgres            [.] tts_minimal_getsomeattrs\n\nThe fact that ExecInterpExpr, tts_minimal_getsomeattrs show up\nsignificantly suggests that you're running a slightly older build,\nwithout a few bugfixes. Could that be true?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 4 Dec 2019 11:43:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why JIT speed improvement is so modest?"
},
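The Youngs-Cramer point above (the transition computes a full variance-capable state even when sum/avg only need N and Sx) can be illustrated with a short sketch. This is a Python rendition of the update that float8_accum performs, assuming the PG12-era formulation of the algorithm; it is illustrative only, not the actual C implementation:

```python
import math
import statistics

def yc_accum(state, newval):
    """One Youngs-Cramer transition step, sketching what float8_accum
    does: track N, Sx (sum) and Sxx (sum of squared deviations)."""
    n, sx, sxx = state
    n += 1.0
    sx += newval
    if n > 1.0:
        tmp = newval * n - sx
        sxx += tmp * tmp / (n * (n - 1.0))
    return (n, sx, sxx)

values = [1.0, 2.0, 4.0, 8.0]
state = (0.0, 0.0, 0.0)
for v in values:
    state = yc_accum(state, v)
n, sx, sxx = state

# sum/avg only need N and Sx; Sxx is computed but unused for them.
assert sx == sum(values)
assert sx / n == statistics.mean(values)
# Sxx is needed only for stddev/variance aggregates.
assert math.isclose(sxx / (n - 1.0), statistics.variance(values))
```

The per-row division in the Sxx update is exactly the work that sum/avg pay for without using.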
{
"msg_contents": "Hi,\n\nThank you for your replay and explanations.\nMy comments are inside.\n\nOn 04.12.2019 22:43, Andres Freund wrote:\n> Hi,\n>\n> On 2019-11-25 18:09:29 +0300, Konstantin Knizhnik wrote:\n>> I wonder why even at this query, which seems to be ideal use case for JIT,\n>> we get such modest improvement?\n> I think there's a number of causes:\n>\n> 1) There's bottlenecks elsewhere:\n> - The order of sequential scan memory accesses is bad\n> https://www.postgresql.org/message-id/20161030073655.rfa6nvbyk4w2kkpk%40alap3.anarazel.de\n>\n> In my experiments, fixing that yields larger JIT improvements,\n> because less time is spent stalling due to cache misses during\n> tuple deforming (needing the tuple's natts at the start prevents\n> out-of-order from hiding the relevant latency).\n\nThis is why I have implemented my own in-memory table access method.\nIt stores tuples in unpacked format so there should be no tuple \ndeforming overhead.\nBy the way if somebody is interested (mostly for experiments, I do not \nthing that it in the current state it has some practival meaning)\nmy in-memory storage implementation is here:\n\nhttps://github.com/postgrespro/postgresql.builtin_pool/tree/inmem_am\n\n>\n>\n> - The transition function for floating point aggregates is pretty\n> expensive. In particular, we compute the full youngs-cramer stuff\n> for sum/avg, even though they aren't actually needed there. This\n> has become measurably worse with\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=e954a727f0c8872bf5203186ad0f5312f6183746\n> In this case it's complicated enough apparently that the transition\n> functions are too expensive to inline.\n>\n> - float4/8_accum use arrays to store the transition state. That's\n> noticably more expensive than just accessing a struct, partially\n> because more checks needs to be done. 
We really should move most,\n> if not all, aggregates that use array transition states to\n> \"internal\" type transition states. Probably with some reusable\n> helpers to make it easier to write serialization / deserialization\n> functions so we can continue to allow parallelism.\n\nYes, it is true.\nThe profile shows that packing/unpacking the transition state takes a substantial \namount of time.\nBut the youngs-cramer stuff is not used for the SUM aggregate! It uses \nfloat4pl for accumulation and float4 as the transition type.\nWhen I replace AVG with SUM, query execution time with in-mem storage \ndecreases from 6 seconds to 4 seconds.\nBut in VOPS the improvement was even larger: 700 msec vs. 1865 msec. So the \ngap in performance is even larger.\n\nAnd the profile shows that the aggregate infrastructure overhead disappears:\n\n 8.82% postgres postgres [.] ExecScan\n 4.60% postgres postgres [.] fetch_input_tuple\n 4.51% postgres postgres [.] inmem_getnextslot\n 3.48% postgres postgres [.] SeqNext\n 3.13% postgres postgres [.] ExecAgg\n 2.78% postgres postgres [.] MemoryContextReset\n 1.77% postgres perf-27660.map [.] 0x00007fcb81ae9032\n 1.52% postgres perf-27660.map [.] 0x00007fcb81ae9a4a\n 1.50% postgres perf-27660.map [.] 0x00007fcb81ae9fa2\n 1.49% postgres perf-27660.map [.] 0x00007fcb81ae9dcd\n 1.44% postgres perf-27660.map [.] 0x00007fcb81aea205\n 1.42% postgres perf-27660.map [.] 0x00007fcb81ae9072\n 1.31% postgres perf-27660.map [.] 0x00007fcb81ae9a56\n 1.22% postgres perf-27660.map [.] 0x00007fcb81ae9df1\n 1.22% postgres perf-27660.map [.] 0x00007fcb81aea225\n 1.21% postgres perf-27660.map [.] 0x00007fcb81ae93e6\n 1.21% postgres perf-27660.map [.] 0x00007fcb81ae9fae\n 1.19% postgres perf-27660.map [.] 0x00007fcb81ae9c83\n 1.12% postgres perf-27660.map [.] 0x00007fcb81ae9e5b\n 1.12% postgres perf-27660.map [.] 0x00007fcb81ae9c5f\n 1.05% postgres perf-27660.map [.] 0x00007fcb81ae9010\n 1.05% postgres perf-27660.map [.] 
0x00007fcb81ae987b\n\nAs far as I understand, the positions in the profile starting from the 7th \ncorrespond to JIT code.\n\n\n> - The per-row overhead on lower levels of the query is\n> significant. E.g. in your profile the\n> HeapTupleSatisfiesVisibility() calls (you'd get largely rid of this\n> by freezing), and the hashtable overhead is quite noticeable. JITing\n> expression eval doesn't fix that.\n\nOnce again: my in-memory storage doesn't perform visibility checks.\nThis was the primary idea of my experiment: try to minimize per-row \nstorage overhead and check if JIT can provide performance\ncomparable with a vectorized engine. Unfortunately the answer was \nnegative: the difference with VOPS is more than three times, while the \ndifference between\nthe standard table and the in-memory table is less than 1.5.\n\n>\n> ...\n>\n>\n> 2) The code generated for JIT isn't that good. In particular, the\n> external memory references included in the generated code limit the\n> optimization potential quite substantially. There's also quite some\n> (not just JIT) improvement potential related to the aggregation code,\n> simplifying the generated expressions.\n>\n> See https://www.postgresql.org/message-id/20191023163849.sosqbfs5yenocez3%40alap3.anarazel.de\n> for my attempt at improving the situation. It does measurably\n> improve the situation for Q1, while still leaving a lot of further\n> improvements to be done. You'd be more than welcome to review some\n> of that!\n>\n>\n> 3) Plenty of crucial code is not JITed, even when expression\n> related. 
Most crucial for Q1 is the fact that the hash computation\n> for aggregates isn't JITed as a whole - when looking at hierarchical\n> profiles, we spend about 1/3 of the whole query time within\n> TupleHashTable*.\n> 4) The currently required forming / deforming of tuples into minimal\n> tuples when storing them in the hashagg table is *expensive*.\n>\n> We can address that partially by computing NOT NULL information for\n> the tupledesc used for the hashtable (which will make JITed tuple\n> deforming considerably faster, because it'll just be a reference to\n> a hardcoded offset).\n>\n> We can also simplify the minimal tuple representation - historically\n> it looks the way it does now because we needed minimal tuples to be\n> largely compatible with heap tuples - but we don't anymore. Even just\n> removing the weird offset math we do for minimal tuples would be\n> beneficial, but I think we can do more than that.\n>\n\nYes, this is the first thing I have noticed. VOPS calls the hash \nfunction only once per 64 rows - 64 times less often than row storage.\nThis is why VOPS is 6 times faster on Q1 than vanilla postgres and 5 \ntimes faster than my in-memory storage.\nAnd this is why I removed aggregation from Q1 and just calculated grand \naggregates.\n\n>\n>> VOPS provides 10x improvement of Q1.\n> My understanding of VOPS is that it ferries around more than one tuple\n> at a time. And avoids a lot of generic code paths. So that just doesn't\n> seem a meaningful comparison.\n\nVOPS is just one example of a vectorized executor.\nIt is possible to implement the things VOPS does using \ncustomized types\nand custom nodes, as in Hubert Zhang's prototype:\nhttps://www.postgresql.org/message-id/flat/CAB0yrenxJ3FcmnLs8JqpEG3tzSZ%3DOL1MZBUh3v6dgH%2Bo70GTFA%40mail.gmail.com#df50bbf3610dc2f42cb9b54423a22111\n\n\n>> In theory by elimination of interpretation overhead JIT should provide\n>> performance comparable with vectorized executor.\n> I don't think that's true at all. 
Vectorized execution, which I assume\n> to mean dealing with more than one tuple at a time, is largely\n> orthogonal to the way expressions are evaluated. The reason that\n> vectorized execution is good is that it drastically increases cache\n> locality (by performing work that accesses related data, e.g. a buffer\n> page, in a tight loop, without a lot of other work happening in between),\n> that it increases the benefits of out of order execution (by removing\n> dependencies, as e.g. predicates for multiple tuples can be computed,\n> without a separate dependency on the result for each predicate\n> evaluation), etc.\n>\n> JIT compiled expression evaluation cannot get you these benefits.\n\nYes, I know these arguments.\nBut please look: in Q1, the lineitem projection has just 4 \nfloat4 attributes, and we calculate 7 aggregates over them.\nIt seems to me that in this case the CPU cache will be used even more \nefficiently with horizontal calculation.\nAt least if you implement the corresponding query in C, the version working \nwith an array of structs will be almost two times\nfaster than the version working with vertical arrays.\n\n\n>\n> Note that just looking at a plain profile, without injecting information\n> about the JITed code, will yield misleading results. 
Without the\n> additional information perf will not be able to group the instructions\n> of the JITed code sampled to a function, leading to them each being\n> listed separately.\n>\n> If you enable jit_profiling_support, and measure with\n>\n> perf record -k 1 -o /tmp/perf.data -p 22950\n> (optionally with --call-graph lbr)\n> you then can inject the information about JITed code:\n> perf inject -v --jit -i /tmp/perf.data -o /tmp/perf.jit.data\n> and look at the result of that with\n> perf report -i /tmp/perf.jit.data\n>\nSomething is not working properly in my case:\n\nroot@knizhnik:~# perf record -k 1 -o /tmp/perf.data -p 7407\n^C[ perf record: Woken up 2 times to write data ]\n[ perf record: Captured and wrote 0.452 MB /tmp/perf.data (11410 samples) ]\n\nroot@knizhnik:~# perf inject -v --jit -i /tmp/perf.data -o \n/tmp/perf.jit.data\nbuild id event received for [kernel.kallsyms]: \nb1ef0f6204a7ec3f508b9e1536f73521c7b4b41a\nbuild id event received for /home/knizhnik/postgresql/dist/bin/postgres: \n8ef1a41e80f043a56778e265f5badb67f1441b61\nbuild id event received for [vdso]: b13824592e1e837368d92991b72a19437dc86a27\nLooking at the vmlinux_path (8 entries long)\nsymsrc__init: cannot get elf header.\nUsing /proc/kcore for kernel object code\nUsing /proc/kallsyms for symbols\nUsing CPUID GenuineIntel-6-3C\nroot@knizhnik:~# perf report -i /tmp/perf.jit.data\n\n 7.37% postgres postgres [.] ExecScan\n 7.23% postgres postgres [.] inmem_getnextslot\n 4.79% postgres postgres [.] fetch_input_tuple\n 4.07% postgres postgres [.] SeqNext\n 3.52% postgres postgres [.] ExecAgg\n 2.68% postgres postgres [.] MemoryContextReset\n 1.62% postgres perf-7407.map [.] 0x00007f4591c95f02\n 1.50% postgres perf-7407.map [.] 0x00007f4591c95d2d\n...\n> The fact that ExecInterpExpr, tts_minimal_getsomeattrs show up\n> significantly suggests that you're running a slightly older build,\n> without a few bugfixes. 
Could that be true?\nI forked my branch from your commit from 27 November \n(ca266a069a20c32a8f0a1df982a5a67d9483bcb3).\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Thu, 5 Dec 2019 11:47:34 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Why JIT speed improvement is so modest?"
},
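The tile-at-a-time behaviour described above (VOPS paying the per-call aggregate overhead once per 64 rows instead of once per row) can be sketched briefly. This is a hedged Python illustration, not the VOPS C code; in C the tight inner loop is what the compiler can unroll or vectorize:

```python
TILE_SIZE = 64  # VOPS tile size: one transition call covers 64 rows

def accumulate_tile(sum_so_far, tile):
    # One "transition call" handles a whole tile; the tight inner
    # loop is the part a C compiler can unroll / turn into SIMD.
    for v in tile:
        sum_so_far += v
    return sum_so_far

rows = [float(i) for i in range(1000)]

# Row-at-a-time: one accumulation step per row (1000 steps).
row_sum = 0.0
for r in rows:
    row_sum += r

# Tile-at-a-time: one call per 64 rows (16 calls here).
tile_sum = 0.0
calls = 0
for start in range(0, len(rows), TILE_SIZE):
    tile_sum = accumulate_tile(tile_sum, rows[start:start + TILE_SIZE])
    calls += 1

assert tile_sum == row_sum == sum(rows)
assert calls == -(-len(rows) // TILE_SIZE)  # ceil(1000 / 64) == 16
```

The results are identical; only the number of per-call entries into the aggregate machinery (and thus the fixed overhead) changes.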
{
"msg_contents": "On Thu, Nov 28, 2019 at 2:08 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> calls float4_accum for each row of T, the same query in VOPS will call\n> vops_float4_avg_accumulate for each tile which contains 64 elements.\n> So vops_float4_avg_accumulate is called 64 times less than float4_accum.\n> And inside it contains straightforward loop:\n>\n> for (i = 0; i < TILE_SIZE; i++) {\n> sum += opd->payload[i];\n> }\n>\n> which can be optimized by compiler (loop unrolling, use of SIMD\n> instructions,...).\n\nPart of the reason why the compiler can optimize that so well is\nprobably related to the fact that it includes no overflow checks.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 6 Dec 2019 10:53:44 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why JIT speed improvement is so modest?"
},
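The overflow checks Robert refers to can be made concrete with a rough Python rendition of the float8pl-style checked addition (modelled loosely on Postgres's check_float8_val logic: error out when the result overflows to infinity although neither input was infinite). The per-element branch below is exactly what the unchecked VOPS loop omits:

```python
import math

def checked_add(a, b):
    """Addition with a float8pl-style overflow check (rough sketch of
    Postgres's check_float8_val behaviour, not the actual C code)."""
    result = a + b
    if math.isinf(result) and not (math.isinf(a) or math.isinf(b)):
        raise OverflowError("value out of range: overflow")
    return result

def checked_sum(values):
    total = 0.0
    for v in values:
        # one branch per element - the check the tight VOPS loop skips
        total = checked_add(total, v)
    return total

assert checked_sum([1.0, 2.0, 3.5]) == 6.5

try:
    checked_sum([1.7e308, 1.7e308])  # exceeds double range -> inf
except OverflowError as e:
    print("caught:", e)
```

The branch (and the resulting loss of vectorizability) is the cost of detecting overflow per operation rather than leaving it unchecked.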
{
"msg_contents": "\n\nOn 06.12.2019 18:53, Robert Haas wrote:\n> On Thu, Nov 28, 2019 at 2:08 AM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> calls float4_accum for each row of T, the same query in VOPS will call\n>> vops_float4_avg_accumulate for each tile which contains 64 elements.\n>> So vops_float4_avg_accumulate is called 64 times less than float4_accum.\n>> And inside it contains straightforward loop:\n>>\n>> for (i = 0; i < TILE_SIZE; i++) {\n>> sum += opd->payload[i];\n>> }\n>>\n>> which can be optimized by compiler (loop unrolling, use of SIMD\n>> instructions,...).\n> Part of the reason why the compiler can optimize that so well is\n> probably related to the fact that it includes no overflow checks.\n\nMay it makes sense to use in aggregate transformation function which is \nnot checking for overflow and perform this check only in final function?\nNaN and Inf values will be preserved in any case...\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Fri, 6 Dec 2019 19:52:15 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Why JIT speed improvement is so modest?"
},
{
"msg_contents": "\n\nOn 06.12.2019 19:52, Konstantin Knizhnik wrote:\n>\n>\n> On 06.12.2019 18:53, Robert Haas wrote:\n>> On Thu, Nov 28, 2019 at 2:08 AM Konstantin Knizhnik\n>> <k.knizhnik@postgrespro.ru> wrote:\n>>> calls float4_accum for each row of T, the same query in VOPS will call\n>>> vops_float4_avg_accumulate for each tile which contains 64 elements.\n>>> So vops_float4_avg_accumulate is called 64 times less than \n>>> float4_accum.\n>>> And inside it contains straightforward loop:\n>>>\n>>> for (i = 0; i < TILE_SIZE; i++) {\n>>> sum += opd->payload[i];\n>>> }\n>>>\n>>> which can be optimized by compiler (loop unrolling, use of SIMD\n>>> instructions,...).\n>> Part of the reason why the compiler can optimize that so well is\n>> probably related to the fact that it includes no overflow checks.\n>\n> May it makes sense to use in aggregate transformation function which \n> is not checking for overflow and perform this check only in final \n> function?\n> NaN and Inf values will be preserved in any case...\n>\nI have tried to comment check_float8_val in float4_pl/float8_pl and get \ncompletely no difference in performance.\n\nBut if I replace query\n\nselect\n sum(l_quantity) as sum_qty,\n sum(l_extendedprice) as sum_base_price,\n sum(l_extendedprice*(1-l_discount)) as sum_disc_price,\n sum(l_extendedprice*(1-l_discount)*(1+l_tax)) as sum_charge,\n sum(l_quantity) as avg_qty,\n sum(l_extendedprice) as avg_price,\n sum(l_discount) as avg_disc,\ncount(*) as count_order\nfrom lineitem_inmem;\n\n\nwith\n\nselect sum(l_quantity + l_extendedprice + l_discount + l_tax) from \nlineitem_inmem;\n\n\nthen time is reduced from 3686 to 1748 msec.\nSo at least half of this time we spend in expression evaluations and \naggregates accumulation.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Fri, 6 Dec 2019 20:18:56 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Why JIT speed improvement is so modest?"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-06 19:52:15 +0300, Konstantin Knizhnik wrote:\n> On 06.12.2019 18:53, Robert Haas wrote:\n> > On Thu, Nov 28, 2019 at 2:08 AM Konstantin Knizhnik\n> > <k.knizhnik@postgrespro.ru> wrote:\n> > > calls float4_accum for each row of T, the same query in VOPS will call\n> > > vops_float4_avg_accumulate for each tile which contains 64 elements.\n> > > So vops_float4_avg_accumulate is called 64 times less than float4_accum.\n> > > And inside it contains straightforward loop:\n> > > \n> > > for (i = 0; i < TILE_SIZE; i++) {\n> > > sum += opd->payload[i];\n> > > }\n> > > \n> > > which can be optimized by compiler (loop unrolling, use of SIMD\n> > > instructions,...).\n\nI still fail to see what this has to do with the subject.\n\n\n> > Part of the reason why the compiler can optimize that so well is\n> > probably related to the fact that it includes no overflow checks.\n> \n> May it makes sense to use in aggregate transformation function which is not\n> checking for overflow and perform this check only in final function?\n> NaN and Inf values will be preserved in any case...\n\nI mean I personally think it'd be ok to skip the overflow checks for\nfloating point operations, they're not all that useful in practice (if\nnot the opposite). But if you want correct overflow detection behaviour,\nyou cannot just check in the final function, as you cannot discern\nbetween the state where infinity/NaN has been incorporated into the\ntransition state from cases where that has happened due to overflow etc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 6 Dec 2019 09:36:07 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why JIT speed improvement is so modest?"
}
] |
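Andres's objection above - that with an unchecked transition function a check only in the final function cannot distinguish an infinity produced by overflow from an infinity that was present in the input - can be demonstrated with a short sketch (illustrative Python, not the Postgres C code):

```python
import math

def unchecked_sum(values):
    # Transition function with no per-step overflow check, as proposed:
    # overflow silently produces inf in the transition state.
    total = 0.0
    for v in values:
        total += v
    return total

# Case 1: infinity arises from overflow of finite inputs.
from_overflow = unchecked_sum([1.7e308, 1.7e308])
# Case 2: infinity was a legitimate input value.
from_input = unchecked_sum([math.inf, 1.0])

# By the time a final function runs, the two states are identical,
# so a final-only check cannot tell overflow apart from an Inf input.
assert math.isinf(from_overflow) and math.isinf(from_input)
assert from_overflow == from_input
```

Correct overflow detection therefore has to happen (in some form) at transition time, which is why the per-operation check exists.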
[
{
"msg_contents": "Hi,\n\nWe are still on the process to migrate our applications from proprietary RDBMS to PostgreSQL.\n\nHere is a simple query executed on various systems (real query is different but this one does not need any data) :\n\n\nConnected to:\n\nOracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production\n\nVersion 19.3.0.0.0\n\n\n\nSQL> select count(*) from (select 1 from dual where 0=1 group by grouping sets(())) tmp;\n\n\n\n COUNT(*)\n\n----------\n\n 0\n\n\n\n\n\nselect @@version;\n\nGO\n\n\n\n--------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------\n\nMicrosoft SQL Server 2017 (RTM-CU16) (KB4508218) - 14.0.3223.3 (X64)\n\n Jul 12 2019 17:43:08\n\n Copyright (C) 2017 Microsoft Corporation\n\n Developer Edition (64-bit) on Linux (Ubuntu 16.04.6 LTS)\n\n\n\nselect count(*) from (select 1 as c1 where 0=1 group by grouping sets(())) tmp;\n\nGO\n\n\n\n-----------\n\n 0\n\n\n\n(1 rows affected)\n\n\n\n\n\nselect version();\n\n version\n\n----------------------------------------------------------------------------------------------------------------\n\nPostgreSQL 11.5 (Debian 11.5-1+deb10u1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit\n\n\n\n\n\n\n\nselect count(*) from (select 1 from dual where 0=1 group by grouping sets(())) tmp;\n\ncount\n\n-------\n\n 1\n\n(1 ligne)\n\n\n\n\n\n0 or 1, which behaviour conforms to the SQL standard ? 
We have a workaround and it's just informational.\n\n\nRegards,\n\n\nPhil",
"msg_date": "Mon, 25 Nov 2019 19:31:42 +0000",
"msg_from": "Phil Florent <philflorent@hotmail.com>",
"msg_from_op": true,
"msg_subject": "GROUPING SETS and SQL standard"
},
{
"msg_contents": "po 25. 11. 2019 v 20:32 odesílatel Phil Florent <philflorent@hotmail.com>\nnapsal:\n\n> Hi,\n>\n> We are still on the process to migrate our applications from proprietary\n> RDBMS to PostgreSQL.\n>\n> Here is a simple query executed on various systems (real query is\n> different but this one does not need any data) :\n>\n> Connected to:\n>\n> Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production\n>\n> Version 19.3.0.0.0\n>\n>\n>\n> SQL> select count(*) from (select 1 from dual where 0=1 group by grouping\n> sets(())) tmp;\n>\n>\n>\n> COUNT(*)\n>\n> ----------\n>\n> 0\n>\n>\n>\n>\n>\n> select @@version;\n>\n> GO\n>\n>\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------\n> ---------------------------------------------------------------------------------------------------------------------------\n> ------------------------------------------------------\n>\n> Microsoft SQL Server 2017 (RTM-CU16) (KB4508218) - 14.0.3223.3 (X64)\n>\n> Jul 12 2019 17:43:08\n>\n> Copyright (C) 2017 Microsoft Corporation\n>\n> Developer Edition (64-bit) on Linux (Ubuntu 16.04.6 LTS)\n>\n>\n>\n> select count(*) from (select 1 as c1 where 0=1 group by grouping sets(()))\n> tmp;\n>\n> GO\n>\n>\n>\n> -----------\n>\n> 0\n>\n>\n>\n> (1 rows affected)\n>\n>\n>\n>\n>\n> select version();\n>\n> version\n>\n>\n> ----------------------------------------------------------------------------------------------------------------\n>\n> PostgreSQL 11.5 (Debian 11.5-1+deb10u1) on x86_64-pc-linux-gnu, compiled\n> by gcc (Debian 8.3.0-6) 8.3.0, 64-bit\n>\n>\n>\n>\n>\n>\n>\n> select count(*) from (select 1 from dual where 0=1 group by grouping\n> sets(())) tmp;\n>\n> count\n>\n> -------\n>\n> 1\n>\n> (1 ligne)\n>\n>\n>\n>\n>\n> 0 or 1, which behaviour conforms to the SQL standard ? 
We have a\n> workaround and it's just informational.\n>\n\nThis example has not too much sense - I am not sure if these corner cases\nare described by ANSI SQL standards.\n\nIf I add aggregate query to subquery - using grouping sets without\naggregation function is strange, then Postgres result looks more correct\n\npostgres=# select 1, count(*) from dual group by grouping sets(());\n┌──────────┬───────┐\n│ ?column? │ count │\n╞══════════╪═══════╡\n│ 1 │ 1 │\n└──────────┴───────┘\n(1 row)\n\npostgres=# select 1, count(*) from dual where false group by grouping\nsets(());\n┌──────────┬───────┐\n│ ?column? │ count │\n╞══════════╪═══════╡\n│ 1 │ 0 │\n└──────────┴───────┘\n(1 row)\n\nSELECT count(*) from this should be one in both cases.\n\nI am not sure, if standard describe using grouping sets without any\naggregation function\n\nPavel\n\n>\n> Regards,\n>\n> Phil\n>",
"msg_date": "Mon, 25 Nov 2019 21:23:12 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GROUPING SETS and SQL standard"
},
{
"msg_contents": "Hi,\r\nThank you, as you mentionned it's not really an interesting real life case anyway.\r\nRegards,\r\nPhil\r\n\r\n________________________________\r\nDe : Pavel Stehule <pavel.stehule@gmail.com>\r\nEnvoyé : lundi 25 novembre 2019 21:23\r\nÀ : Phil Florent <philflorent@hotmail.com>\r\nCc : pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\r\nObjet : Re: GROUPING SETS and SQL standard\r\n\r\n\r\n\r\npo 25. 11. 2019 v 20:32 odesílatel Phil Florent <philflorent@hotmail.com<mailto:philflorent@hotmail.com>> napsal:\r\nHi,\r\n\r\nWe are still on the process to migrate our applications from proprietary RDBMS to PostgreSQL.\r\n\r\nHere is a simple query executed on various systems (real query is different but this one does not need any data) :\r\n\r\n\r\nConnected to:\r\n\r\nOracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production\r\n\r\nVersion 19.3.0.0.0\r\n\r\n\r\n\r\nSQL> select count(*) from (select 1 from dual where 0=1 group by grouping sets(())) tmp;\r\n\r\n\r\n\r\n COUNT(*)\r\n\r\n----------\r\n\r\n 0\r\n\r\n\r\n\r\n\r\n\r\nselect @@version;\r\n\r\nGO\r\n\r\n\r\n\r\n--------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------\r\n\r\nMicrosoft SQL Server 2017 (RTM-CU16) (KB4508218) - 14.0.3223.3 (X64)\r\n\r\n Jul 12 2019 17:43:08\r\n\r\n Copyright (C) 2017 Microsoft Corporation\r\n\r\n Developer Edition (64-bit) on Linux (Ubuntu 16.04.6 LTS)\r\n\r\n\r\n\r\nselect count(*) from (select 1 as c1 where 0=1 group by grouping sets(())) tmp;\r\n\r\nGO\r\n\r\n\r\n\r\n-----------\r\n\r\n 0\r\n\r\n\r\n\r\n(1 rows affected)\r\n\r\n\r\n\r\n\r\n\r\nselect version();\r\n\r\n 
version\r\n\r\n----------------------------------------------------------------------------------------------------------------\r\n\r\nPostgreSQL 11.5 (Debian 11.5-1+deb10u1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nselect count(*) from (select 1 from dual where 0=1 group by grouping sets(())) tmp;\r\n\r\ncount\r\n\r\n-------\r\n\r\n 1\r\n\r\n(1 ligne)\r\n\r\n\r\n\r\n\r\n\r\n0 or 1, which behaviour conforms to the SQL standard ? We have a workaround and it's just informational.\r\n\r\nThis example has not too much sense - I am not sure if these corner cases are described by ANSI SQL standards.\r\n\r\nIf I add aggregate query to subquery - using grouping sets without aggregation function is strange, then Postgres result looks more correct\r\n\r\npostgres=# select 1, count(*) from dual group by grouping sets(());\r\n┌──────────┬───────┐\r\n│ ?column? │ count │\r\n╞══════════╪═══════╡\r\n│ 1 │ 1 │\r\n└──────────┴───────┘\r\n(1 row)\r\n\r\npostgres=# select 1, count(*) from dual where false group by grouping sets(());\r\n┌──────────┬───────┐\r\n│ ?column? │ count │\r\n╞══════════╪═══════╡\r\n│ 1 │ 0 │\r\n└──────────┴───────┘\r\n(1 row)\r\n\r\nSELECT count(*) from this should be one in both cases.\r\n\r\nI am not sure, if standard describe using grouping sets without any aggregation function\r\n\r\nPavel\r\n\r\n\r\nRegards,\r\n\r\n\r\nPhil",
"msg_date": "Mon, 25 Nov 2019 21:18:00 +0000",
"msg_from": "Phil Florent <philflorent@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: GROUPING SETS and SQL standard"
},
{
"msg_contents": "A <grouping specification> of () (called grand total in the Standard) is equivalent to grouping the entire result Table;\r\n\r\nIf I get it correctly:\r\n\r\nselect max(dummy) from dual where 0 = 1 group by grouping sets(());\r\n\r\nand\r\n\r\nselect max(dummy) from dual where 0 = 1 ;\r\n\r\nshould have the same output.\r\n\r\nIt's the case with PostgreSQL, not with Oracle.\r\nHence it means it's PostgreSQL which conforms to the standard in this case.\r\n\r\nRegards,\r\nPhil\r\n\r\n________________________________\r\nDe : Phil Florent <philflorent@hotmail.com>\r\nEnvoyé : lundi 25 novembre 2019 22:18\r\nÀ : Pavel Stehule <pavel.stehule@gmail.com>\r\nCc : pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\r\nObjet : RE: GROUPING SETS and SQL standard\r\n\r\nHi,\r\nThank you, as you mentionned it's not really an interesting real life case anyway.\r\nRegards,\r\nPhil\r\n\r\n________________________________\r\nDe : Pavel Stehule <pavel.stehule@gmail.com>\r\nEnvoyé : lundi 25 novembre 2019 21:23\r\nÀ : Phil Florent <philflorent@hotmail.com>\r\nCc : pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\r\nObjet : Re: GROUPING SETS and SQL standard\r\n\r\n\r\n\r\npo 25. 11. 
2019 v 20:32 odesílatel Phil Florent <philflorent@hotmail.com<mailto:philflorent@hotmail.com>> napsal:\r\nHi,\r\n\r\nWe are still on the process to migrate our applications from proprietary RDBMS to PostgreSQL.\r\n\r\nHere is a simple query executed on various systems (real query is different but this one does not need any data) :\r\n\r\n\r\nConnected to:\r\n\r\nOracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production\r\n\r\nVersion 19.3.0.0.0\r\n\r\n\r\n\r\nSQL> select count(*) from (select 1 from dual where 0=1 group by grouping sets(())) tmp;\r\n\r\n\r\n\r\n COUNT(*)\r\n\r\n----------\r\n\r\n 0\r\n\r\n\r\n\r\n\r\n\r\nselect @@version;\r\n\r\nGO\r\n\r\n\r\n\r\n--------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------\r\n\r\nMicrosoft SQL Server 2017 (RTM-CU16) (KB4508218) - 14.0.3223.3 (X64)\r\n\r\n Jul 12 2019 17:43:08\r\n\r\n Copyright (C) 2017 Microsoft Corporation\r\n\r\n Developer Edition (64-bit) on Linux (Ubuntu 16.04.6 LTS)\r\n\r\n\r\n\r\nselect count(*) from (select 1 as c1 where 0=1 group by grouping sets(())) tmp;\r\n\r\nGO\r\n\r\n\r\n\r\n-----------\r\n\r\n 0\r\n\r\n\r\n\r\n(1 rows affected)\r\n\r\n\r\n\r\n\r\n\r\nselect version();\r\n\r\n version\r\n\r\n----------------------------------------------------------------------------------------------------------------\r\n\r\nPostgreSQL 11.5 (Debian 11.5-1+deb10u1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nselect count(*) from (select 1 from dual where 0=1 group by grouping sets(())) tmp;\r\n\r\ncount\r\n\r\n-------\r\n\r\n 1\r\n\r\n(1 ligne)\r\n\r\n\r\n\r\n\r\n\r\n0 or 1, which behaviour conforms to the SQL standard ? 
We have a workaround and it's just informational.\r\n\r\nThis example has not too much sense - I am not sure if these corner cases are described by ANSI SQL standards.\r\n\r\nIf I add aggregate query to subquery - using grouping sets without aggregation function is strange, then Postgres result looks more correct\r\n\r\npostgres=# select 1, count(*) from dual group by grouping sets(());\r\n┌──────────┬───────┐\r\n│ ?column? │ count │\r\n╞══════════╪═══════╡\r\n│ 1 │ 1 │\r\n└──────────┴───────┘\r\n(1 row)\r\n\r\npostgres=# select 1, count(*) from dual where false group by grouping sets(());\r\n┌──────────┬───────┐\r\n│ ?column? │ count │\r\n╞══════════╪═══════╡\r\n│ 1 │ 0 │\r\n└──────────┴───────┘\r\n(1 row)\r\n\r\nSELECT count(*) from this should be one in both cases.\r\n\r\nI am not sure, if standard describe using grouping sets without any aggregation function\r\n\r\nPavel\r\n\r\n\r\nRegards,\r\n\r\n\r\nPhil",
"msg_date": "Tue, 26 Nov 2019 00:16:49 +0000",
"msg_from": "Phil Florent <philflorent@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: GROUPING SETS and SQL standard"
},
{
"msg_contents": "Phil Florent <philflorent@hotmail.com> writes:\n> A <grouping specification> of () (called grand total in the Standard) is equivalent to grouping the entire result Table;\n\nYeah, I believe so. Grouping by no columns is similar to what happens\nif you compute an aggregate with no GROUP BY: the whole table is\ntaken as one group. If the table is empty, the group is empty, but\nthere's still a group --- that's why you get one aggregate output\nvalue, not none, from\n\nregression=# select count(*) from dual where 0 = 1;\n count \n-------\n 0\n(1 row)\n\nThus, in your example, the sub-query should give\n\nregression=# select 1 from dual where 0=1 group by grouping sets(());\n ?column? \n----------\n 1\n(1 row)\n\nand therefore it's correct that\n\nregression=# select count(*) from (select 1 from dual where 0=1 group by grouping sets(())) tmp;\n count \n-------\n 1\n(1 row)\n\nAFAICS, Oracle and SQL Server are getting it wrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Nov 2019 19:39:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GROUPING SETS and SQL standard"
},
{
"msg_contents": "Thank you, it's noticed. Seems Oracle does not like too much \"grouping sets\". We discovered we had more serious \"wrong results\" bugs with this clause in our migration process. Anyway we don't have to maintain a double compatibility and soon it won't be a problem anymore.\nRegards\nPhil\n\n________________________________\nDe : Tom Lane <tgl@sss.pgh.pa.us>\nEnvoyé : mardi 26 novembre 2019 01:39\nÀ : Phil Florent <philflorent@hotmail.com>\nCc : Pavel Stehule <pavel.stehule@gmail.com>; pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\nObjet : Re: GROUPING SETS and SQL standard\n\nPhil Florent <philflorent@hotmail.com> writes:\n> A <grouping specification> of () (called grand total in the Standard) is equivalent to grouping the entire result Table;\n\nYeah, I believe so.  Grouping by no columns is similar to what happens\nif you compute an aggregate with no GROUP BY: the whole table is\ntaken as one group.  If the table is empty, the group is empty, but\nthere's still a group --- that's why you get one aggregate output\nvalue, not none, from\n\nregression=# select count(*) from dual where 0 = 1;\n count\n-------\n     0\n(1 row)\n\nThus, in your example, the sub-query should give\n\nregression=# select 1 from dual where 0=1 group by grouping sets(());\n ?column?\n----------\n        1\n(1 row)\n\nand therefore it's correct that\n\nregression=# select count(*) from (select 1 from dual where 0=1 group by grouping sets(())) tmp;\n count\n-------\n     1\n(1 row)\n\nAFAICS, Oracle and SQL Server are getting it wrong.\n\n                        regards, tom lane",
"msg_date": "Tue, 26 Nov 2019 08:29:26 +0000",
"msg_from": "Phil Florent <philflorent@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: GROUPING SETS and SQL standard"
}
] |
[
{
"msg_contents": "The optimizer cost model usually needs 2 inputs, one is used to represent\ndata distribution and the other one is used to represent the capacity of\nthe hardware, like cpu/io let's call this one as system stats.\n\nIn Oracle database, the system stats can be gathered with\ndbms_stats.gather_system_stats [1] on the running hardware,  In\npostgresql, the value is set on based on experience (user can change the\nvalue as well, but is should be hard to decide which values they should\nuse).  The pg way is not perfect in theory(In practice, it may be good\nenough or not). for example, HDD & SSD have different capacity regards to\nseq_scan_cost/random_page_cost,  cpu cost may also different on different\nhardware as well.\n\nI run into a paper [2] which did some research on dynamic gathering the\nvalues for xxx_cost, looks it is interesting.  However it doesn't provide\nthe code for others to do more research.  before I dive into this, It\nwould be great to hear some suggestion from experts.\n\nso what do you think about this method and have we have some discussion\nabout this before and the result?\n\n[1] https://docs.oracle.com/database/121/ARPLS/d_stats.htm#ARPLS68580\n[2] https://dsl.cds.iisc.ac.in/publications/thesis/pankhuri.pdf\n\nThanks",
"msg_date": "Tue, 26 Nov 2019 08:59:22 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Dynamic gathering the values for seq_page_cost/xxx_cost"
},
{
"msg_contents": "On Tue, Nov 26, 2019 at 08:59:22AM +0800, Andy Fan wrote:\n>The optimizer cost model usually needs 2 inputs, one is used to represent\n>data distribution and the other one is used to represent the capacity of\n>the hardware, like cpu/io let's call this one as system stats.\n>\n>In Oracle database, the system stats can be gathered with\n>dbms_stats.gather_system_stats [1] on the running hardware, In\n>postgresql, the value is set on based on experience (user can change the\n>value as well, but is should be hard to decide which values they should\n>use). The pg way is not perfect in theory(In practice, it may be good\n>enough or not). for example, HDD & SSD have different capacity regards to\n>seq_scan_cost/random_page_cost, cpu cost may also different on different\n>hardware as well.\n>\n>I run into a paper [2] which did some research on dynamic gathering the\n>values for xxx_cost, looks it is interesting. However it doesn't provide\n>the code for others to do more research. before I dive into this, It\n>would be great to hear some suggestion from experts.\n>\n>so what do you think about this method and have we have some discussion\n>about this before and the result?\n>\n\nIMHO it would be great to have a tool that helps with tuning those\nparameters, particularly random_page_cost. I'm not sure how feasible it\nis, though, but if you're willing to do some initial experiments and\nresearch, I think it's worth looking into.\n\nIt's going to be challenging, though, because even random_page_cost=4\nmismatches the \"raw\" characteristics on any existing hardware. On old\ndrives the sequential/random difference is way worse, on SSDs it's about\nright. 
But then again, we know random_page_cost=1.5 or so works mostly\nfine on SSDs, and that's much lower than just raw numbers.\n\nSo it's clearly one thing to measure HW capabilities, and it's another\nthing to conclude what the parameters should be ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 Nov 2019 17:48:21 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dynamic gathering the values for seq_page_cost/xxx_cost"
},
{
"msg_contents": "On Thu, Nov 28, 2019 at 12:48 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Tue, Nov 26, 2019 at 08:59:22AM +0800, Andy Fan wrote:\n> >The optimizer cost model usually needs 2 inputs, one is used to represent\n> >data distribution and the other one is used to represent the capacity of\n> >the hardware, like cpu/io let's call this one as system stats.\n> >\n> >In Oracle database, the system stats can be gathered with\n> >dbms_stats.gather_system_stats [1] on the running hardware, In\n> >postgresql, the value is set on based on experience (user can change the\n> >value as well, but is should be hard to decide which values they should\n> >use). The pg way is not perfect in theory(In practice, it may be good\n> >enough or not). for example, HDD & SSD have different capacity regards\n> to\n> >seq_scan_cost/random_page_cost, cpu cost may also different on different\n> >hardware as well.\n> >\n> >I run into a paper [2] which did some research on dynamic gathering the\n> >values for xxx_cost, looks it is interesting. However it doesn't provide\n> >the code for others to do more research. before I dive into this, It\n> >would be great to hear some suggestion from experts.\n> >\n> >so what do you think about this method and have we have some discussion\n> >about this before and the result?\n> >\n>\n> IMHO it would be great to have a tool that helps with tuning those\n> parameters, particularly random_page_cost. I'm not sure how feasible it\n> is, though, but if you're willing to do some initial experiments and\n> research, I think it's worth looking into.\n>\n> It's going to be challenging, though, because even random_page_cost=4\n> mismatches the \"raw\" characteristics on any existing hardware. On old\n> drives the sequential/random difference is way worse, on SSDs it's about\n> right. 
But then again, we know random_page_cost=1.5 or so works mostly\n> fine on SSDs, and that's much lower than just raw numbers.\n>\n> So it's clearly one thing to measure HW capabilities, and it's another\n> thing to conclude what the parameters should be ...\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nI recently tried something in this direction and the result looks\npromising based on my limited test.\n\nSince the unit of a xxx_cost is \"seq_page_cost\", then how to detect\nseq_page_cost is important. In the cost model, the IO cost of a seqscan is\nrel->pages * seq_page_cost, it doesn't consider any cache (file system\ncache or\nshared buffer cache). However, it assumes the OS will prefetch the IO. So\nto\ndetect the seq_page_cost, I enabled the prefetch but avoided the file system\ncache. I tested this with 1). drop the cache on the file system. 2). Open\nthe test\nfile without O_DIRECT so that the prefetch can work.\n\nTo detect the random page read, I read it with pread with a random offset.\nSince the random offsets may be the same as each other during the test,\nso even dropping the file system cache at the beginning doesn't work. 
so\nI open it with the O_DIRECT option.\n\nI also measure the cost of reading a page from a file system cache, during\nmy test, it is about 10% of a seq scan read.\n\nAfter I get the basic numbers about the hardware capability, I let the user\nprovide a cache hit ratio (This is a place where we can further improve if\nthis\nis a right direction).\n\nHere is the test result on my hardware.\n\nfs_cache_lat = 0.832025us, seq_read_lat = 8.570290us, random_page_lat =\n73.987732us\n\ncache hit ratio: 1.000000 random_page_cost 1.000000\ncache hit ratio: 0.900000 random_page_cost 5.073692\ncache hit ratio: 0.500000 random_page_cost 7.957589\ncache hit ratio: 0.100000 random_page_cost 8.551591\ncache hit ratio: 0.000000 random_page_cost 8.633049\n\n\nThen I tested the suggested value with the 10GB TPCH\nworkload. I compared the plans with 2 different settings random_page_cost =\n1). 4 is the default value) 2). 8.6 the cache hint ratio = 0 one. Then 11\nout of the 22\nqueries generated a different plan. At last I drop the cache (including\nboth\nfile system cache and shared_buffer) before run each query and run the 11\nqueries\nunder the 2 different settings. The execution time is below.\n\n\n| | random_page_cost=4 | random_page_cost=8.6 |\n|-----+--------------------+----------------------|\n| Q1 | 1425.964 | 1121.928 |\n| Q2 | 2553.072 | 2567.450 |\n| Q5 | 4397.514 | 1475.343 |\n| Q6 | 12576.985 | 4622.503 |\n| Q7 | 3459.777 | 2987.241 |\n| Q8 | 8360.995 | 8415.311 |\n| Q9 | 4661.842 | 2930.370 |\n| Q11 | 4885.289 | 2348.541 |\n| Q13 | 2610.937 | 1497.776 |\n| Q20 | 13218.122 | 10985.738 |\n| Q21 | 264.639 | 262.350 |\n\n\nThe attached main.c is the program I used to detect the\nrandom_page_cost. result.tar.gz is the test result, you can run a git log\nfirst\nto see the difference on plan or execution stat.\n\nAny feedback is welcome. Thanks!\n\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Fri, 18 Sep 2020 21:28:10 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dynamic gathering the values for seq_page_cost/xxx_cost"
},
{
"msg_contents": "On Fri, Sep 18, 2020 at 09:28:10PM +0800, Andy Fan wrote:\n>On Thu, Nov 28, 2019 at 12:48 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>wrote:\n>\n>> On Tue, Nov 26, 2019 at 08:59:22AM +0800, Andy Fan wrote:\n>> >The optimizer cost model usually needs 2 inputs, one is used to represent\n>> >data distribution and the other one is used to represent the capacity of\n>> >the hardware, like cpu/io let's call this one as system stats.\n>> >\n>> >In Oracle database, the system stats can be gathered with\n>> >dbms_stats.gather_system_stats [1] on the running hardware, In\n>> >postgresql, the value is set on based on experience (user can change the\n>> >value as well, but is should be hard to decide which values they should\n>> >use). The pg way is not perfect in theory(In practice, it may be good\n>> >enough or not). for example, HDD & SSD have different capacity regards\n>> to\n>> >seq_scan_cost/random_page_cost, cpu cost may also different on different\n>> >hardware as well.\n>> >\n>> >I run into a paper [2] which did some research on dynamic gathering the\n>> >values for xxx_cost, looks it is interesting. However it doesn't provide\n>> >the code for others to do more research. before I dive into this, It\n>> >would be great to hear some suggestion from experts.\n>> >\n>> >so what do you think about this method and have we have some discussion\n>> >about this before and the result?\n>> >\n>>\n>> IMHO it would be great to have a tool that helps with tuning those\n>> parameters, particularly random_page_cost. I'm not sure how feasible it\n>> is, though, but if you're willing to do some initial experiments and\n>> research, I think it's worth looking into.\n>>\n>> It's going to be challenging, though, because even random_page_cost=4\n>> mismatches the \"raw\" characteristics on any existing hardware. On old\n>> drives the sequential/random difference is way worse, on SSDs it's about\n>> right. 
But then again, we know random_page_cost=1.5 or so works mostly\n>> fine on SSDs, and that's much lower than just raw numbers.\n>>\n>> So it's clearly one thing to measure HW capabilities, and it's another\n>> thing to conclude what the parameters should be ...\n>>\n>>\n>> regards\n>>\n>> --\n>> Tomas Vondra http://www.2ndQuadrant.com\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>\n>\n>I recently tried something in this direction and the result looks\n>promising based on my limited test.\n>\n>Since the unit of a xxx_cost is \"seq_page_cost\", then how to detect\n>seq_page_cost is important. In the cost model, the IO cost of a seqscan is\n>rel->pages * seq_page_cost, it doesn't consider any cache (file system\n>cache or\n>shared buffer cache). However, it assumes the OS will prefetch the IO. So\n>to\n>detect the seq_page_cost, I enabled the prefetch but avoided the file system\n>cache. I tested this with 1). drop the cache on the file system. 2). Open\n>the test\n>file without O_DIRECT so that the prefetch can work.\n>\n>To detect the random page read, I read it with pread with a random offset.\n>Since the random offsets may be the same as each other during the test,\n>so even dropping the file system cache at the beginning doesn't work. 
so\n>I open it with the O_DIRECT option.\n>\n>I also measure the cost of reading a page from a file system cache, during\n>my test, it is about 10% of a seq scan read.\n>\n>After I get the basic numbers about the hardware capability, I let the user\n>provide a cache hit ratio (This is a place where we can further improve if\n>this\n>is a right direction).\n>\n>Here is the test result on my hardware.\n>\n>fs_cache_lat = 0.832025us, seq_read_lat = 8.570290us, random_page_lat =\n>73.987732us\n>\n>cache hit ratio: 1.000000 random_page_cost 1.000000\n>cache hit ratio: 0.900000 random_page_cost 5.073692\n>cache hit ratio: 0.500000 random_page_cost 7.957589\n>cache hit ratio: 0.100000 random_page_cost 8.551591\n>cache hit ratio: 0.000000 random_page_cost 8.633049\n>\n>\n>Then I tested the suggested value with the 10GB TPCH\n>workload. I compared the plans with 2 different settings random_page_cost =\n>1). 4 is the default value) 2). 8.6 the cache hint ratio = 0 one. Then 11\n>out of the 22\n>queries generated a different plan. At last I drop the cache (including\n>both\n>file system cache and shared_buffer) before run each query and run the 11\n>queries\n>under the 2 different settings. The execution time is below.\n>\n>\n>| | random_page_cost=4 | random_page_cost=8.6 |\n>|-----+--------------------+----------------------|\n>| Q1 | 1425.964 | 1121.928 |\n>| Q2 | 2553.072 | 2567.450 |\n>| Q5 | 4397.514 | 1475.343 |\n>| Q6 | 12576.985 | 4622.503 |\n>| Q7 | 3459.777 | 2987.241 |\n>| Q8 | 8360.995 | 8415.311 |\n>| Q9 | 4661.842 | 2930.370 |\n>| Q11 | 4885.289 | 2348.541 |\n>| Q13 | 2610.937 | 1497.776 |\n>| Q20 | 13218.122 | 10985.738 |\n>| Q21 | 264.639 | 262.350 |\n>\n>\n>The attached main.c is the program I used to detect the\n>random_page_cost. result.tar.gz is the test result, you can run a git log\n>first\n>to see the difference on plan or execution stat.\n>\n>Any feedback is welcome. Thanks!\n>\n\nThat seems pretty neat. 
What kind of hardware have you done these tests\non? It's probably worth testing on various other storage systems to see\nhow that applies to those.\n\nHave you tried existing I/O testing tools, e.g. fio? If your idea is to\npropose some built-in tool (similar to pg_test_fsync) then we probably\nshould not rely on external tools, but I wonder if we're getting the\nsame numbers.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 18 Sep 2020 15:50:33 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dynamic gathering the values for seq_page_cost/xxx_cost"
},
{
"msg_contents": "Hi Tomas:\n Thanks for checking.\n\nOn Fri, Sep 18, 2020 at 9:50 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> >I recently tried something in this direction and the result looks\n> >promising based on my limited test.\n> >\n> >Since the unit of a xxx_cost is \"seq_page_cost\", then how to detect\n> >seq_page_cost is important. In the cost model, the IO cost of a seqscan is\n> >rel->pages * seq_page_cost, it doesn't consider any cache (file system\n> >cache or\n> >shared buffer cache). However, it assumes the OS will prefetch the IO. So\n> >to\n> >detect the seq_page_cost, I enabled the prefetch but avoided the file\n> system\n> >cache. I tested this with 1). drop the cache on the file system. 2). Open\n> >the test\n> >file without O_DIRECT so that the prefetch can work.\n> >\n> >To detect the random page read, I read it with pread with a random offset.\n> >Since the random offsets may be the same as each other during the test,\n> >so even dropping the file system cache at the beginning doesn't work. so\n> >I open it with the O_DIRECT option.\n> >\n> >I also measure the cost of reading a page from a file system cache, during\n> >my test, it is about 10% of a seq scan read.\n> >\n> >After I get the basic numbers about the hardware capability, I let the\n> user\n> >provide a cache hit ratio (This is a place where we can further improve if\n> >this\n> >is a right direction).\n> >\n> >Here is the test result on my hardware.\n> >\n> >fs_cache_lat = 0.832025us, seq_read_lat = 8.570290us, random_page_lat =\n> >73.987732us\n> >\n> >cache hit ratio: 1.000000 random_page_cost 1.000000\n> >cache hit ratio: 0.900000 random_page_cost 5.073692\n> >cache hit ratio: 0.500000 random_page_cost 7.957589\n> >cache hit ratio: 0.100000 random_page_cost 8.551591\n> >cache hit ratio: 0.000000 random_page_cost 8.633049\n> >\n> >\n> >Then I tested the suggested value with the 10GB TPCH\n> >workload. 
I compared the plans with 2 different settings of random_page_cost:\n> >1). 4 (the default value) 2). 8.6 (the cache hit ratio = 0 one). Then\n> 11\n> >out of the 22\n> >queries generated a different plan. At last I drop the cache (including\n> >both\n> >file system cache and shared_buffer) before running each query and run the 11\n> >queries\n> >under the 2 different settings. The execution time is below.\n> >\n> >\n> >| | random_page_cost=4 | random_page_cost=8.6 |\n> >|-----+--------------------+----------------------|\n> >| Q1 | 1425.964 | 1121.928 |\n> >| Q2 | 2553.072 | 2567.450 |\n> >| Q5 | 4397.514 | 1475.343 |\n> >| Q6 | 12576.985 | 4622.503 |\n> >| Q7 | 3459.777 | 2987.241 |\n> >| Q8 | 8360.995 | 8415.311 |\n> >| Q9 | 4661.842 | 2930.370 |\n> >| Q11 | 4885.289 | 2348.541 |\n> >| Q13 | 2610.937 | 1497.776 |\n> >| Q20 | 13218.122 | 10985.738 |\n> >| Q21 | 264.639 | 262.350 |\n> >\n> >\n> >The attached main.c is the program I used to detect the\n> >random_page_cost. result.tar.gz is the test result; you can run a git log\n> >first\n> >to see the difference in plan or execution stats.\n> >\n> >Any feedback is welcome. Thanks!\n> >\n>\n> That seems pretty neat. 
What kind of hardware have you done these tests\n> on?\n\n\nThe following is my hardware info.\n\nI have 12 SSD behind the MR9271-8i RAID Controller which has a 1GB buffer.\n[1]\n\nroot# lshw -short -C disk\nH/W path Device Class Description\n==============================================================\n/0/100/2/0/2.0.0 /dev/sda disk 2398GB MR9271-8i\n/0/100/2/0/2.1.0 /dev/sdb disk 5597GB MR9271-8i <-- my\ndata location\n\n\n/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL\n\nAdapter #0\n\nMemory Size : 1024MB\nRAID Level : Primary-5, Secondary-0, RAID Level Qualifier-3\n..\nCurrent Cache Policy: WriteBack, ReadAheadNone, Direct, Write Cache OK if\nBad\nBBU\n...\n Device Present\n ================\nVirtual Drives : 2\n Degraded : 0\n Offline : 0\nPhysical Devices : 14\n Disks : 12\n Critical Disks : 0\n Failed Disks : 0\n\n\nroot# /opt/MegaRAID/MegaCli/MegaCli64 -LdPdInfo -a0 | egrep 'Media\nType|Raw Size'\nRaw Size: 745.211 GB [0x5d26ceb0 Sectors]\nMedia Type: Solid State Device\nRaw Size: 745.211 GB [0x5d26ceb0 Sectors]\nMedia Type: Solid State Device\nRaw Size: 745.211 GB [0x5d26ceb0 Sectors]\nMedia Type: Solid State Device\nRaw Size: 745.211 GB [0x5d26ceb0 Sectors]\nMedia Type: Solid State Device\nRaw Size: 745.211 GB [0x5d26ceb0 Sectors]\nMedia Type: Solid State Device\nRaw Size: 745.211 GB [0x5d26ceb0 Sectors]\nMedia Type: Solid State Device\nRaw Size: 745.211 GB [0x5d26ceb0 Sectors]\nMedia Type: Solid State Device\nRaw Size: 745.211 GB [0x5d26ceb0 Sectors]\nMedia Type: Solid State Device\nRaw Size: 745.211 GB [0x5d26ceb0 Sectors]\nMedia Type: Solid State Device\nRaw Size: 745.211 GB [0x5d26ceb0 Sectors]\nMedia Type: Solid State Device\nRaw Size: 745.211 GB [0x5d26ceb0 Sectors]\nMedia Type: Solid State Device\nRaw Size: 745.211 GB [0x5d26ceb0 Sectors]\nMedia Type: Solid State Device\n\nCPU: Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz, 32 processors.\nMemory: 251 GB\nLinux: 3.10.0-327\nfs: ext4. 
mount options: defaults,noatime,nodiratime,nodelalloc,barrier=0\nPhysical machine.\n\nIt's probably worth testing on various other storage systems to see\n> how that applies to those.\n>\n> Yes, I can test more on new hardware once I get it. Now it is still in\nprogress.\nHowever I can only get a physical machine with SSD or a virtual machine with\nSSD; other types are hard for me right now.\n\n\nHave you tried existing I/O testing tools, e.g. fio? If your idea is to\n> propose some built-in tool (similar to pg_test_fsync) then we probably\n> should not rely on external tools, but I wonder if we're getting the\n> same numbers.\n>\n\nThanks for this hint, I found more interesting stuff during the comparison.\n\nI define the FIO jobs as below.\n\nrandom_page_cost.job:\n[global]\nblocksize=8k\nsize=1Gi\nfilesize=1Gi\nioengine=sync\ndirectory=/u01/yizhi/data/fio\n\n[random_page_cost]\ndirect=1\nreadwrite=randread\n\n\nEven though it is direct IO, the device cache still plays an important\npart. The device cache is filled in the test data file preparation stage.\nI invalidate the device cache by writing a new dummy file. At last the avg\nlatency time is 148 us.\n\n\nseq.job\n\n[global]\nblocksize=8k\nsize=1Gi\nfilesize=1Gi\nioengine=sync\ndirectory=/u01/yizhi/data/fio\n\n[seq_page_cost]\nbuffered=1\nreadwrite=read\n\nFor seq read, we need buffered IO for prefetch; however, we need to bypass\nthe file\nsystem cache and device cache. fio has no control over such caches, so I did:\n\n1). Run fio to generate the test file.\n2). Invalidate the device cache first with dd if=/dev/zero of=a_dummy_file\nbs=1048576 count=1024\n3). Drop the file system cache.\n4). Run fio again.\n\nThe final avg latency is ~12 us.\n\nThis is a 1.5 ~ 2X difference from my previous result. 
(seq_read_lat =\n8.570290us, random_page_lat =\n73.987732us)\n\nHere are some changes for my detection program.\n\n| | seq_read_lat (us) |\nrandom_read_lat (us) |\n| FIO | 12 |\n 148 |\n| Previous main.c | 8.5 |\n 74 |\n| invalidate_device_cache before each testing | 9 |\n 150 |\n| prepare the test data file with O_DIRECT option | 15 |\n 150 |\n\nIn invalidate_device_cache, I just create another 1GB data file and read\nit. (see invalidate_device_cache function) this is similar as the previous\nfio setup.\n\nprepare test data file with O_DIRECT option means in the past, I prepare\nthe test\nfile with buffer IO. and before testing, I do invalidate device cache, file\nsystem cache. but the buffered prepared file still get better performance, I\nhave no idea of it. Since I don't want any cache. I use O_DIRECT\noption at last. The seq_read_lat changed from 9us to 15us.\nI still can't find out the 25% difference with the FIO result. (12 us vs 9\nus).\n\nAt last, the random_page_cost happens to not change very much.\n\n/u/y/g/fdirect> sudo ./main\nfs_cache_lat = 0.569031us, seq_read_lat = 18.901749us, random_page_lat =\n148.650589us\n\ncache hit ratio: 1.000000 random_page_cost 1.000000\ncache hit ratio: 0.900000 random_page_cost 6.401019\ncache hit ratio: 0.500000 random_page_cost 7.663772\ncache hit ratio: 0.100000 random_page_cost 7.841498\ncache hit ratio: 0.000000 random_page_cost 7.864383\n\nThis result looks much different from \"we should use 1.1 ~ 1.5 for SSD\".\n\nThe attached is the modified detection program.\n\n\n[1]\nhttps://www.cdw.com/product/lsi-megaraid-sas-9271-8i-storage-controller-raid-sas-pcie-3.0-x8/4576538#PO\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Mon, 21 Sep 2020 11:41:09 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dynamic gathering the values for seq_page_cost/xxx_cost"
},
{
"msg_contents": "On Mon, Sep 21, 2020 at 9:11 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> Here are some changes for my detection program.\n>\n> | | seq_read_lat (us) | random_read_lat (us) |\n> | FIO | 12 | 148 |\n> | Previous main.c | 8.5 | 74 |\n> | invalidate_device_cache before each testing | 9 | 150 |\n> | prepare the test data file with O_DIRECT option | 15 | 150 |\n>\n> In invalidate_device_cache, I just create another 1GB data file and read\n> it. (see invalidate_device_cache function) this is similar as the previous fio setup.\n>\n> prepare test data file with O_DIRECT option means in the past, I prepare the test\n> file with buffer IO. and before testing, I do invalidate device cache, file\n> system cache. but the buffered prepared file still get better performance, I\n> have no idea of it. Since I don't want any cache. I use O_DIRECT\n> option at last. The seq_read_lat changed from 9us to 15us.\n> I still can't find out the 25% difference with the FIO result. (12 us vs 9 us).\n>\n> At last, the random_page_cost happens to not change very much.\n>\n> /u/y/g/fdirect> sudo ./main\n> fs_cache_lat = 0.569031us, seq_read_lat = 18.901749us, random_page_lat = 148.650589us\n>\n> cache hit ratio: 1.000000 random_page_cost 1.000000\n> cache hit ratio: 0.900000 random_page_cost 6.401019\n> cache hit ratio: 0.500000 random_page_cost 7.663772\n> cache hit ratio: 0.100000 random_page_cost 7.841498\n> cache hit ratio: 0.000000 random_page_cost 7.864383\n>\n> This result looks much different from \"we should use 1.1 ~ 1.5 for SSD\".\n>\n\nVery interesting. Thanks for working on this. In an earlier email you\nmentioned that TPCH plans changed to efficient ones when you changed\nrandom_page_cost = =8.6 from 4 and seq_page_cost was set to 1. IIUC,\nsetting random_page_cost to seq_page_cost to the same ratio as that\nbetween the corresponding latencies improved the plans. 
How about\ntrying this with that ratio set to the one obtained from the latencies\nprovided by FIO? Do we see any better plans?\n\npage cost is one thing, but there are CPU costs also involved in costs\nof expression evaluation. Should those be changed accordingly to the\nCPU latency?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 21 Sep 2020 18:33:35 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dynamic gathering the values for seq_page_cost/xxx_cost"
},
{
"msg_contents": "Thanks Ashutosh for coming:)\n\nOn Mon, Sep 21, 2020 at 9:03 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Mon, Sep 21, 2020 at 9:11 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > Here are some changes for my detection program.\n> >\n> > | | seq_read_lat (us) |\n> random_read_lat (us) |\n> > | FIO | 12 |\n> 148 |\n> > | Previous main.c | 8.5 |\n> 74 |\n> > | invalidate_device_cache before each testing | 9 |\n> 150 |\n> > | prepare the test data file with O_DIRECT option | 15 |\n> 150 |\n> >\n> > In invalidate_device_cache, I just create another 1GB data file and read\n> > it. (see invalidate_device_cache function) this is similar as the\n> previous fio setup.\n> >\n> > prepare test data file with O_DIRECT option means in the past, I prepare\n> the test\n> > file with buffer IO. and before testing, I do invalidate device cache,\n> file\n> > system cache. but the buffered prepared file still get better\n> performance, I\n> > have no idea of it. Since I don't want any cache. I use O_DIRECT\n> > option at last. The seq_read_lat changed from 9us to 15us.\n> > I still can't find out the 25% difference with the FIO result. (12 us vs\n> 9 us).\n> >\n> > At last, the random_page_cost happens to not change very much.\n> >\n> > /u/y/g/fdirect> sudo ./main\n> > fs_cache_lat = 0.569031us, seq_read_lat = 18.901749us, random_page_lat =\n> 148.650589us\n> >\n> > cache hit ratio: 1.000000 random_page_cost 1.000000\n> > cache hit ratio: 0.900000 random_page_cost 6.401019\n> > cache hit ratio: 0.500000 random_page_cost 7.663772\n> > cache hit ratio: 0.100000 random_page_cost 7.841498\n> > cache hit ratio: 0.000000 random_page_cost 7.864383\n> >\n> > This result looks much different from \"we should use 1.1 ~ 1.5 for SSD\".\n> >\n>\n> Very interesting. Thanks for working on this. In an earlier email you\n> mentioned that TPCH plans changed to efficient ones when you changed\n> random_page_cost = =8.6 from 4 and seq_page_cost was set to 1. 
IIUC,\n> setting the random_page_cost to seq_page_cost ratio to the same ratio as that\n> between the corresponding latencies improved the plans.\n\n\nYes.\n\nHow about\n> trying this with that ratio set to the one obtained from the latencies\n> provided by FIO? Do we see any better plans?\n>\n\nMy tools set the random_page_cost to 8.6, but based on the fio data, it\nshould be\nset to 12.3 on the same hardware, and I do see the better plan as well\nwith 12.3.\nLooks too smooth to believe it is true..\n\nThe attached result_fio_mytool.tar.gz is my test result. git show HEAD^^\nshows the original plan with 8.6, git show HEAD^ shows the plan changes after\nwe changed\nthe random_page_cost, and git show HEAD shows the run time statistics changes\nfor these queries.\nI also uploaded the test tool [1] for this so you can double-check.\n\n\n| | 8.6 | 12.3 |\n|-----+----------+----------|\n| Q2 | 2557.064 | 2444.995 |\n| Q4 | 3544.606 | 3148.884 |\n| Q7 | 2965.820 | 2240.185 |\n| Q14 | 4988.747 | 4931.375 |\n\n>\n> page cost is one thing, but there are CPU costs also involved in costs\n> of expression evaluation. Should those be changed accordingly to the\n> CPU latency?\n>\n\nYes, we need that as well. At the beginning of this thread, I treated all\nof them equally.\nIn the first reply of Tomas, he mentioned random_page_cost specially. After\n~10 months, I tested TPCH on a piece of hardware and found random_page_cost\nwas set incorrectly; after fixing it, the result looks much better. So\nI'd like to work\non this special thing first.\n\n[1]\nhttps://github.com/zhihuiFan/tpch-postgres/blob/master/random_page_cost.sh\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Tue, 22 Sep 2020 13:26:58 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dynamic gathering the values for seq_page_cost/xxx_cost"
},
{
"msg_contents": ">\n>\n> It's probably worth testing on various other storage systems to see\n>> how that applies to those.\n>>\n>> Yes, I can test more on new hardware once I get it. Now it is still in\n> progress.\n> However I can only get a physical machine with SSD or Virtual machine with\n> SSD, other types are hard for me right now.\n>\n>\nHere is a result on different hardware. The test method is still not\nchanged.[1]\n\nHardware Info:\n\nVirtual Machine with 61GB memory.\nLinux Kernel: 5.4.0-31-generic Ubuntu\n\n# lshw -short -C disk\nH/W path            Device     Class      Description\n=====================================================\n/0/100/4/0          /dev/vda   disk       42GB Virtual I/O device\n/0/100/5/0          /dev/vdb   disk       42GB Virtual I/O device\n\nThe disk on the physical machine is claimed as SSD.\n\nThis time the FIO and my tools can generate the exact same result.\n\nfs_cache_lat = 0.957756us, seq_read_lat = 70.780327us, random_page_lat =\n438.837257us\n\ncache hit ratio: 1.000000 random_page_cost 1.000000\ncache hit ratio: 0.900000 random_page_cost 5.635470\ncache hit ratio: 0.500000 random_page_cost 6.130565\ncache hit ratio: 0.100000 random_page_cost 6.192183\ncache hit ratio: 0.000000 random_page_cost 6.199989\n\n|         | seq_read_lat(us) | random_read_lat(us) |\n| FIO     | 70               | 437                 |\n| MY Tool | 70               | 438                 |\n\nThe following query plans have changed because we changed random_page_cost\nfrom 4\nto 6.2; the execution time also changed.\n\n|     | random_page_cost=4 | random_page_cost=6.2 |\n|-----+--------------------+----------------------|\n| Q1  | 2561               | 2528.272             |\n| Q10 | 4675.749           | 4684.225             |\n| Q13 | 18858.048          | 18565.929            |\n| Q2  | 329.279            | 308.723              |\n| Q5  | 46248.132          | 7900.173             |\n| Q6  | 52526.462          | 47639.503            |\n| Q7  | 27348.900          | 25829.221            |\n\nQ5 improved by 5.8 times and Q6 & Q7 improved by ~10%.\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWpRv50k8E3tC3tiLWGe2DbKaoZricRh_YJ8y_zK%2BHdSjQ%40mail.gmail.com\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Tue, 22 Sep 2020 14:19:00 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dynamic gathering the values for seq_page_cost/xxx_cost"
},
{
"msg_contents": "On Tue, Sep 22, 2020 at 10:57 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>\n> My tools set the random_page_cost to 8.6, but based on the fio data, it should be\n> set to 12.3 on the same hardware. and I do see the better plan as well with 12.3.\n> Looks too smooth to believe it is true..\n>\n> The attached result_fio_mytool.tar.gz is my test result. You can use git show HEAD^^\n> is the original plan with 8.6. git show HEAD^ show the plan changes after we changed\n> the random_page_cost. git show HEAD shows the run time statistics changes for these queries.\n> I also uploaded the test tool [1] for this for your double check.\n\nThe scripts seem to start and stop the server, drop caches for every\nquery. That's where you are seeing that setting random_page_cost to\nfio based ratio provides better plans. But in practice, these costs\nneed to be set on a server where the queries are run concurrently and\nrepeatedly. That's where the caching behaviour plays an important\nrole. Can we write a tool which can recommend costs for that scenario?\nHow do the fio based cost perform when the queries are run repeatedly?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 25 Sep 2020 14:45:05 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dynamic gathering the values for seq_page_cost/xxx_cost"
},
{
"msg_contents": "On Fri, Sep 25, 2020 at 5:15 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Tue, Sep 22, 2020 at 10:57 AM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n> >\n> >\n> > My tools set the random_page_cost to 8.6, but based on the fio data, it\n> should be\n> > set to 12.3 on the same hardware. and I do see the better plan as well\n> with 12.3.\n> > Looks too smooth to believe it is true..\n> >\n> > The attached result_fio_mytool.tar.gz is my test result. You can use\n> git show HEAD^^\n> > is the original plan with 8.6. git show HEAD^ show the plan changes\n> after we changed\n> > the random_page_cost. git show HEAD shows the run time statistics\n> changes for these queries.\n> > I also uploaded the test tool [1] for this for your double check.\n>\n> The scripts seem to start and stop the server, drop caches for every\n> query. That's where you are seeing that setting random_page_cost to\n> fio based ratio provides better plans. But in practice, these costs\n> need to be set on a server where the queries are run concurrently and\n> repeatedly. That's where the caching behaviour plays an important\n\nrole. Can we write a tool which can recommend costs for that scenario?\n\n\nI totally agree with this. Actually the first thing I did is to define a\nproper IO workload. At the very beginning, I used DIRECT_IO for both seq\nread\nand random read on my SSD, and then found the result is pretty bad per\ntesting\n(random_page_cost = ~1.6). then I realized postgresql relies on the\nprefetch\nwhich is disabled by DIRECT_IO. After I fixed this, I tested again with the\nabove\nscenario (cache hit ratio = 0) to verify my IO model. Per testing, it looks\ngood.\nI am also thinking if the random_page_cost = 1.1 doesn't provide a good\nresult\non my SSD because it ignores the prefects of seq read.\n\nAfter I am OK with my IO model, I test with the way you see above. 
but\nI also detect the latency for file system cache hit, which is handled by\nget_fs_cache_latency_us in my code (I ignored the shared buffer hits for\nnow).\nand allows user to provides a cache_hit_ratio, the final random_page_cost\n= (real_random_lat) / real_seq_lat, where\nreal_xxx_lat = cache_hit_ratio * fs_cache_lat + (1 - cache_hit_ratio) *\nxxx_lat.\nSee function cal_real_lat and cal_random_page_cost.\n\nAs for the testing with cache considered, I found how to estimate cache hit\nratio is hard or how to control a hit ratio to test is hard. Recently I am\nthinking\na method that we can get a page_reads, shared_buffer_hit from pg_kernel\nand the real io (without the file system cache hit) at os level (just as\nwhat\niotop/pidstat do). then we can know the shared_buffer hit ratio and file\nsystem\ncache hit ratio (assume it will be stable after a long run). and then do a\ntesting.\nHowever this would be another branch of manual work and I still have not got\nit done until now.\n\nI'd not like to share too many details, but \"lucky\" many cases I\nhave haven't file\nsystem cache, that makes things a bit easier. What I am doing right now is\nto\ncalculate the random_page_cost with the above algorithm with only\nshared_buffer\nconsidered. 
and test the real benefits with real workload to see how it\nworks.\nIf it works well, I think the only thing left is to handle file system\ncache.\n\nThe testing is time consuming since I have to cooperate with many site\nengineers,\nso any improvement on the design will be much helpful.\n\n\n> How do the fio based cost perform when the queries are run repeatedly?\n>\n>\nThat probably is not good since I have 280G+ file system cache and I have to\nprepare much more than 280G data size for testing.\n\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Sat, 26 Sep 2020 08:16:52 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dynamic gathering the values for seq_page_cost/xxx_cost"
},
{
"msg_contents": "On Sat, Sep 26, 2020 at 8:17 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> As for the testing with cache considered, I found how to estimate cache hit\n> ratio is hard or how to control a hit ratio to test is hard. Recently I am thinking\n> a method that we can get a page_reads, shared_buffer_hit from pg_kernel\n> and the real io (without the file system cache hit) at os level (just as what\n> iotop/pidstat do). then we can know the shared_buffer hit ratio and file system\n> cache hit ratio (assume it will be stable after a long run). and then do a testing.\n> However this would be another branch of manual work and I still have not got\n> it done until now.\n\nFWIW pg_stat_kcache [1] extension accumulates per (database, user,\nqueryid) physical reads and writes, so you can easily compute a\nshared_buffers / IO cache / disk hit ratio.\n\n[1] https://github.com/powa-team/pg_stat_kcache\n\n\n",
"msg_date": "Sat, 26 Sep 2020 13:51:37 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dynamic gathering the values for seq_page_cost/xxx_cost"
},
{
"msg_contents": "On Sat, Sep 26, 2020 at 1:51 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Sat, Sep 26, 2020 at 8:17 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > As for the testing with cache considered, I found how to estimate cache\n> hit\n> > ratio is hard or how to control a hit ratio to test is hard. Recently I\n> am thinking\n> > a method that we can get a page_reads, shared_buffer_hit from pg_kernel\n> > and the real io (without the file system cache hit) at os level (just as\n> what\n> > iotop/pidstat do). then we can know the shared_buffer hit ratio and file\n> system\n> > cache hit ratio (assume it will be stable after a long run). and then do\n> a testing.\n> > However this would be another branch of manual work and I still have not\n> got\n> > it done until now.\n>\n> FWIW pg_stat_kcache [1] extension accumulates per (database, user,\n> queryid) physical reads and writes, so you can easily compute a\n> shared_buffers / IO cache / disk hit ratio.\n>\n> [1] https://github.com/powa-team/pg_stat_kcache\n>\n\nWOW, this would be a good extension for this purpose. Thanks for sharing\nit.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Sat, 26 Sep 2020 15:06:17 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dynamic gathering the values for seq_page_cost/xxx_cost"
}
]
[
{
"msg_contents": "Hi,\nI know it's very hard, but it is possible. It just needs someone with the knowledge to do it.\n\nHere is a proof of concept:\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#define MAXPGPATH 256\n\nint main(void)\n{\n\tchar\t\ttbsoid[MAXPGPATH];\n\tchar\t\tstr[MAXPGPATH];\n\tint\t\t\tch,\n\t\t\t\tprev_ch = -1,\n\t\t\t\ti = 0,\n\t\t\t\tn;\n    FILE * lfp;\n\n    lfp = fopen(\"c:\\\\tmp\\\\crash.dat\", \"rb\");\n    if (lfp == NULL)\n        return 1;\n\twhile ((ch = fgetc(lfp)) != EOF)\n\t{\n\t\tif ((ch == '\\n' || ch == '\\r') && prev_ch != '\\\\')\n\t\t{\n\t\t\tstr[i] = '\\0';\n\t\t\tif (sscanf(str, \"%s %n\", tbsoid, &n) != 1) {\n                printf(\"tbsoid size=%zu\\n\", strlen(tbsoid));\n                printf(\"tbsoid=%s\\n\", tbsoid);\n                exit(1);\n            }\n\t\t\ti = 0;\n\t\t\tcontinue;\n\t\t}\n\t\telse if ((ch == '\\n' || ch == '\\r') && prev_ch == '\\\\')\n\t\t\tstr[i - 1] = ch;\n\t\telse\n\t\t\tstr[i++] = ch;\n\t\tprev_ch = ch;\n\t}\n    fclose(lfp);\n}\n\nOverflow with (MAXPGPATH=256)\nC:\\usr\\src\\tests\\scanf>sscanf3\ntbsoid size=260\ntbsoid=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\nxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\nxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\nxxxxxxxxxxxxxxxxxxxxxxxxxxx\n\nNow with patch:\nC:\\usr\\src\\tests\\scanf>sscanf3\ntbsoid size=255\ntbsoid=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\nxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\nxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\nxxxxxxxxxxxxxxxxxxxxxx\n\nThe solution is simple, but clumsy. I hope that is enough.\nsscanf(str, \"%1023s %n\", tbsoid, &n)\n\nBest regards.\nRanier Vilela",
"msg_date": "Tue, 26 Nov 2019 01:51:30 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Fix possible string overflow with sscanf (xlog.c)"
}
]
[
{
"msg_contents": "Hi,\nThe var pageop is assigned twice; maybe it is a mistake?\nThe assignment at line 593 has no effect?\n\nRanier Vilela",
"msg_date": "Tue, 26 Nov 2019 12:13:16 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Remove twice assignment with var pageop (nbtree.c)."
},
{
"msg_contents": "Same case in nbtpage.c at line 1637, with var opaque.\nmake check passed all 195 tests here with all commits.\n\nRanier Vilela",
"msg_date": "Tue, 26 Nov 2019 13:45:10 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Remove twice assignment with var pageop (nbtree.c)."
},
{
"msg_contents": "On Tue, Nov 26, 2019 at 01:45:10PM +0000, Ranier Vilela wrote:\n> Same case on nbtpage.c at line 1637, with var opaque.\n> make check, passed all 195 tests here with all commits.\n> \n> Ranier Vilela\n\nYou were right about both of these, so removed in master. I am\nsurprised no one else saw this before.\n\n---------------------------------------------------------------------------\n\n> diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c\n> index 268f869a36..144fefccad 100644\n> --- a/src/backend/access/nbtree/nbtpage.c\n> +++ b/src/backend/access/nbtree/nbtpage.c\n> @@ -1634,8 +1634,6 @@ _bt_mark_page_halfdead(Relation rel, Buffer leafbuf, BTStack stack)\n> \t * delete the following item.\n> \t */\n> \tpage = BufferGetPage(topparent);\n> -\topaque = (BTPageOpaque) PageGetSpecialPointer(page);\n> -\n> \titemid = PageGetItemId(page, topoff);\n> \titup = (IndexTuple) PageGetItem(page, itemid);\n> \tBTreeInnerTupleSetDownLink(itup, rightsib);\n> \n\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 19 Dec 2019 10:33:54 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Remove twice assignment with var pageop (nbtree.c)."
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Tue, Nov 26, 2019 at 01:45:10PM +0000, Ranier Vilela wrote:\n>> Same case on nbtpage.c at line 1637, with var opaque.\n>> make check, passed all 195 tests here with all commits.\n\n> You were right about both of these, so removed in master. I am\n> surprised no one else saw this before.\n\nI don't think this is actually a good idea. What it is is a foot-gun,\nbecause if anyone adds code there that wants to access the special area\nof that particular page, it'll do the wrong thing, unless they remember\nto put back the assignment of \"opaque\". The sequence of BufferGetPage()\nand PageGetSpecialPointer() is a very standard switch-our-attention-\nto-another-page locution in nbtree and other index AMs.\n\nAny optimizing compiler will delete the dead store, we do not have\nto do it by hand.\n\nLet me put it this way: if we had the BufferGetPage() and\nPageGetSpecialPointer() calls wrapped up as an \"access new page\" macro,\nwould we undo that in order to make this code change? We would not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Dec 2019 10:55:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Remove twice assignment with var pageop (nbtree.c)."
},
{
"msg_contents": "On Thu, Dec 19, 2019 at 10:55:42AM -0500, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Tue, Nov 26, 2019 at 01:45:10PM +0000, Ranier Vilela wrote:\n> >> Same case on nbtpage.c at line 1637, with var opaque.\n> >> make check, passed all 195 tests here with all commits.\n> \n> > You were right about both of these, so removed in master. I am\n> > surprised no one else saw this before.\n> \n> I don't think this is actually a good idea. What it is is a foot-gun,\n> because if anyone adds code there that wants to access the special area\n> of that particular page, it'll do the wrong thing, unless they remember\n> to put back the assignment of \"opaque\". The sequence of BufferGetPage()\n> and PageGetSpecialPointer() is a very standard switch-our-attention-\n> to-another-page locution in nbtree and other index AMs.\n\nOh, I was not aware that was boilerplate code. I agree it should be\nconsistent, so patch reverted. Sorry.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 19 Dec 2019 11:19:59 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Remove twice assignment with var pageop (nbtree.c)."
},
{
"msg_contents": "From: Bruce Momjian <bruce@momjian.us>\nSent: Thursday, December 19, 2019 16:19\n\n>Oh, I was not aware that was boilerplate code. I agree it should be\n>consistent, so patch reverted. Sorry.\nI apologize to you, Bruce.\nIt is difficult to define where \"boilerplate\" exists.\nI agree that a decent compiler will remove it; maybe MSVC will not, but that's another story...\n\nBest regards,\nRanier Vilela\n\n",
"msg_date": "Thu, 19 Dec 2019 16:41:44 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Remove twice assignment with var pageop (nbtree.c)."
},
{
"msg_contents": "On Thu, Dec 19, 2019 at 7:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't think this is actually a good idea. What it is is a foot-gun,\n> because if anyone adds code there that wants to access the special area\n> of that particular page, it'll do the wrong thing, unless they remember\n> to put back the assignment of \"opaque\". The sequence of BufferGetPage()\n> and PageGetSpecialPointer() is a very standard switch-our-attention-\n> to-another-page locution in nbtree and other index AMs.\n\n+1\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 19 Dec 2019 10:05:18 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Remove twice assignment with var pageop (nbtree.c)."
}
]